Wednesday, 18 June 2008

No Mouse or Keyboard detected - press F1 to continue

I've been wondering how much longer the vast majority of people who use computer based equipment will continue to use a mouse and keyboard as their primary input devices.

iPhone users, you probably know what I mean already, huh?

While the keyboard has served us well, and the mouse is great for pointing and clicking, neither of them strikes me as an intuitive way into deeply immersive user interfaces; and that's probably because of the misdirection involved in using them. When I type at my PC, I look at the screen. My fingers tap away at a keyboard, and it is only through years of practice (and of classical piano training) that I can quickly reconcile the fact that the input device requires separate neural processing from the output device.

Likewise with the mouse. For navigating around two-dimensional, hypertextually linked applications it's OK, I guess, but as soon as I want to draw something, or zoom in on an image or a section of the screen, I'm a bit stuck, and often have to co-ordinate keyboard and mouse to make things work. Mac users (at least, those who haven't bothered to get a two-button mouse) know this more than most - press CTRL for right click? Who thought that up? I understand that the Mac is partly about making simple tasks simple, but I reckon most people can grok at an intuitive level the concept of "left click for action, right click for context".

That said, I've never quite mastered the middle button on a mouse, except when it doubles as a wheel. What was that third button for? It's like the third pedal you occasionally come across on a piano. What the hell is that for? I never saw a piece of music that refers to it in 14 years of classical training, or since.

I digress. I saw this little Minority-Touch-esque video yesterday and wondered if the time is right yet to reposition my career as a "Multi Dimension & Multi Touch Interface Human Design Specialist".

That's a career which doesn't yet exist. But it will.

Now naysayers will come along and tell me that they don't anticipate that they'll ever stop typing. Folk were loath to leave quill and ink behind too. I must admit, I don't quite know what will replace my 70+ words per minute typing speed with something that doesn't require me to dictate (I still haven't got my head round that one, not for want of trying - the words seem to flow better when I type, and I don't have to go back and delete all the "ums" and "erms" and "ahs").

But what I do know is that the amount of time I spend interacting with data visually is increasing.

I love my Squeezebox; it's a great way of getting audio round the house without having to run computers or hard drives in the front room. The next version, the Squeezebox Duet, has this great remote control which allows me to navigate by album cover. The remote itself is a Wi-Fi enabled device. (I'll post more on exposing your music collection over the internet using SqueezeCenter another time.)

Squeezebox Duet Network Music System

But it's still not quite CoverFlow-As-Remote-Control. And CoverFlow still sucks at letting me browse through my extensive (3,000+ albums) digital music collection.

Now, what I want is an interface that looks like a CD collection, but in glorious Multi-D. One where I can navigate based on all manner of factors, zoom in deep, move from one "room" to another, jump straight to an artist's web site, or enjoy all manner of interactions when I treat items in my music collection as Social Objects. And I want to manage all this on a number of devices, from a wall-mounted screen, to a PDA, to a headset-based experience (you have to check this out).
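To make the "navigate based on all manner of factors" idea concrete, here's a minimal sketch of a collection treated as a filterable, multi-facet space rather than a flat strip of covers. The album fields and helper names are invented for illustration; nothing here is tied to any real product.

```javascript
// Hypothetical sketch: a music collection as a multi-facet space.
// Each facet is just a predicate; "zooming in" composes predicates.
const albums = [
  { artist: "Miles Davis", title: "Kind of Blue", year: 1959, genre: "jazz" },
  { artist: "Radiohead", title: "OK Computer", year: 1997, genre: "rock" },
  { artist: "Radiohead", title: "In Rainbows", year: 2007, genre: "rock" },
];

// Build a predicate that matches one field/value pair.
function facet(field, value) {
  return (album) => album[field] === value;
}

// Apply any number of facets in sequence - each one narrows the view.
function zoomIn(collection, ...facets) {
  return facets.reduce((items, f) => items.filter(f), collection);
}

// "Zoom" into one corner of the collection: Radiohead's rock albums.
const shelf = zoomIn(albums, facet("artist", "Radiohead"), facet("genre", "rock"));
console.log(shelf.map((a) => a.title)); // ["OK Computer", "In Rainbows"]
```

The point is that once facets are composable like this, the same data can back a wall screen, a PDA, or a headset - only the rendering changes.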

Over the last decade and a half of my career in software I've been building flat, boring user interfaces, either for the web or for the desktop. The next generation is not gonna be happy with that. They will expect their over-specced, highly connected, under-priced equipment to do more for them than that. They'll be comfortable with augmented reality (and may even be as lost without their overlay glasses as they are today without their mobile phones). They'll be over the concept of "media ownership" and will expect things to be shareable. Like playing Pong over multiple iPhones.

I know for a fact that there are not nearly enough User Experience (UX) specialists in the world. And this is probably because all the stuff beneath the UX - data persistence, graphics rendering engines, networks, systems integration and so on - has been more than enough for the world of Software Engineers for the last 15 years or so. But I foresee a world coming, and soon, where many of the Hard Data Problems start to disappear, where aggregation and rendering of data cease to happen server side, and instead smart, graphically rich clients with multi-touch interfaces bring it all together in the user experience. What is a software guy to do? Well, until the tools and frameworks are at a high enough level that anyone can build those experiences, I think there is still work for us.

So, blogosphere, if I were to reinvent myself as a "Multi Dimension & Multi Touch Interface Human Design Specialist", where should I start? What should I read? What should I be learning? Bear in mind that I have very little graphic design experience, and it's not really prettifying stuff that interests me; it's making things usable in ways we are not yet familiar with.

Here are some more things that inspire me in this space:

Silverlight Deep Zoom

3D in Flash v.Next

onorientationchange

multitouch in javascript

Microsoft LaserTouch

Project Looking Glass (seems dormant?)
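Since a couple of those links are about multitouch in JavaScript, here's a minimal sketch of the geometry at the heart of a pinch-to-zoom gesture: the scale factor is just the ratio of the distance between two fingers now versus when the gesture began. The browser wiring is noted only in comments and should be treated as an assumption about the touch-event model, not a tested recipe.

```javascript
// Euclidean distance between two touch points {x, y}.
function touchDistance(t1, t2) {
  const dx = t2.x - t1.x;
  const dy = t2.y - t1.y;
  return Math.sqrt(dx * dx + dy * dy);
}

// Zoom factor for a two-finger pinch: current spread / starting spread.
function pinchScale(startTouches, currentTouches) {
  return touchDistance(currentTouches[0], currentTouches[1]) /
         touchDistance(startTouches[0], startTouches[1]);
}

// In a browser you'd wire this up to touch events, roughly:
//   element.addEventListener("touchmove", (e) => {
//     if (e.touches.length === 2) { /* apply pinchScale(...) as a zoom */ }
//   });

// Fingers move from 100px apart to 200px apart - a 2x zoom.
const start = [{ x: 0, y: 0 }, { x: 100, y: 0 }];
const now = [{ x: 0, y: 0 }, { x: 200, y: 0 }];
console.log(pinchScale(start, now)); // 2
```

Everything else in a pinch-zoom UI (anchoring the zoom on the midpoint between the fingers, clamping the scale, animating) builds on this one ratio.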

What do you think? Ask yourself again at the end of a day of RSI-inducing activity.

1 comment:

DE said...

See what Nintendo are up to.

Have a brief look at http://research.microsoft.com/hci2020/

Talk to designers and architects; they are a step closer to virtual space.

Tim Stevens
