Intimate interfaces for wearable computing

Posted August 30, 2005

The last wave of papers from the now-defunct Media Lab Europe has started to hit the journals, and one of them caught my eye. This paper (PDF) by Costanza et al. describes trials of a basic covert input system based on muscle-contraction sensing.

The Achilles heel of wearable computing has always been the physical interface: how to get output to the user, and how to get input from them. The actual computing part of it is much easier, since kick-ass silicon is abundant these days, but at some point you need to present information to the user and get information back from them. Standard keyboards, mice, and monitors are pretty much out of the question for a mobile situation, and a voice interface is hobbled by social needs -- if you're taking notes in a meeting, you can't be talking to your wearable to do it. The classic answer to this is a heads-up display and a small chording keyboard/pointing device like the Twiddler.

To some extent, this is a moot issue, since in the long term society will adapt to common activities. If you saw someone walking down the sidewalk talking to the air 10 years ago, you would cross the street to avoid an obvious schizophrenic. Now, you just assume that they have a hands-free cellphone: no biggie. Interfaces for wearables will be no different, and we'll evolve social cues to figure out, for instance, when someone is paying attention to you or when they're reading the latest baseball scores on their heads-up display.

In the meantime, though, the development and acceptance of wearables depends a lot on being able to fit into current social conventions, which is why so much effort is being spent on making heads-up displays as unobtrusive as possible. This research attacks the other end of the problem, trying to make the user input just as unobtrusive: in this case, you strap an EMG (electromyography) sensor to your arm, and you can give your wearable simple inputs by tensing your muscles in certain ways. That's pretty much undetectable while you're standing around on a crowded subway, though the informational content of such input is pretty low compared to a full keyboard. Still, one could imagine situations where it would be plenty: just a "next slide" command is more than enough to meaningfully control your heads-up display while you're giving a talk.
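The paper's actual signal-processing pipeline isn't detailed here, but the basic idea of turning muscle tension into a discrete command can be sketched simply: window the EMG signal, compute each window's RMS amplitude, and treat a run of windows above some threshold as one "contraction" event. The function names, window size, and threshold below are all my own illustrative assumptions, not Costanza et al.'s method.

```python
import math

def rms(window):
    """Root-mean-square amplitude of a window of EMG samples."""
    return math.sqrt(sum(x * x for x in window) / len(window))

def detect_contractions(samples, window_size=50, threshold=0.5):
    """Return the start index of each contraction event.

    A contraction is a run of consecutive windows whose RMS exceeds
    the threshold; the run is collapsed into a single event.
    (Hypothetical sketch -- real EMG needs filtering, calibration, etc.)
    """
    events = []
    in_event = False
    for i in range(0, len(samples) - window_size + 1, window_size):
        active = rms(samples[i:i + window_size]) > threshold
        if active and not in_event:
            events.append(i)
        in_event = active
    return events

# Synthetic signal: quiet baseline, a burst of tension, quiet again.
signal = [0.05] * 200 + [0.9] * 100 + [0.05] * 200
print(detect_contractions(signal))  # -> [200]: one event at sample 200
```

With only one bit of information per gesture, this is exactly the "next slide" class of command described above; distinguishing several gestures would mean comparing durations or patterns of such events rather than a single threshold crossing.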

Needless to say, plenty of work needs to be done before this is a practical system for an off-the-shelf wearable, but I'm mainly happy about the general idea of "intimate interfaces": being able to usefully interact with a computer while nearby folks are oblivious to your activities. Working on these sorts of issues, even with imperfect solutions, will hasten the day when there is such a thing as an off-the-shelf wearable.