The Next Big Thing in Tech: Screenless Interfaces

26 Aug, 2013

As the mobile computing revolution evolves into an era of internet-enabled wearable devices, interaction designers and developers are about to run headlong into a sizable design challenge: We’re running out of room.

Technology is quickly moving from handheld touch screens (which presented their own unique interaction design challenges as user interfaces transitioned from PCs to mobile phones) to even smaller, personally integrated form factors: watches, glasses, and clothing. Think about the first time you used your thumbs to type on a smartphone keyboard; now consider how that operation might work on a watch face. Hint: It won’t.

Smart Watch - Teeny Tiny Keyboard

As screens shrink—or even disappear altogether—what’s an interaction designer to do? The answer may look like science fiction, but the advancement and convergence of two technologies are paving the way for the future of user interface design.

Natural Language Interaction

Companies like Nuance, Apple, and Google are bringing voice-based interactions to the mainstream. Beyond simple dictation, natural language processing (NLP) lets users interact with a computer effectively and intuitively through ordinary conversation.
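To make the idea concrete, here is a minimal sketch of the step that sits between dictation and conversation: mapping a transcribed phrase onto a structured command the device can act on. The intents, patterns, and function names are hypothetical, for illustration only; real NLP systems use far more sophisticated statistical models.

```python
import re

# Hypothetical intent patterns: each maps a phrasing to a named command
# and extracts the "slots" (parameters) the device needs to act.
INTENT_PATTERNS = {
    "set_timer": re.compile(r"set (?:a )?timer for (?P<minutes>\d+) minutes?"),
    "weather":   re.compile(r"what(?:'s| is) the weather in (?P<city>[\w\s]+)"),
}

def parse_utterance(text):
    """Return (intent, slots) for a recognized phrase, or (None, {})."""
    normalized = text.lower().strip().rstrip("?.!")
    for intent, pattern in INTENT_PATTERNS.items():
        match = pattern.search(normalized)
        if match:
            return intent, match.groupdict()
    return None, {}

print(parse_utterance("Set a timer for 10 minutes"))
# → ('set_timer', {'minutes': '10'})
```

The difference between this toy and real dictation is the difference between matching fixed phrasings and understanding intent from any phrasing, which is exactly the gap companies in this space are working to close.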

While the experience can be frustrating, the technology is improving rapidly and will pave the way for a less screen-dependent experience. For many, as voice interaction gets smarter, it will become a more efficient and natural means of accessing information.

Social mores about speaking to a machine will adapt alongside the technology, but of course, there will still be times when speaking to a computer is inconvenient—even obnoxious (e.g. in an environment with significant background noise, in a public setting with prying ears, in quiet public spaces like libraries, etc.). This leads to the second innovation that will transform the user interface.

R2-D2 holographic projection

Spatial Interactive Displays

When you run out of screen real estate, what’s a designer to do? Project a screen into thin air.

A spatial interactive display is an interface that is projected onto a surface, presented as an augmented reality layer, or projected into space via a holographic generator. Cameras and other sensors detect the user’s gestural interactions, allowing them to manipulate an onscreen or spatial interface.

Much of this technology exists today. Keyboards can be projected onto a flat surface, augmented reality displays can transpose a layer of data over our field of vision, and devices like the Xbox Kinect and Leap Motion use cameras to see physical gestures in space and translate those movements into onscreen interactions. The University of Washington has also developed a camera-less solution that detects motion via disruption of in-air Wi-Fi signals.
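The sensor side of this is the easy part; the interesting step is turning a stream of tracked positions into a recognized gesture. Below is a minimal sketch of that step: classifying a hand's tracked path as a swipe. The coordinate convention, thresholds, and function name are illustrative assumptions, not the API of any particular device.

```python
def classify_swipe(points, min_distance=50):
    """Classify a tracked hand path as a swipe direction.

    `points` is a sequence of (x, y) positions, as might be reported by a
    depth camera, in pixels with y increasing downward (an assumption here).
    Returns 'left', 'right', 'up', or 'down', or None if the movement is
    too small to count as a deliberate gesture.
    """
    if len(points) < 2:
        return None
    dx = points[-1][0] - points[0][0]  # net horizontal movement
    dy = points[-1][1] - points[0][1]  # net vertical movement
    if max(abs(dx), abs(dy)) < min_distance:
        return None  # hand barely moved: ignore as sensor jitter
    if abs(dx) >= abs(dy):
        return "right" if dx > 0 else "left"
    return "down" if dy > 0 else "up"

print(classify_swipe([(0, 0), (40, 5), (120, 10)]))  # → right
```

Production gesture systems layer smoothing, timing, and statistical models on top of raw positions, but the pipeline is the same shape: sensor stream in, discrete interface events out.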

Recently, Elon Musk, CEO of Tesla Motors and SpaceX, announced via Twitter that his team of engineers has “figured out how to design rocket parts just w hand movements through the air.”

Moving further into the realm of science fiction, Ray Kurzweil (futurist, inventor, and Google’s Director of Engineering working on artificial intelligence and machine learning) hypothesizes that nanobots could be capable of manipulating a person’s senses and field of vision to present a completely immersive interface visible only to that person. He believes this virtual reality experience could replace most travel.

As gesture detection and interface projection technologies converge and improve, the need for computers with attached physical screens will eventually diminish.

Harmonic Convergence: Piecing It Together

The most powerful and intuitive screenless interfaces will emerge when computers are able to seamlessly combine natural language processing with a spatial and gestural UI.

The Iron Man films (referenced by Musk in his tweets about the holograph generator) do an excellent job of illustrating how natural language interaction with a virtual assistant (J.A.R.V.I.S.), combined with 3D projection, gestural control, and an augmented reality heads-up display (HUD), can create an immersive and natural interactive environment.

I can’t promise we’ll all be flying around in weaponized, armored suits in the next 10 years, but the chances of us interacting with our computers via voice and gesture control without a physical screen are very good.
