He considers two-dimensional interaction just the beginning. "You can stack layers [of ZeroTouch] together to get depth sensing," he said. The system could then sense objects in a 3D space, as well as allow users to hover over objects. Hovering typically isn't available with touch systems because a finger would occlude whatever it's hovering over, he said.
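For illustration only, here is a minimal Python sketch of how stacked sensing layers could separate hovering from touching; the per-layer report and function name are assumptions, not the actual ZeroTouch interface.

def classify_finger(broken_layers: set[int], touch_layer: int = 0) -> str:
    """Classify a finger by which stacked layers it currently interrupts.

    broken_layers -- indices of the layers the finger breaks (0 = closest to the display).
    """
    if not broken_layers:
        return "none"    # finger is outside the sensing volume
    if touch_layer in broken_layers:
        return "touch"   # finger has reached the layer nearest the display
    return "hover"       # finger is in an upper layer, so highlight without activating

# A finger breaking layers 2 and 1 but not layer 0 reads as a hover:
print(classify_finger({2, 1}))     # -> "hover"
print(classify_finger({2, 1, 0}))  # -> "touch"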
If ZeroTouch becomes the new technology to create 3D objects, the Snowglobe project could provide a way to view and interact with them.
Snowglobe is a large acrylic ball with an image projected onto its inner surface through a hole in the bottom. Two Microsoft Kinect sensors are pointed at users, and as they approach and move around the ball, the object inside follows them. When users stretch out their hands, their gestures control the orientation and size of the object inside the globe. The image is cast by a 3D projector, so wearing 3D glasses adds another dimension to the experience.
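The mapping behind that behavior can be sketched in a few lines of Python: rotate the model toward the tracked viewer and scale it with the distance between the hands. The coordinate conventions and the neutral hand span below are assumptions for illustration, not details from the project.

import math

def viewer_yaw(head_x: float, head_z: float, globe_x: float, globe_z: float) -> float:
    """Angle, in degrees, of the viewer around the globe's vertical axis."""
    return math.degrees(math.atan2(head_x - globe_x, head_z - globe_z))

def hand_scale(left_hand, right_hand, neutral_span=0.6):
    """Scale factor driven by how far apart the two hands are, in metres."""
    return math.dist(left_hand, right_hand) / neutral_span

# A viewer standing 90 degrees around the ball sees the model yawed 90 degrees,
# so the correct side always faces them; spreading the hands enlarges the model.
yaw = viewer_yaw(head_x=1.0, head_z=0.0, globe_x=0.0, globe_z=0.0)   # 90.0
scale = hand_scale((-0.4, 1.2, 0.5), (0.4, 1.2, 0.5))                # ~1.33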
John Bolton, with the Human Media Lab at Queen's University, came up with the idea, which was on show at CHI 2011, and had been working on it for two years. "If we nest an object inside, we can present all 360 degrees of that object if somebody walks around the display," he explained. "So opposed to just sitting there with a mouse, you can walk around and you're presented with the correct view as your position changes." As is true for many of the projects at CHI, there are no immediate plans for commercialization.
Japanese mobile phone operator NTT DoCoMo has shown a project that lets users control a music player by moving their eyes, first at the CEATEC 2009 conference near Tokyo and the following year at Mobile World Congress in Barcelona. The prototype includes earbuds that measure the changes in electrical state as a user's eyes move, and those impulses can then be translated into actions such as skipping to the next track or turning up the volume. Although it was a crowd-pleaser, NTT DoCoMo says it has no plans to commercialize the technology.
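Conceptually, the control step is a matter of mapping changes in the measured eye signal to player commands. The thresholds and command names in this short Python sketch are invented placeholders, not DoCoMo's actual design.

def eye_gesture_to_command(signal_delta_uv: float):
    """Map a change in the signal picked up by the earbuds to a player action."""
    if signal_delta_uv > 150:        # large swing in one direction: skip forward
        return "next_track"
    if signal_delta_uv < -150:       # large swing the other way: skip back
        return "previous_track"
    if 50 < signal_delta_uv <= 150:  # smaller, sustained shift: nudge the volume
        return "volume_up"
    return None                      # below threshold: treat as noise or a blink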
Germany's Hasso Plattner Institute took a different approach to gesture interaction. Led by Patrick Baudisch, the Potsdam-based group has developed what it calls imaginary interfaces, which allow users to interact with mobile devices even when the devices aren't in front of them. Imagine hearing your phone ring in your pocket, but instead of taking it out, you hold up your palm and swipe your finger across it to ignore the call.
The prototype system won't be portable anytime soon. It uses depth-sensing cameras, mounted above the user or sometimes on the user's shoulder, to track where the fingers are and what they're touching.
Baudisch credited Apple with replacing the stylus with the finger on touchscreens, but he and his team wanted to take it one step further. "Why don't we leave this [stylus] out and retrieve no devices at all for these tiny interactions such as turning off an alarm or picking up a phone call or sending to voicebox?" he suggested during CHI 2011. "People will interact directly on the palm of their hand." The system could work because users can remember where roughly 70 to 80 percent of their 20 home screen icons are located, he said.
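That spatial memory is what makes the interaction plausible, and a small Python sketch makes the idea concrete: a touch position on the palm is mapped back into a remembered grid of home screen icons. The grid size and icon names here are invented for illustration.

ICON_GRID = [
    ["phone", "mail", "maps", "music"],
    ["camera", "clock", "notes", "web"],
    # ...remaining rows of the remembered home screen layout
]

def palm_touch_to_icon(u: float, v: float, grid=ICON_GRID):
    """Map a normalized touch position on the palm (0..1 on both axes) to an icon."""
    row = min(int(v * len(grid)), len(grid) - 1)
    col = min(int(u * len(grid[0])), len(grid[0]) - 1)
    return grid[row][col]

# A swipe near the top-left of the palm selects "phone" -- for example, to
# dismiss a call without ever taking the device out of a pocket.
print(palm_touch_to_icon(0.1, 0.1))  # -> "phone"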
Just as there is an acclimation period when switching from a mobile device with a keyboard to one with only a touchscreen, Baudisch imagined that there would be a similar adjustment to using a device users can't see.