Keyboards and mice are so yesterday. The Gestures/Alternative Interfaces panel at Engadget Expand reaches for new interfaces and rethinks the ways we interact with our machines.
Admit it: You want to wave your hands and have technology obey your every gesture, just like in Johnny Mnemonic (I mean, Minority Report). You’re not the only one. The speakers of the Gestures/Alternative Interfaces panel at Engadget Expand, the two-day technology expo held on November 9-10, 2013, suggested that controlling the computers and devices in our lives will soon be as real as in the world of Minority Report (I mean, Lawnmower Man).
It’s not just hand-waving (see what I did there?) on the part of the speakers. Avinash Dabir, director of developer relations at Leap Motion; Paul Gallagher, VP of strategic market development at Pelican Imaging; and Shoneel Kolhatkar, senior director of product planning at Samsung Telecommunications America, believe that the days of the trackpad and mouse are numbered. They’re also actively working on the mouse’s extermination.
There’s a reason why gesture interfaces are a sought-after technology, besides the thrill of acting like Tom Cruise and Keanu Reeves. In one example the panelists gave, a surgeon is required to scrub her arms sterile before performing an operation. But what if she needs to reference her copy of Neurosurgery for Dummies? Today, the doctor has to rescrub, a 10-minute procedure. With motion control, the panelists pointed out, opening a book or a computer won’t break sterility.
Gallagher was quick to say that we won’t be tossing our mice and trackpads into the rubbish bin next to our Walkmans and BlackBerrys (see what I did there?). Because controlling a computer with full-range arm motions would give users “the arms of a linebacker,” he expects gesture control to be only “a supplement to the current interface.”
This is important because, as of now, the trackpad and mouse are king when it comes to fine control, even in a post-WIMP world. Dabir said that the windows we use on a daily basis “[are]n’t necessarily designed for motion and gesture control.”
Until innovators redesign entire operating systems, the solution is to work with the GUI we already know and love, but to control it with hand motions. So instead of closing a window—which most people do by clicking on a tiny icon within a large screen—there could be a way to enlarge that part of the window to make the tiny icon more easily clickable. Dabir suggested a motion that represents “pulling a window closer to you.”
Currently, there are no standards for motion control, no one gesture that means “Open My Little Pony fan fiction.” But Kolhatkar said, “These gestures really need to be simple, efficient, and more immersive.”
Here, the speakers agreed that we need a language of gestures, one that follows the natural motions of the human body, in order to be universally adopted.
As for the future of motion control, Gallagher noted an “interesting patent” from Honda. In it, Honda describes a technology that combines gesture control and haptic technology, with which you’ll one day “press” buttons in mid-air. Even though you’ll touch only air, Gallagher explained, you’ll “get tactile feedback” that will feel as real to you as a keyboard. (Plus, he added: Without an actual dashboard, Honda will “save a ton of money in wire harnessing.”)
Dabir went even further than that: He wants gesture control to have a more predictive quality. “If I point to [a window], why can’t it come closer to me and interpret my intentions?”
These concepts, even the universal use of gesture control, aren’t quite ready for action. As Ben Gilbert, senior editor of Engadget and moderator of the panel, said, “I look forward to seeing what’s coming up in the next 20 years.”
Gallagher then added, “Hopefully sooner.”