I hope Google’s Tango device (essentially a portable Kinect) arrives soon. It includes the Movidius computer-vision chip:
>“New applications include post-capture refocusing, high quality zoom, augmented reality simulation, gaze or gesture-based user interfaces”.
If I were to use gestures, I would want something that accurately tracks every motion I make.
(It looks like it even records pupil movement).
Plus, if it’s part of a future phone, you don’t have to buy an external device for gestures, like the Myo armband, Fin (a thumb ring), or Ring (a ring). (E.g., you can buy an external eye tracker for $99, but if smartphone, tablet, and laptop manufacturers modify their already built-in camera sensors in the near future, you could get eye tracking for about $5. The same logic applies to buying a Kinect, as opposed to already having a Kinect in your phone.)
This Kickstarter project, VMX Project: Computer Vision for Everyone, should have been backed.
On the website for the Ring, you draw an envelope to open your mail application. With a computer vision system, what if you could just silently mouth the words “Open Mail” after looking at a mail icon somewhere? The system would learn to recognize the minute movements of your eyes and mouth as you repeated the procedure.
But that’s just one example. With computer vision, you can make the same gestures as the other devices, but it doesn’t have to be a particular finger or arm.
The more you use it, and the more you teach it, the less rigid your gestures would have to be. You want a gesture system that not only sees everything, but also learns, and keeps getting better the more you use it.
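The learn-as-you-go idea above can be sketched very roughly: store every labeled example the user provides and match new gestures to the nearest stored one, so repeated teaching widens what the system accepts. This is a toy nearest-neighbor sketch, not how Tango or any shipping gesture system actually works; the class, feature vectors, and gesture labels here are all hypothetical.

```python
import math

class GestureLearner:
    """Toy nearest-neighbor gesture recognizer (illustrative only).

    Each gesture is a fixed-length feature vector (imagine sampled
    hand or eye positions). Every labeled example the user supplies
    is kept, so repeated teaching makes matching less rigid.
    """

    def __init__(self):
        self.examples = []  # list of (feature_vector, label) pairs

    def teach(self, features, label):
        """Store one user-labeled example of a gesture."""
        self.examples.append((list(features), label))

    def recognize(self, features):
        """Return the label of the closest stored example, or None."""
        if not self.examples:
            return None
        closest = min(self.examples,
                      key=lambda ex: math.dist(ex[0], features))
        return closest[1]

# The user performs "open_mail" a few slightly different ways;
# each repetition adds coverage, so sloppier variants still match.
learner = GestureLearner()
learner.teach([0.0, 1.0, 0.5], "open_mail")
learner.teach([0.1, 0.9, 0.6], "open_mail")   # a sloppier repetition
learner.teach([1.0, 0.0, 0.0], "scroll_down")

print(learner.recognize([0.05, 0.95, 0.55]))  # → open_mail
```

Real systems would replace the stored-example matching with a trained model, but the design point is the same: the recognizer improves simply by accumulating the user’s own examples.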