As more tech companies invest in VR headsets and wearable technology, the need for alternative methods of control becomes increasingly obvious.
While facial gestures and eye movements are valid alternatives to touch-based controls and physical keyboards, it will be some time before they are advanced enough to go mainstream.
Microsoft, however, is taking a different approach to this problem through a set of projects aimed at hand tracking, haptic feedback and gesture input.
Together, these projects aim to build an ecosystem of gesture control which, combined with voice commands and traditional physical input methods, will let users control all sorts of devices.
The initial phase of this system consists of tracking the possible configurations a user can create with their hands.
This is handled by Handpose, a research project that uses the Kinect sensor to track hand movements of all kinds in real time and convert them into virtual movements.
These digital representations of hand movements can then be used to operate virtual buttons and dials that correspond directly to the controls of different electronic devices.
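The overall pipeline here (tracked joint positions in, virtual control state out) can be sketched roughly as follows. The pinch heuristic and the dial mapping below are illustrative assumptions for the sake of the example, not Microsoft's actual API:

```python
import math

# Illustrative: fingertip positions in metres, standing in for the
# per-frame hand skeleton a sensor such as Kinect would supply.
def pinch_distance(thumb_tip, index_tip):
    """Euclidean distance between thumb and index fingertips."""
    return math.dist(thumb_tip, index_tip)

def dial_value(thumb_tip, index_tip, min_d=0.01, max_d=0.10):
    """Map pinch distance onto a 0.0-1.0 virtual dial.

    A fully closed pinch (<= min_d) reads 0.0; fingers spread to
    max_d or beyond read 1.0. Values in between scale linearly.
    """
    d = pinch_distance(thumb_tip, index_tip)
    t = (d - min_d) / (max_d - min_d)
    return max(0.0, min(1.0, t))
```

In a real system the dial value would then be forwarded to whatever device or application the virtual control is bound to.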
To ensure the user receives constant tactile feedback from these systems, the researchers are also developing physical buttons that serve as proxies for the virtual controls.
This allows the developers to use a re-targeting system that lays multiple, context-sensitive commands over the top of these physical buttons in the virtual world.
On the user's end, this means that a limited set of physical objects on a small real-world panel is enough to control a complex array of virtual knobs and sliders.
While these physical buttons don't affect the operation of the system itself, they are important for flattening the learning curve by making the virtual interfaces feel more tangible.
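One way to picture the re-targeting idea is a lookup that resolves the same physical control to a different virtual command depending on the application context. The contexts and command names below are invented for illustration:

```python
# Hypothetical mapping: the same physical button or slider is
# re-targeted to a different virtual control per application context.
RETARGET_MAP = {
    ("media_player", "button_1"): "play_pause",
    ("slideshow",    "button_1"): "next_slide",
    ("media_player", "slider_1"): "volume",
    ("slideshow",    "slider_1"): "zoom",
}

def resolve(context, physical_control):
    """Return the virtual command this physical control maps to now."""
    return RETARGET_MAP.get((context, physical_control), "unmapped")
```

The design point is that the panel's hardware never changes; only the virtual overlay rendered on top of it does.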
The third part is actually integrating these gestures with existing apps and programs, research that is being carried out under the name Project Prague.
The researchers on this project are using machine learning to teach the system to recognize different gestures which, combined with compatible apps, execute specific actions.
For example, turning an imaginary key could lock a computer, while pretending to hang up a phone might end a Skype call.
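Project Prague's internals aren't described here, but the general pattern of classifying a gesture and then dispatching an app-specific action might look something like this minimal nearest-template sketch, where the feature vectors, gesture names and actions are all made up:

```python
# Toy gesture templates: each gesture is a short feature vector,
# e.g. normalized finger-curl values. Real systems learn these
# representations from labelled data rather than hard-coding them.
TEMPLATES = {
    "key_turn": [0.9, 0.9, 0.1, 0.1],
    "hang_up":  [0.1, 0.1, 0.9, 0.9],
}

# App-specific bindings, as in the examples above.
ACTIONS = {
    "key_turn": "lock_computer",
    "hang_up":  "end_call",
}

def classify(features):
    """Return the gesture whose template is nearest (squared L2)."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(TEMPLATES, key=lambda g: dist2(TEMPLATES[g], features))

def dispatch(features):
    """Map an observed gesture straight to its bound action."""
    return ACTIONS[classify(features)]
```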
By combining all these aspects of the system, Microsoft could create a whole new ecosystem of human-machine interaction, one that truly brings technology into the three-dimensional world.