Project Glass is not enough. Google patents smart glove to make your Minority Report VR dreams come true
Google’s augmented reality glasses are all well and good, but they are only a start. The information beamed at you via the head-mounted display could be helpful, but how do you respond, and operate the darn thing?
Enter Google’s Smart Glove, described in a recently issued patent called “Seeing with your hand”.
The Google Glove is chock-full of electronics: cameras on the fingertips; a compass, gyroscopes, accelerometers and other motion detectors on the fingers; a CPU, a bunch of RAM and storage in the palm of your hand; and wireless communication chips on the back. Maybe even a small battery band around your wrist.
Once all the necessary detectors and processing electronics are crammed inside, the software fun begins.
For starters – how about your gloved finger, with the high-res camera on its tip, acting as a microscope on steroids? Want to explore that chip inside your device in more detail? Just hover your finger over it. The Google Glove will record a stream of images, stabilize them and combine them into one enlarged view, enhanced with additional information such as the chip’s measurements or a circuit diagram of one or more of its components.
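The patent doesn’t spell out how the enlarged view gets built, but “record a stream, stabilize, combine” is classic frame stacking: estimate the shift of each shaky frame against a reference, undo it, and average. A minimal sketch in NumPy, using phase correlation for the shift estimate (the function names here are illustrative, not from the patent):

```python
import numpy as np

def align_offset(ref, frame):
    # Estimate the integer (dy, dx) shift of `frame` relative to `ref`
    # via phase correlation on the normalized cross-power spectrum.
    cross = np.fft.fft2(frame) * np.conj(np.fft.fft2(ref))
    corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = ref.shape
    # Wrap large positive peak indices back to small negative shifts.
    return (dy - h if dy > h // 2 else dy,
            dx - w if dx > w // 2 else dx)

def stack_frames(frames):
    # Register every frame onto the first one, then average the stack.
    ref = frames[0]
    acc = ref.astype(float).copy()
    for frame in frames[1:]:
        dy, dx = align_offset(ref, frame)
        acc += np.roll(frame, (-dy, -dx), axis=(0, 1))
    return acc / len(frames)
```

Averaging aligned frames suppresses sensor noise, which is what lets the software “enhance” beyond what any single fingertip exposure captures.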
Use one or two more fingers with cameras on them, and you can generate enlarged and enhanced views of really small 3D objects. Replace some cameras with ultrasound or other non-visual, high-penetration transmitters and receivers, and you start seeing inside objects. Add some infrared cameras into the mix, and you have night vision.
You get the drift…
But static images are only the beginning. Add some software magic to combine and interpret the captured images with the motion sensor data, connect the Google Glove to the Glass, and Minority Report-style virtual reality emerges.
By knowing in which direction and how far your hand and fingers moved, the Google Glove can interpret any gesture you make and send that info to the computer that powers the heads-up display – which knows what is being projected onto your iris and can actively react to the gesture commands.
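The patent leaves the gesture pipeline unspecified, but a toy version of the idea – reduce a track of per-sample finger displacements from the motion sensors to a named gesture – could be as simple as this sketch (thresholds and gesture names are my assumptions, not Google’s):

```python
# Hypothetical gesture classifier: collapse a track of (dx, dy)
# finger displacement samples into one named gesture.

def classify(track, threshold=5.0):
    # Sum the per-sample displacements into one net motion vector.
    net_x = sum(dx for dx, _ in track)
    net_y = sum(dy for _, dy in track)
    # Barely any net motion? Treat it as a tap.
    if abs(net_x) < threshold and abs(net_y) < threshold:
        return "tap"
    # Otherwise pick the dominant axis and its direction.
    if abs(net_x) >= abs(net_y):
        return "swipe-right" if net_x > 0 else "swipe-left"
    return "swipe-down" if net_y > 0 else "swipe-up"
```

For example, `classify([(2, 0), (3, 0.5), (4, -0.2)])` accumulates a strong rightward net motion and comes back as `"swipe-right"`. A real system would fuse gyroscope and camera data too, but the shape of the problem is the same.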
Reading a book right now? Swipe to turn to the next page.
Can’t see what’s on that billboard in the distance? Pinch, and the Glass camera zooms in for a better view.
Oh, what’s that – a projected menu in the café you are in? Just point to the snacks and drinks you want, and press the order button.
Need to write an e-mail? Clench your fist to call up the virtual keyboard and start typing on it.
Know American Sign Language? Start signing and that e-mail may be finished even faster.
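Once gestures like these are recognized, wiring them up to Glass-side actions is just a dispatch table. A hypothetical sketch – none of these gesture or command names come from the patent:

```python
# Hypothetical mapping from recognized gestures to display commands.
COMMANDS = {
    "swipe-right": "turn_page",
    "pinch":       "zoom_in",
    "point":       "select_item",
    "fist":        "show_keyboard",
}

def dispatch(gesture):
    # Unrecognized gestures fall through to a no-op
    # instead of raising an error mid-gesture.
    return COMMANDS.get(gesture, "no_op")
```

The interesting part is that the table could be context-sensitive: the same swipe means “next page” in a book and “next photo” in a gallery, because the display side knows what is currently being projected.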
Whew… Got a bit carried away by all the possibilities that Google Glove may open up.
The bad news: “Seeing with your hand” is just a patent for now, so this kind of Glove and Glass integration is still years away.
But the fact that the Glove is already a granted patent, and not just a patent application, is also good news. Project Glass seemed like the very distant future only a few months ago, and now we have people walking around with prototype devices, broadcasting and jumping out of planes.
I am pretty sure that Google X Labs already has a few such Gloves lying around, and Sergey Brin may be playing with one right now.
Give it a year or two, and the rest of us may at least see it on the Google I/O stage.