Pointing to the Future: Google's Smart Glove

There has been some talk about the Google 'SmartGlove' in the past week, so I read through the patent, 8,248,364 'Seeing with your hand', to get a clearer idea about it. The patent outlines a system that could be used to interact with a user interface. The device outlined is a wearable computer, most likely in the form of a glove.

“Humans naturally use their hands and fingers as a means of gathering information … In addition to gathering information, humans also naturally use their hands and fingers to convey information.”

So say the background paragraphs of the patent. I agree, and this made me rethink my previous thoughts about a BCI (Brain-Computer Interface) coupled to a headset. We are physical beings, and using our hands is natural. I mean, sure, doing hand gestures in public would look a bit strange, but no stranger than talking on a Bluetooth headset did a few years ago.

The patent contains a lot of 'may do's and 'may have, but not limited to's. I've attempted to decipher this into plain language and give my insight into what the device will do. Bear in mind, I am not a lawyer; I'm an interactive developer, and while I have a background in engineering, that was a few years ago and not in anything this high-tech.

This sounds like a pretty serious piece of gear. The camera appears to be the device's primary interface with the world. It may be able to 'see' parts of the non-visible light spectrum; the patent specifies infra-red, ultra-violet and x-ray. This will be handy in the dark, or on the surface of the sun.

The device will have image stabilisation; the patent covers both mechanical and processor-based stabilisation. Useful, since the camera looks like it will be located on your fingertip. Have you ever tried holding your fingers dead still? It's pretty much impossible.
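The patent doesn't say how the processor-based stabilisation would work, but a common approach is to estimate the frame-to-frame shift and translate it back out. Here's a minimal sketch using OpenCV's phase correlation; treat it as one plausible approach, not what Google will actually ship.

```python
import cv2
import numpy as np

def stabilise(prev_gray, curr_gray):
    """Shift curr_gray to cancel the tremor detected relative to prev_gray."""
    # Phase correlation gives a sub-pixel estimate of the translation
    # between two frames (both must be single-channel float images).
    (dx, dy), _response = cv2.phaseCorrelate(
        np.float32(prev_gray), np.float32(curr_gray))
    h, w = curr_gray.shape
    # Apply the opposite translation to steady the frame.
    m = np.float32([[1, 0, -dx], [0, 1, -dy]])
    return cv2.warpAffine(curr_gray, m, (w, h))
```

Running each incoming frame through this against its predecessor would steady a fingertip feed enough for the gesture detection discussed below.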

It looks like it will have a significant zoom, able to “'see' an environment that includes small details that may not be visible to a user”. With the ability to measure depth of field as well as zoom, it could provide information such as the distance to an object you are pointing at, or how tall it is.
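To give a feel for the distance idea: with a simple pinhole-camera model, an object of known real size that spans a known number of pixels sits at a predictable range. The focal length would come from calibration; the names and numbers below are mine, not the patent's.

```python
def distance_to_object(real_height_m, pixel_height, focal_length_px):
    """Pinhole estimate: range = real size x focal length / size on sensor."""
    return real_height_m * focal_length_px / pixel_height

# A 1.8 m tall person spanning 120 px with a 900 px focal length:
print(distance_to_object(1.8, 120, 900))  # -> 13.5 (metres)
```

Run the same equation the other way and a known distance gives you the object's height.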

The device will have a “compass or other directional sensor”; I'm guessing it will have GPS too.

The device will be able to output to a “wearable display that is wearable by a user and a remote display that is remote from the wearable device.” This could be a display on the glove, but common sense says this is aimed at a headset, perhaps something like Google's Project Glass. It also looks like you will be able to share to your friends' devices or to other remote devices, such as a TV.

No one wants to play alone, so it needs to be connected, and it turns out the device will be capable of “communicating with one or more servers, hosts, or remote devices”. So we're looking at some sort of internet connection: Wi-Fi, 3G or 4G/LTE, and I would say NFC.

This all sounds like a pretty cool gadget, but we haven't touched on what I think is the most exciting part yet: interaction. The device will detect motion by comparing image streams, and it will be able to detect multiple gestures (“predetermined motions”) by comparing motion correlations. I'm guessing Google have purchased this technology. Comparing images to detect motion doesn't seem like the most obvious solution; I would have thought of capacitive sensing (think Theremin), but I guess image comparison won't be as prone to interference.
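Here's a rough sketch of what 'comparing image streams' and 'comparing motion correlations' might boil down to: reduce each frame pair to a dominant motion vector with optical flow, then score the resulting trajectory against stored gesture templates by correlation. The template format and threshold are my assumptions; the patent only speaks of “predetermined motions”.

```python
import cv2
import numpy as np

def frame_motion(prev_gray, curr_gray):
    """Mean optical-flow vector between two consecutive frames."""
    flow = cv2.calcOpticalFlowFarneback(
        prev_gray, curr_gray, None, 0.5, 3, 15, 3, 5, 1.2, 0)
    return flow.reshape(-1, 2).mean(axis=0)          # (dx, dy)

def match_gesture(trajectory, templates, threshold=0.8):
    """Return the name of the best-correlated template, or None."""
    t = np.asarray(trajectory, dtype=float).ravel()
    t = (t - t.mean()) / (t.std() + 1e-9)
    best_name, best_score = None, threshold
    for name, template in templates.items():
        g = np.asarray(template, dtype=float).ravel()
        g = (g - g.mean()) / (g.std() + 1e-9)
        n = min(len(t), len(g))
        score = float(np.dot(t[:n], g[:n]) / n)      # normalised correlation
        if score > best_score:
            best_name, best_score = name, score
    return best_name
```

Accumulate frame_motion() results over, say, half a second, feed the trajectory to match_gesture(), and you have a crude version of the pipeline the patent describes.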

Now I'm getting excited: wearable display, augmented reality, motion detection, gesture recognition and connectivity. It turns out I didn't have to use my imagination, because the patent outlines some of the intended uses of “detection of particular predetermined motions”. These include (a toy sketch of wiring them up follows the list):

• “entering one or more characters in an application displayed by the display”

• “zooming in or out on the display or on an application running on the display”

• “page scrolling and panning, such as scrolling up or down or panning left or right on the display or on an application running on the display”

• “moving an indicator on the display and selecting an object displayed by the display”
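Those four uses map naturally onto a gesture-to-action dispatch table. The gesture names and the Display stub below are invented for illustration; the patent lists the actions but says nothing about an API.

```python
from dataclasses import dataclass, field

@dataclass
class Gesture:
    name: str                   # e.g. "pinch", "swipe", "point_tap"
    data: dict = field(default_factory=dict)

class Display:
    """Hypothetical stand-in for the wearable or remote display."""
    def type_char(self, c):   print(f"typed {c!r}")
    def zoom(self, factor):   print(f"zoomed x{factor}")
    def scroll(self, dx, dy): print(f"scrolled ({dx}, {dy})")
    def select(self, x, y):   print(f"selected at ({x}, {y})")

ACTIONS = {
    "trace_letter": lambda ui, g: ui.type_char(g.data["char"]),           # entering characters
    "pinch":        lambda ui, g: ui.zoom(g.data["scale"]),               # zooming in or out
    "swipe":        lambda ui, g: ui.scroll(g.data["dx"], g.data["dy"]),  # scrolling and panning
    "point_tap":    lambda ui, g: ui.select(g.data["x"], g.data["y"]),    # moving and selecting
}

def dispatch(ui, gesture):
    handler = ACTIONS.get(gesture.name)
    if handler:
        handler(ui, gesture)

dispatch(Display(), Gesture("pinch", {"scale": 2.0}))  # -> zoomed x2.0
```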

We are talking about data entry, perhaps for an email, SMS, URL or phone number. The other main point is navigation: being able to zoom, pan, scroll and move a cursor. These gestures will be a combination of manufacturer-defined and user-defined, and learnt through usage. That's right, the device will learn as you use it.
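Continuing the correlation sketch from earlier, 'learning as you use it' could be as little as recording a motion trajectory and blending it into a stored template. That's a deliberate simplification on my part; the patent gives no training details.

```python
import numpy as np

def learn_gesture(templates, name, recorded, weight=0.25):
    """Store a user-defined motion, or nudge an existing template toward it."""
    recorded = np.asarray(recorded, dtype=float)
    if name in templates and len(templates[name]) == len(recorded):
        # Blend the latest demonstration into the stored template.
        templates[name] = (1 - weight) * np.asarray(templates[name]) + weight * recorded
    else:
        templates[name] = recorded
```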

So what do we have? Sure, the camera can be used for camera-type activities: recording images and video, or serving as a microscope. Sure, you can play Spy vs Spy, looking around corners, zooming in on subjects from afar and receiving directions from base.

I think this is selling it short. I see the main function of the camera as providing input for motion and gesture detection. The device will take the 'screen' out of touch screen, and the 'touch' too. We will be able to interact with an interface through hand gestures alone. Now imagine that. Instead of having to carry your smartphone everywhere, you wear a pair of glasses or sunglasses which augment your real world with online information, from restaurant reviews, weather, geotags and navigation to reading your messages, keeping in touch with friends and surfing the web, all with some hand gestures and movements of your fingers.

How it will be packaged is anyone's guess at the moment. We may see tech like this http://mashable.com/2012/08/11/smart-surgeon-gloves/ used, along with some hardcore miniaturisation; I don't want a box in my pocket to power this. And if this is going to take off, it looks like Google have some work to do on Project Glass' aesthetics. Maybe they will license it to Oakley or Ray-Ban.

I'm already thinking about solutions to the design challenges, are you?