Google has officially unveiled its new project, dubbed Project Glass. The project is meant to show how wearable computing could shape our daily lives in ways we had never considered before. It was originally rumored to take the form of a pair of glasses that would plug into your phone and let you see and interact with your world. Below is a demonstration video of Google's Project Glass in action.

Watching the video, you can see that the project is still conceptual: it essentially layers existing Google services over your visual field throughout the day. Normally, many if not all of these services would require you to take out your phone and interact with its touchscreen to accomplish tasks like checking the weather or video chatting.

In addition, given Google's immense trove of location and visual mapping data, these 'goggles' could tap into Google Goggles to identify where you are and what is around you. We did notice, though, that the video leaves some things unexplained: it never shows how the user navigates tasks where multiple menu options are available. When the man in the video checks in at the food truck, for instance, he doesn't use any voice commands at all.

A model wearing Project Glass. Google is going for a very sleek, futuristic look with Project Glass. Credit: Google

We suspect Google glossed over certain intricacies of the user interface purely for the sake of showing a seamless experience, but we believe Project Glass will actually require quite a bit more voice input than many users are likely willing to accept. Of course, the video never actually shows how the wearer controls Project Glass, so it could rely on facial gestures we simply aren't aware of. If you watch the entire video, you'll notice that no hand gestures are used at all, which we found a bit disappointing, since some interactions would certainly be more effective with gesticulation, especially pointing to or selecting items in an augmented-reality interface like, say, Yelp.