Wearable technology, specifically head-mounted displays, has been part of the science fiction canon for a very long time. Fans of the '80s anime series Dragon Ball were accustomed to seeing characters with their own version of a "Google Glass" interface. This preexisting association can be both a positive and a negative when it comes to the potential mass adoption of Google Glass.
The explosion of fitness-related wearable technology, with the Fitbit, the Nike FuelBand and the recently launched Jawbone fitness band, has led to a rise in mass appeal for wearable technology. The pattern so far: if a wearable provides relevance and utility as a natural extension of our daily lives, we are willing to put our time and dollars toward supporting it. For Google Glass, the goal is to further integrate the real world into the Google ecosystem, creating a natural extension of your daily life, even if you look a bit like Geordi La Forge from Star Trek: The Next Generation.
Google Glass has been the subject of a lot of hype over the past few months, from influencers wearing the device at SXSW to recent sightings throughout NYC. What started as a project from Google X Labs is now on the verge of becoming a mainstream device. Whether it will move from uber-nerd curiosity to mainstream essential remains to be seen, but it will fundamentally impact the intersection of physical and digital moving forward.
Here I am testing Google Glass
Google Glass: What are the specs, and what should I expect from the UX?
The recent release of the Google Glass tech specs outlines Google's commitment to bringing the product to market and attempting to redefine how we interact with the physical world. The specs include:
- a high-resolution display equivalent to a 25-inch HD TV viewed from 8 feet away
- 5 MP camera and 720p video
- bone conduction audio transducer
- Wi-Fi and Bluetooth enabled
- 12 GB of usable memory synced with Google cloud storage (16 GB total)
- battery life of one full day of typical use
Outside of the tech specs, I was really interested in diving into the Google Mirror developer APIs. This is where you can begin to see how Google plans to allow the developer ecosystem to support the product and its experiences moving forward.
One of the core elements of the user experience is the concept of Timeline cards. These cards display the top-level content that users see. There are essentially two levels of navigation: a primary timeline and sub-timelines (bundles) for easy organization. Timeline cards support text, rich HTML, image, or video content. From a brand perspective, understanding the relationship between relevant content and how information is presented and consumed via Timeline cards will be a key area of focus as launch approaches.
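In Mirror API terms, pushing a card is a REST call that inserts a small JSON body into the user's timeline. A rough sketch of assembling those bodies (endpoint and field names are from the Mirror API developer preview; the card content and bundle id below are placeholder examples):

```python
import json

def build_timeline_card(text=None, html=None, bundle_id=None):
    """Assemble a JSON body for POST /mirror/v1/timeline.

    A card carries plain text and/or rich HTML; cards that share a
    bundleId collapse into a sub-timeline (the second navigation level).
    """
    card = {}
    if text is not None:
        card["text"] = text
    if html is not None:
        card["html"] = html
    if bundle_id is not None:
        card["bundleId"] = bundle_id
    return card

# A cover card plus a detail card grouped into one bundle
cards = [
    build_timeline_card(text="Daily specials", bundle_id="specials-0501"),
    build_timeline_card(html="<article><h1>Latte: $2</h1></article>",
                        bundle_id="specials-0501"),
]

# Each body would then be POSTed to
# https://www.googleapis.com/mirror/v1/timeline
# with an OAuth 2.0 bearer token for the user.
print(json.dumps(cards[0], sort_keys=True))
```

The actual HTTP delivery and OAuth handshake are left out here; the point is that a brand's "content" on Glass is ultimately just these small, structured card payloads.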
Similar to how Facebook uses "action objects" to drive content acceleration and discoverability through the social graph, the Google Mirror API allows action-based interactivity to be added to the app experience. For now, commands such as "read aloud," "reply by voice," and "navigate to" are built into the navigation, but this could extend to "discover" or other action verbs. More importantly, it will be interesting to track how user actions are reported back, or ultimately mapped to contextual or location-based search. It is easy to see how actions could be turned into opportunities to share, both within the Google ecosystem and possibly as overlays on the physical world via augmented reality tagging or proximity-based recommendations.
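In the API these verbs surface as menu items attached to a card: built-in actions such as READ_ALOUD, REPLY, and NAVIGATE, plus CUSTOM entries for app-specific verbs. A hedged sketch (the "discover" verb and its label are hypothetical examples, not part of the API):

```python
def card_with_actions(text, custom_verbs=()):
    """Attach built-in and custom menu actions to a timeline card body.

    custom_verbs is a sequence of (id, display_label) pairs; the id is
    reported back to your service when the user picks that menu item.
    """
    menu = [
        {"action": "READ_ALOUD"},  # Glass reads the card's text aloud
        {"action": "REPLY"},       # reply by voice
        {"action": "NAVIGATE"},    # requires a location on the item
    ]
    for verb_id, label in custom_verbs:
        menu.append({
            "action": "CUSTOM",
            "id": verb_id,
            "values": [{"displayName": label}],
        })
    return {"text": text, "menuItems": menu}

card = card_with_actions("Nearby deals",
                         custom_verbs=[("discover", "Discover")])
```

Because each CUSTOM item carries its own id, every verb a user speaks or taps becomes a discrete, trackable event, which is exactly the hook that makes the action-to-search mapping above plausible.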
Subscriptions appear to be a key element of the Google Glass experience, both from an engagement and a tracking standpoint. Subscriptions notify your service when users choose specific menu items or share an item with a contact. Once notified, your service can respond accordingly, for example by processing a shared photo. This will allow branded experiences to see what is truly engaging to the end user.
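Mechanically, a service registers an HTTPS callback URL for a collection, and the Mirror API then POSTs a notification to it whenever a matching user action occurs. A minimal sketch, assuming the subscription body and notification shape described in the developer preview (the callback URL and user token are placeholders):

```python
def build_subscription(callback_url, collection="timeline", user_token=None):
    """Body for POST /mirror/v1/subscriptions; callbackUrl must be HTTPS."""
    sub = {"collection": collection, "callbackUrl": callback_url}
    if user_token is not None:
        sub["userToken"] = user_token  # echoed back to identify the user
    return sub

def extract_user_actions(notification):
    """Pull the action types (SHARE, custom verbs, etc.) out of a pushed
    notification so they can be logged for engagement tracking."""
    return [action.get("type")
            for action in notification.get("userActions", [])]

sub = build_subscription("https://example.com/notify", user_token="user-42")

# A notification pushed to the callback when a user shares a photo:
actions = extract_user_actions({
    "collection": "timeline",
    "itemId": "photo-abc123",
    "userActions": [{"type": "SHARE"}],
})
```

Logging those action types per item is the simplest version of the engagement measurement described above.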
Location is going to be a key element of Google Glass. If the user opts in and grants access, it is possible to use the Google Mirror API to observe the user's location in timeline items, request their last known location directly, and subscribe to periodic location updates. You can also deliver pre-rendered map images in timeline cards by giving the Mirror API the coordinates to draw. In short, location is the key attribute connecting the user to their environment, which can then be overlaid with map data or even augmented reality interfaces.
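A sketch of the map piece: the Mirror API preview documents a special glass://map image URL that Glass renders directly on a card, while the last known position comes from the locations endpoint. The coordinates below are placeholders:

```python
def map_card(lat, lng, width=240, height=360):
    """Timeline card whose HTML embeds a pre-rendered map; Glass draws
    the image itself from the glass://map URL, with marker 0 at (lat, lng)."""
    src = "glass://map?w=%d&h=%d&marker=0;%.6f,%.6f" % (width, height, lat, lng)
    return {"html": '<img src="%s" width="100%%" height="100%%">' % src}

# The user's last known position would come from
# GET https://www.googleapis.com/mirror/v1/locations/latest
# and periodic updates from a subscription to the "locations" collection.
card = map_card(40.748400, -73.985700)
```

Pairing that latest-location lookup with a map card is the basic loop behind any proximity-based recommendation on Glass.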
The biggest brand opportunities will be tied to mapping users' locations with digital overlays to drive real-world actions. This is already coming to life through Ingress, Google's augmented reality massively multiplayer online game for Android. Ingress seems to be designed with Google Glass in mind. I will go into deeper detail on the impact of Ingress and the potential for brands in a future post, but brands such as Zipcar and Jamba Juice are already testing the impact this type of engagement can provide.
What is the potential for Google Glass over the next 3-5 years?
Interconnectivity – The intersection of technology and utility is going to be a key area of focus over the next five years. Interconnection between smart-grid technology in our homes and mobile devices such as Google Glass will continue to gain traction. When it comes to wearable technology, the overlay of digital onto our everyday lives via products such as Google Glass is just the tip of the iceberg of a new landscape of physical and social interaction. Changing our view of the real world with digital overlays will continue to develop into a new form of communication and interaction.
Contextual Data – The digital trend is a movement from mass social interaction toward contextual networks, and the same concept will carry through to wearable technology. Look at the rise of the Nike FuelBand and the gamification and shareability of personal information. This trend will continue to expand beyond fitness into other facets of our lives. Data tied to fitness, work habits, leisure, and more will begin to create distinct data sets that can be visualized, gamified, and used to help us lead more efficient, effective lives.
This also maps to Google's larger strategy tied to contextual and personalized search. One of the bigger trends in search is the move toward personalized, socially enabled results, where what you see differs by individual and the social weighting of content is a key driver of ranking. This coincides with Google Glass, as the intersection of location, search, and social is evident in how the UX is being defined.
Content anywhere – How we consume content has changed significantly over the last decade. Content ubiquity will accelerate with Google Glass and similar products that provide HD displays and voice-activated controls, allowing access to streaming content on demand. This is just the beginning as paper-thin displays and wearable technology continue to evolve. What was once thought to be science fiction is quickly becoming reality.
Follow Tom Edwards @BlackFin360