Google I/O 2017 Full Recap

This week I had the opportunity to attend the Google I/O conference in Mountain View, California. It was an incredibly compelling event, as Google shifted its focus as a company from mobile first to AI first. This means that all of its products will be redefined and enhanced through various forms of AI.

This shift includes the Google Assistant, which was the star of the show. The deck goes into detail, but it is incredibly important that we begin thinking about the role the Google Assistant plays across home, smartphone, wearables, auto and, soon, AR. With the iPhone launch announced at the conference, the Assistant starts with 200 million voice-enabled devices out of the gate.

What is also key to consider is the Google Assistant equivalent of an Alexa Skill, which Google calls an Action. Actions support transactions outside of the Amazon ecosystem and do not require installation. Only a small number of Actions exist today, but the ecosystem of Google Assistant-enabled devices is huge and growing rapidly.
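To make the "no installation" point concrete, here is a minimal sketch of what fulfillment for a hypothetical Action could look like. It is illustrative only: the "order.coffee" intent, the /fulfillment route, and the port are assumptions rather than anything from the deck, and it assumes the 2017-era API.AI (Dialogflow v1) webhook JSON shape, where the response carries `speech` and `displayText` fields.

```typescript
// Hypothetical Action fulfillment webhook (sketch, not an official sample).
// Assumes an API.AI / Dialogflow v1 style request and response payload.
import express from "express";

const app = express();
app.use(express.json());

app.post("/fulfillment", (req, res) => {
  // API.AI sends the matched action name and parameters in result.action / result.parameters.
  const action: string = req.body?.result?.action ?? "unknown";

  if (action === "order.coffee") {
    const size: string = req.body?.result?.parameters?.size ?? "medium";
    // The Assistant speaks `speech` aloud and shows `displayText` on screens.
    res.json({
      speech: `Okay, one ${size} coffee is on its way.`,
      displayText: `Order placed: 1 ${size} coffee.`,
    });
  } else {
    res.json({
      speech: "Sorry, I can't help with that yet.",
      displayText: "Unsupported request.",
    });
  }
});

app.listen(8080, () => console.log("Action fulfillment listening on :8080"));
```

The point of the sketch is that the conversation is fulfilled server-side, so a user can invoke an Action from any Google Assistant surface, whether home, smartphone, or auto, without installing anything on the device.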

Here is the full trend recap and analysis:

Section one covers trends tied to connection & cognition:

  • Vision of Ubiquitous Computing
  • Multi-Modal Computing
  • Google Assistant (Actions, Auto, Computer Vision, Wear)
  • Android O
  • Progressive Web Apps
  • Structured Data & Search

Section two covers all facets of immersive computing:

  • Immersive Computing
  • Daydream (Virtual Reality)
  • Social VR
  • WebVR
  • Visual Positioning Service (VPS)
  • Tango (Augmented Reality) 
  • WebAR

In addition to the attached recap, there is also a 4-minute “light recap” video:

For third-party commentary, I discussed the role of Google Lens & Computer Vision with AdExchanger here.

Follow Tom Edwards @BlackFin360