The first jaw-dropping reveal at Google I/O 2017 this morning was Google Lens, which utilises Google's continually growing understanding of what's in photos, labelled by Sundar Pichai as vision-based computing capabilities.
The initial implementation will arrive in Google Photos and Assistant. Very simply put, if you run into something and want to know what it is, Google Assistant will tell you what you're looking at through some pretty amazing technology. For example, if you point Google Lens at the network name and password printed on a Wi-Fi router, those details will be captured, allowing your phone to connect to the Wi-Fi network automatically.
Other examples given were pointing the Assistant at a flower to identify it, or pointing your phone at a restaurant across the street. In the latter case, Lens uses Google Street View imagery to recognise the location and show you its Google Maps details, including reviews.
The other function that got the audience's attention is the ability to remove obstructions from photos, essentially turning an OK photo into a good photo like this…
There's certainly more to come as the functionality rolls out to users' devices.
How do you think this leap in Google's photo recognition will affect you?