The first jaw-dropping announcement at Google I/O 2017 this morning was Google Lens, which utilises Google's continually growing understanding of what's in photos, labelled by Sundar Pichai as vision-based computing capabilities.

The initial implementation will be introduced in Google Photos and Assistant. Very simply put, if you run into something and want to know what it is, Google Assistant will tell you what you're looking at through some pretty amazing technology. For example, if you're looking at a Wi-Fi router, point the Assistant at it and the router's details will be captured, allowing your phone to connect to the Wi-Fi network automatically simply by pointing Google Lens at the password.

Other examples given were pointing the Assistant at a flower to identify it, or pointing your phone at a restaurant across the street. In the latter case, Lens uses Google Street View imagery to recognise the location and show you the Google Maps details for that place, including reviews.

The other function that got the audience's attention is the ability to remove obstructions from photos, essentially turning an OK photo into a good photo like this…

There’s certainly more to come as the functionality becomes available to users on their devices.

How do you think this capability improvement in Google’s photo recognition will affect you?

2 Comments
LoneWolf Tri

What I’m seeing here is the amalgamation of features from other apps into a single, very functional app. What took them so long! Seems Samsung and Google are both working towards similar goals, but along individual paths. Can only be a good thing in the long run imho.

Russell Fletcher

They are really fleshing out the feature set of Google Assistant that has been lurking around in the camera app and Google Goggles. While the functionality has been there, it feels like a unification spurred by Samsung’s Bixby. The removal of obstructions, content filling etc. is impressive… the computational work they are doing in the camera/app is impressive, like the photo stacking of underexposed images in the Pixel for low-noise images in low light. What would have taken a lot of work in Photoshop/post production, Google are building right into a “single” capture from… Read more »