Saturday, December 16, 2017

Google Lens: A new way to use your phone camera

The first jaw-dropping announcement at Google I/O 2017 this morning was Google Lens, which utilises Google's continually growing understanding of what's in photos — described by Sundar Pichai as vision-based computing capabilities.

The initial implementation will be introduced in Google Photos and Assistant. Put very simply, if you run into something and want to know what it is, Google Assistant will tell you what you're looking at through some pretty amazing technology. For example, point Google Lens at the network name and password printed on a Wi-Fi router's label, and those details will be captured, allowing your phone to connect to the Wi-Fi network automatically.
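Google hasn't published how Lens does this, but the demo boils down to running text recognition (OCR) on the router's label and extracting the network name and password from the recognised text. As a rough illustration of just that extraction step, here's a toy sketch — the label format, function name, and field patterns are assumptions for the example, not Lens internals:

```python
import re

def parse_router_label(ocr_text):
    """Pull a Wi-Fi network name and password out of OCR'd label text.

    Assumes the label uses common "SSID:" / "Password:" style lines;
    real router labels vary widely, so this is illustrative only.
    """
    ssid = re.search(r"(?:SSID|Network)\s*[:=]\s*(\S+)", ocr_text, re.IGNORECASE)
    password = re.search(r"(?:Password|Key)\s*[:=]\s*(\S+)", ocr_text, re.IGNORECASE)
    if not (ssid and password):
        return None  # label didn't match the expected format
    return {"ssid": ssid.group(1), "password": password.group(1)}

# Example: text roughly as a camera OCR pass might return it
label = "Model: AC1200\nSSID: HomeNet-5G\nWPA2 Key: s3cretPass"
print(parse_router_label(label))
# → {'ssid': 'HomeNet-5G', 'password': 's3cretPass'}
```

The hard part — recognising that the object is a router and reading its label reliably — is the machine-learning work Lens is doing on-device and in the cloud; the snippet above only shows how trivially the final step follows once the text is recognised.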

Other examples given were pointing the Assistant at a flower to identify it, or pointing your phone at a restaurant across the street. The latter uses Google Street View imagery to recognise the location and show you the Google Maps details for it, including reviews.

The other function that got the attention of the audience is the ability to remove obstructions from photos, essentially turning an OK photo into a good one.

There’s certainly more to come as the functionality becomes available to users on their devices.

How do you think this capability improvement in Google’s photo recognition will affect you?

Phil Tann   Journalist

Phil is an Android enthusiast who spends most of his time reading up on U.S. Android news so he can get the low down on what could possibly hit Australian shores. Coming from a background in IT & T sales, he’s in the perfect position to give an educated view on hardware and software.


2 Comments on "Google Lens: A new way to use your phone camera"


What I'm seeing here is the amalgamation of features from other apps into a single, very functional app. What took them so long! Seems Samsung and Google are both working towards similar goals, but along individual paths. Can only be a good thing in the long run, imho.


They are really fleshing out the feature set of Google Assistant that has been lurking around in the camera app and Google Goggles.
While the functionality has been there, it feels like a unification prompted by Samsung's Bixby.
The removal of obstructions, content filling, etc. is impressive… the computational work they are doing in the camera/app is impressive, like the photo stacking of underexposed images in the Pixel for low-noise images in low light.
What would have taken a lot of work in Photoshop/post-production, Google are building right into a "single" capture from the app.
