Google Lens will give the company greater insight into our daily lives than ever before.

It uses machine learning to identify real-world objects through your phone’s camera, but that’s just the start. It can also analyse everything it sees, understand the context, work out where you are, and figure out what you want to do.

As Google demonstrated, Lens can use optical character recognition to read the network name and password printed on a Wi-Fi router, and instantly connect your phone to that network.

It can also bring up restaurant reviews and details, using GPS location data to instantly work out which branch you’re considering going to. All you need to do is point your camera in the right direction.

Google says Lens is coming to both Photos and Assistant.

Photos, meanwhile, will use machine learning to analyse your pictures more thoroughly than ever. As well as editing them and recognising the people in them, it will prompt you to send the right photos to the right people, and encourage your contacts to share their pictures of you with you.

Assistant has just arrived on Apple’s App Store, and Photos is already available on iOS.

The company is quietly transforming your camera into a search engine.

Google doesn’t have the best reputation when it comes to user privacy, and the thought of the company not only seeing everything you see but understanding it too won’t sit comfortably with everyone.

Google's vision of the future looks incredible, but the fear is that all of that convenience will come at a huge price.