Google Lens Is A Peek Into The Future Of Computing

Squint and you can see how Google plans to bypass the Search box, and eventually the screen entirely.

Just minutes into Google I/O–the company’s biggest event of the year–CEO Sundar Pichai announced what could be the future of Google as you know it: Google Lens.

Google Lens is an AI-powered interface that’s coming to Google Photos and Google Assistant. It’s “a set of vision-based computing abilities that can understand what you’re looking at and help you take action based upon that information,” as Pichai put it during his keynote today.

What does that mean in practice? Using computer image recognition, which, Pichai reminded us, now outperforms humans, Google Lens can recognize what's in your camera's view and actually do something meaningful with that information, rather than simply tagging your friends, as Facebook's image recognition does.


Pichai showed off three examples. In one, a camera aimed at a flower identified that flower, in what appeared to be a Google reverse image search performed in real time. In the second, a camera aimed at a Wi-Fi router's SKU, a long string of numbers and a barcode that would take time to type, automatically snagged the login information and then connected to the internet. And in the third, a camera panned across a street full of shops and cafes pulled up reviews of each restaurant, placing an interface element directly over each facade in your field of view.

Later in the keynote, another presenter came on stage to show how, with a tap inside Google Assistant, Google Lens could translate a Japanese menu and pull up an image of the dish, in what looks like a riff on the technology Google acquired with Word Lens.

Alone, each of these ideas is a bit of a novelty. But built into one platform, whether that's a smartphone or, in the future, an AR headset, you can see how Google imagines breaking free of the Search box to be even more integrated with our lives, living intimately in front of our retinas.

"We are beginning to understand video and images," says Pichai, casually. "All of Google was built because we started to understand webpages. So the fact that we can understand images and videos has profound impact on our core vision."

Source: 17 May 2017
