Google Lens will launch within Assistant on all Pixel phones in the coming weeks

Image: Google

Google is bringing its artificial intelligence-powered Lens tool to all Pixel and Pixel 2 phones in the coming weeks as part of an update to Google Assistant, the company announced today in a blog post. Lens, first unveiled in May at the company’s I/O developer conference, is a computer vision system that lets you point your Pixel or Pixel 2 camera at an object and get information about it in real time.

Lens was first made available within Google Photos last month as part of the Pixel 2 launch, and now Google says Lens will arrive as a built-in feature of Google Assistant “in the coming weeks,” starting in the US, UK, Australia, Canada, India, and Singapore.

Right now, Lens won’t be able to identify everything around you. Google says it’s best used on simple items to start. It can identify text, for when you want to save information from business cards, save a URL from a poster or flier, call a phone number written down on paper, or open Google Maps with directions to a written address. Lens can also identify notable landmarks and can pull up informational websites and media for art, books, and movies when you point the camera at film posters, book covers, and museum installations.

Lens also works as a more efficient barcode and QR code scanner. Down the line, Google says Lens will only improve as it learns more about our surroundings and becomes more adept at identifying people, objects, and all manner of other things in the real world.


It’s been neat in Google Photos so far. Yesterday my wife texted me a picture of a business card someone had dropped off at the house, and Lens identified the phone numbers on it and offered to call them for me. It was pretty simple.

I’m curious as to where the whole AI industry is going. Will these services, features, and products exist to solve a legitimate problem, or are companies just inventing "problems" to substantiate their products, features, and services?

I don’t use Assistant, Now on Tap/Screen Search, Feed/Google Now, Lens, etc., nor can I imagine actively seeking out a self-driving car when the time comes.

Perhaps I’m just old fashioned, but the tangible ways in which these services would benefit my life aren’t readily apparent. Maybe it’s because I live in Japan and the more cutting-edge features aren’t available here?

Still, the end result of the AI movement is to… what, exactly? Make people’s lives so convenient that they can devote extra hours to intellectual thought, discussion, analysis, and introspection? That would be awesome, but it seems more likely the time saved will be used to further an existing addiction to social media.

I think AI is going to be a very broad concept, one you can’t pinpoint to certain necessities or unnecessities. AI is going to be the user interface of the future. Think about, for example, what Apple does with the iPhone X’s front camera for portrait selfies and what photo sharing apps do with filters. At some point your phone will know, through machine learning, what style of photo you like when you take a selfie and will automatically post-process it the way a professional photographer would. And when you say "make it a little warmer," it will do so. The possibilities are endless. At some point, machines will know what we want as we speak to them, and they will understand even the finest nuances of meaning.

Yes but the issue – at least for me – is that your example still seems too small in scale. Make no mistake, it’s a great example, just as LamentRedHector’s is. But I’m asking about the bigger picture. Are we going to have real androids, like those in sci-fi movies? Will we have embedded tech that can produce altered or virtual reality without any wires or physical devices?

The funny thing is, reading your comment made me think of how AI will put more and more people out of work. What need will there be for professional photo editing if an AI can do it instantly and presumably at no cost? Would professional photography even matter anymore if some random nobody can take a crap picture and have it magically – automatically – edited to be of award-winning perfection?

Taking the idea one step further, why would you even need a traditional camera at all? If the AI were advanced enough, it could create a fully realized image based solely on existing information. If it already knew what you looked like, and knew you were at location X staring at scene Y, it could artificially create a perfect image.

Heck, why even be there at all? You could literally say "take a selfie of me standing in front of Monument Valley" and have a believable, genuine image all while sitting in your chair at home.

your example still seems too small in scale.

Isn’t being small in scale the bare definition of an example?

reading your comment made me think of how AI will put more and more people out of work.

That’s not just my comment; it’s the long-term promise (risk?) of AI: productivity growth through technical progress, meaning more output from a given amount of labor, or the same output from less (human) labor, across national economies. The question will be how governments deal with that. In theory we can all profit from it, either through higher welfare at the same amount of labor or through achieving the same welfare with less labor.

Are you arguing against AI because you don’t want to get things done faster and more easily? This is the most ridiculous comment I’ve read in a while.

I think you’re worried a bit too much.

It will just make life easier. How we use the benefits from it is the same as with a car, a plane, a hammer, atomic energy…

Just a tool.

Some people will do great things with it, and some people will tweet pictures of their lunch with it. Again.

I can’t wait to use it for determining "hot dog"/"not a hot dog".

But is a hotdog a sandwich?

No it’s a fucking hot dog

I agree, but there was a huge debate about this on one of the Falcons blogs. It became such a big topic that one of the writers on the site actually asked a player on the team.

It’s either a hot dog or not hot dog.


At the moment though, you have to actually take a picture, and then it searches for what’s in it.

Previously, at Google I/O, you could hold your camera out and it showed you what an object was before you actually snapped a picture.

They would need to integrate it into the camera, which they haven’t done yet.

Has Google said anything about Lens coming to non-Pixel devices?

It might have something to do with their image processing chip.

Since it was live (in beta) at the Pixel 2 launch while the chip was not active, I doubt that’s a requirement. I think the chip might enhance it, but we’ll have to see. I’m on the November security update on my 2 XL, so no 8.1 beta to find out.

Hmm, not specifically but it says it’s coming to Google Assistant.

"Built-in feature of Google Assistant": I take that to mean any device with Google Assistant will have Lens included.
