Google now lets you search for things you can’t describe — by starting with a picture

Starting with a US beta

You like the way that dress looks but you’d rather have it in green. You want those shoes but prefer flats to heels. What if you could have drapes with the same pattern as your favorite notebook? I don’t know how to Google for these things, but Google Search product manager Belinda Zeng showed me real-world examples of each earlier this week, and the answer was always the same: take a picture, then type a single word into Google Lens.

Today, Google is launching a US-only beta of the Google Lens multisearch feature it teased last September at its Search On event, and while I’ve only seen a rough demo so far, you shouldn’t have to wait long to try it for yourself: it’s rolling out in the Google app on iOS and Android.

Take a screenshot or picture of a dress, then tap, type “green,” and search for a similar one in a different color.
GIF: Google

While it’s mostly aimed at shopping to start — it was one of the most common requests — Google’s Zeng and the company’s search director Lou Wang suggest it could do a lot more than that. “You could imagine you have something broken in front of you, don’t have the words to describe it, but you want to fix it... you can just type ‘how to fix,’” says Wang.

In fact, it might already work with some broken bicycles, Zeng adds. She says she also learned about styling nails by screenshotting pictures of beautiful nails on Instagram, then typing the keyword “tutorial” to get the kind of video results that weren’t automatically coming up on social media. You may also be able to take a picture of, say, a rosemary plant, and get instructions on how to care for it.

Google’s Belinda Zeng showed me a live demo where she found drapes to match a leafy notebook.
GIF by Sean Hollister / The Verge

“We want to help people understand questions naturally,” says Wang, explaining how multisearch will expand to more videos, images in general, and even the kinds of answers you might find in a traditional Google text search.

It sounds like the intent is to put everyone on even footing, too: rather than partnering with specific shops or even limiting video results to Google-owned YouTube, Wang says it’ll surface results from “any platform we’re able to index from the open web.”

When Zeng took a picture of the wall behind her, Google came up with ties that had a similar pattern.
Screenshot by Sean Hollister / The Verge

But it won’t work with everything — like your voice assistant doesn’t work with everything — because there are infinite possible requests and Google’s still figuring out intent. Should the system pay more attention to the picture or your text search if they seem to contradict? Good question. For now, you do have one additional bit of control: if you’d rather match a pattern, like the leafy notebook, get up close to it so that Lens can’t see it’s a notebook. Because remember, Google Lens is trying to recognize your image: if it thinks you want more notebooks, you might have to tell it that you actually don’t.

Google is hoping AI models can drive a new era of search, and there are big open questions about whether context — and not just text — can take it there. This experiment seems limited enough (it doesn’t even use its latest MUM AI models) that it probably won’t give us the answer. But it does seem like a neat trick that could go fascinating places if it became a core Google Search feature.