The amazing feature Google promised and never delivered

A year ago, Sundar Pichai said we’d be able to instantly get rid of annoying objects in photos ‘very soon’

It’s easy to become enamored with the demos and spectacle of big tech keynotes. Yesterday’s stunning reveal of Google Duplex is a prime example of that. It’s such a leap forward in what we’ve come to expect from consumer-facing AI that people are struggling to believe it’s real. CEO Sundar Pichai said Google is planning to test Duplex in experimental form beginning this summer.

But sometimes the most impressive things shown onstage fail to become a product people can actually use. Pichai should be well aware of this. At last year’s I/O, he demonstrated a new camera trick that, with a single tap, would let users remove unwanted objects from their photos using Google’s powerful machine learning systems. It drew some of the same “oohs” and “aahs” as yesterday’s Duplex presentation, but for different reasons. We’ve all taken photos that would’ve been perfect if not for X, that one super annoying thing in the frame you failed to notice or couldn’t avoid when snapping the pic.

Sure, you can sink effort into making those distractions disappear with Photoshop, Pixelmator, Snapseed, or other editing apps, but Google’s solution sounded perfect for people who lack the proficiency or the time to pull that off themselves.

“If you take a picture of your daughter at a baseball game and there’s something obstructing it, we can do the hard work, remove the obstruction, and have the picture of what matters to you in front of you,” Pichai said. Memories made more perfect. Who wouldn’t want that?

But despite a promise from Google’s CEO that this incredible feature would be available “very soon,” it still hasn’t happened. Some underlying code in Google Photos has hinted that object removal remains in development, but Google made zero mention of it during the Photos portion of yesterday’s event.

Is this a case where the company’s vision-based machine learning powers finally ran into a wall? Maybe AI isn’t as good at filling in the missing pieces as Google initially expected it to be. Perhaps the results are lackluster compared to a human diligently working with the Clone Stamp tool. Consistency could be the issue; it’s easy to see how AI-based editing decisions could work great with some images and disastrously with others. This is several steps beyond exposure and color adjustments, after all. I’ve reached out to Google PR and Photos product lead David Lieb for an update on where this feature stands.

If you want a glimmer of hope: sometimes Google just takes a long time to ship this stuff. During 2017’s I/O keynote, Pichai also demonstrated a futuristic Google Lens function that lets you point your phone camera at text in the real world, copy it, and paste it into an app on your phone. Yesterday, Google showed off that feature again. It’s coming very soon.