The amazing feature Google promised and never delivered

It’s easy to become enamored with the demos and spectacle of big tech keynotes. Yesterday’s stunning reveal of Google Duplex is a prime example of that. It’s such a leap forward in what we’ve come to expect from consumer-facing AI that people are struggling to believe it’s real. CEO Sundar Pichai said Google is planning to test Duplex in experimental form beginning this summer.

But sometimes the most impressive things shown onstage fail to become a product people can actually use. Pichai should be well aware of this. At last year’s I/O, he demonstrated a new camera trick that, with a single tap, would let users remove unwanted objects from their photos using Google’s powerful machine learning systems. It drew some of the same “oohs” and “aahs” as yesterday’s Duplex presentation, but for different reasons. We’ve all taken photos that would’ve been perfect if not for X, that one super annoying thing in the frame you failed to notice or couldn’t avoid when snapping the pic.

Sure, you can sink effort into making those distractions disappear with Photoshop, Pixelmator, Snapseed, or other editing apps, but Google’s solution sounded perfect for people who lack the proficiency and/or time required to accomplish that.

“If you take a picture of your daughter at a baseball game and there’s something obstructing it, we can do the hard work, remove the obstruction, and have the picture of what matters to you in front of you,” Pichai said. Memories made more perfect. Who wouldn’t want that?

But despite a promise from Google’s CEO that this incredible feature would be available “very soon,” it still hasn’t happened. Some underlying code in Google Photos has hinted that object removal remains in development, but Google made zero mention of it during the Photos portion of yesterday’s event.

Is this a case where the company’s vision-based machine learning powers finally ran into a wall? Maybe AI isn’t as good at filling in the missing pieces as Google initially expected it to be. Perhaps the results are lackluster compared to a human diligently working with the Clone Stamp tool. Consistency could be the issue; it’s easy to see how AI-based editing decisions could work great with some images and disastrously with others. This is several steps beyond exposure and color adjustments, after all. I’ve reached out to Google PR and Photos product lead David Lieb for an update on where this feature stands.
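The underlying task here is image inpainting: reconstructing missing or masked-out pixels from their surroundings. Google hasn’t described its method, so purely as an illustration of why this is harder than it sounds, here is a minimal, naive diffusion-based fill in NumPy (not Google’s technique, and far cruder than either a learned model or Photoshop’s content-aware fill): masked pixels are repeatedly replaced by the average of their neighbors.

```python
import numpy as np

def naive_inpaint(image, mask, iterations=200):
    """Fill masked pixels by repeatedly averaging their four neighbors.

    image: 2-D float array (grayscale, values in 0..1)
    mask:  2-D bool array, True where pixels are missing
    """
    out = image.copy()
    out[mask] = 0.0  # start the hole from a blank value
    for _ in range(iterations):
        # Pad edges so border pixels also have four neighbors.
        padded = np.pad(out, 1, mode="edge")
        avg = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
               padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
        # Only overwrite the missing pixels; known pixels stay fixed.
        out[mask] = avg[mask]
    return out

# Toy example: a flat grey image with a 3x3 hole punched in the middle.
img = np.full((9, 9), 0.5)
mask = np.zeros_like(img, dtype=bool)
mask[3:6, 3:6] = True
restored = naive_inpaint(img, mask)
```

On a flat region this converges to the surrounding value, but on anything with texture or edges it just smears — which is exactly the gap between a demo image and the "near uniform results" a shipping product would need.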

If you want a glimmer of hope: sometimes Google just takes a long time to ship this stuff. During 2017’s I/O keynote, Pichai also demonstrated a futuristic Google Lens function that lets you point your phone camera at text in the real world, copy it, and paste it into an app on your phone. Yesterday, Google revisited that feature again. It’s coming very soon.


So… all of those words written only to conclude that it might take time to come to fruition?

C’mon guys.

Sure, but in keynote speak, "very soon" typically doesn’t mean "a year or more from today."

Is it just me, or is promising "very soon" and delivering well in the future a common thing? Because it seems like every day I read an article on something just released when it was announced 4 years ago.

Depends if we’re talking about video game production timelines or not. Blizzard (in)famously throws "soon" around a lot. Hence: soon™

Clearly, you’ve never played any games from Blizzard. Soon™ should be their byline.

It seems obvious that Google ran into problems and won’t release it until they can achieve near uniform results. They may also have concerns about unsavory use of the technology, say when paparazzi are spying on celebrities who may be sunbathing sans-apparel.

I just wish there was communication about it.

What we could really use are some articles about "new battery tech" that is "coming soon"

Demos aren’t products. I don’t think it is unfair to dismiss things that are prototypes until they make it to market.

I read it more as tempering the hype. Everyone is amazed by the AI doing calls for you right now. I’ve seen quite a few articles popup from various non-tech news agencies.

I didn’t see it as a dismissal so much as a "don’t be bummed if this takes some time to go live" sort of thing.

…we can do the hard work, remove the obstruction, and have the picture of what matters to you in front of you.

Or you know… don’t be a lazy photographer and don’t let something like a fence get in the way? Come on…

well.. you can’t take that photo from the diamond.. The fence is there by design. You could climb over or walk around, but then you wouldn’t have the shot anymore, would you? It’s not "lazy" to take what you can get and try to salvage from there. A less than ideal photo of a thing is easier to work with than no photo at all.

Totally true that sometimes it’s hard to get a good shot without something in the foreground or background. Having this technology would be great!
But, in the case of a chainlink fence, just go right up to it and take the shot with the lens positioned in the open space between links – I’ve often done this, and it works pretty well. Of course, that doesn’t help if you’ve already taken the shot, or just didn’t have time to move, which is why it would be great if Google can make this happen.

Not sure about where your kid plays baseball/softball, but where I live there is a single stands section/thing that is parked between third and home (closer to third) that is for one team and an identical stand for the other team between home and first (closer to first). These can be packed. And if there’s a parent constantly jumping up to go put their phone directly next to the chain link fence to take a picture without the chains in the shot, I imagine a lot of frustrated parents. And if a foul ball happens to go to the spot the phone is at while you’re making a video of your kid at bat, well, maybe you’ll make enough off the youtube video to buy a new one.

Or, you know, we don’t always have a degree in photography. Even when you can finally get the perfect angle for a picture of your daughter at a choir concert, after negotiating space and trying not to be in the way of other parents trying to do the same, you might find that you have a fantastic picture of her, but that she’s framed on the sides by a microphone stand and cord.

I would guess that it’s more an issue of UI – currently, Photos doesn’t give the user any way to invoke any of these AI-driven features… they surface automatically in the Assistant tab. Sometimes I want a specific thing done, but Assistant never picks it up and there is no way for me to tell it to. For something like object removal, it’s not just about whether Google can remove it, but also what Google should remove.

They demoed a new interaction where a one-tap edit button appears when viewing an individual photo. This kind of thing would be a first step to allowing these richer AI-driven edits. What they need after that is a way for me to select the exact area or object I want the AI to manipulate.

You can do some of the Assistant auto-creations manually. The one I use most often is GIF, which just takes selecting multiple photos and tapping the + icon.

To be fair, that’s not really an AI-driven feature.

Just like AirPlay 2, Messages on iCloud, AirPower, etc.

They show up in the betas, and then get removed in the final releases. iOS 11.4 is our next shot. Hopefully, AirPlay 2 and Messages in iCloud stay in.

You can already copy text from Google Lens, at least it works in Android P.

Well, any Android update is always a year later… People have Samsung phones…

Instead we get Clips.

Fair enough.
This object removal demo probably worked fairly well on examples, but in production it has too many failure cases and not enough useful ones. Usually you avoid obstacles when taking a picture.

I actually would think that this is more related to various software patents other firms have on this kind of thing. Adobe’s content aware fill as well as other firms’ tech for this may have the technology locked up so that Google can’t provide it without heavy licensing fees.

There are a lot of reasons why Google may not be able to provide this kind of functionality inside of Google Photos, they’re not necessarily technical in nature but it may be very difficult due to patents.

I suspect that Google is perfectly capable of doing this in Google Photos, but they can’t really afford to license the needed patents to offer it in a free product. Sort of like why YouTube encodes its videos in VP9 because the licensing fees for H.264 were just too high.

Ah, and remember Pixel Buds? The oohs and aahs turned out to be a whimper after a year. Sad, because it would have been a great concept had it worked flawlessly.
