Google and MIT’s new machine learning algorithms retouch your photos before you take them

It’s getting harder and harder to squeeze more performance out of your phone’s camera hardware. That’s why companies like Google are turning to computational photography: using algorithms and machine learning to improve your snaps. The latest research from the search giant, conducted with scientists from MIT, takes this work to a new level, producing algorithms that are capable of retouching your photos like a professional photographer in real time, before you take them.

The researchers used machine learning to create their software, training neural networks on a dataset of 5,000 images created by Adobe and MIT. Each image in this collection has been retouched by five different photographers, and Google and MIT’s algorithms used this data to learn what sort of improvements to make to different photos. This might mean increasing the brightness here, reducing the saturation there, and so on.
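To make the setup concrete, here is a minimal, hypothetical sketch of that kind of supervised training: a model sees an original photo and is pushed to reproduce the photographer's retouched version. The network, dataset stand-ins, and training details below are illustrative assumptions, not the paper's actual architecture or released code.

```python
# Toy sketch of learning to retouch from (original, expert-retouched) pairs.
# RetouchNet and the fake data are illustrative stand-ins, not the paper's model.
import torch
import torch.nn as nn

class RetouchNet(nn.Module):
    """Tiny CNN that maps an input photo to a retouched photo (toy stand-in)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1), nn.Sigmoid(),  # RGB output in [0, 1]
        )

    def forward(self, x):
        return self.net(x)

model = RetouchNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Stand-in for one pair from the 5,000-image collection: an original photo and
# a pretend "expert" edit (lifted shadows, tamed highlights).
original = torch.rand(1, 3, 64, 64)
retouched = original.clamp(0.2, 0.9)

for step in range(100):
    opt.zero_grad()
    prediction = model(original)
    loss = nn.functional.mse_loss(prediction, retouched)  # match the expert's edit
    loss.backward()
    opt.step()
```

In the real dataset the targets come from five different human retouchers per image, so the same approach can be pointed at whichever retoucher's style you want the network to imitate.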

Using machine learning to improve photos has been done before, but the real advance with this research is slimming down the algorithms so that they are small and efficient enough to run on a user’s device without any lag. The software itself is no bigger than a single digital image, and, according to a blog post from MIT, could be equipped “to process images in a range of styles.”

A composition created by MIT showing the original 12-megapixel image (left) and the retouched version produced by the new algorithm (right).

This means the neural networks could be trained on new sets of images, and could even learn to reproduce an individual photographer’s particular look, in the same way companies like Facebook and Prisma have created artistic filters that mimic famous painters. Of course, it’s worth pointing out that smartphones and cameras already process imaging data in real time, but these new techniques are more subtle and reactive, responding to the needs of individual images, rather than applying general rules.

In order to slim down the algorithms, the researchers used a few different techniques. These included turning the changes made to each photo into formulae and using grid-like coordinates to map out the pictures. All this means that the information about how to retouch the photos can be expressed mathematically, rather than as full-scale photos.
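As a rough illustration of that idea, the sketch below stores an edit as a coarse grid of per-channel gains and offsets (an affine formula per cell), then interpolates the grid up and applies it to the full-resolution photo. This is a deliberate simplification, assuming a plain spatial grid rather than the paper's full machinery, and the made-up coefficients stand in for what a network would actually predict; the point is only that the recipe is far smaller than the photo it edits.

```python
# Simplified "grid of formulae" illustration: a 16x16 grid of affine color edits
# is interpolated to full resolution and applied per pixel. The coefficients here
# are made up; in the real system a neural network predicts them from a
# low-resolution copy of the photo.
import torch
import torch.nn.functional as F

full_res = torch.rand(1, 3, 1080, 1920)            # full-resolution photo, RGB in [0, 1]

grid_gain = 1.0 + 0.2 * torch.rand(1, 3, 16, 16)   # per-cell multiplicative gain
grid_offset = 0.05 * torch.rand(1, 3, 16, 16)      # per-cell additive offset

# Interpolate the coarse grid back up to full resolution so that neighbouring
# cells blend smoothly instead of producing blocky edits.
gain = F.interpolate(grid_gain, size=full_res.shape[-2:],
                     mode="bilinear", align_corners=False)
offset = F.interpolate(grid_offset, size=full_res.shape[-2:],
                       mode="bilinear", align_corners=False)

# Apply the per-pixel affine formula: output = gain * input + offset.
retouched = (gain * full_res + offset).clamp(0.0, 1.0)
print(retouched.shape)  # torch.Size([1, 3, 1080, 1920])
```

The grid itself is a few thousand numbers, versus millions of pixels in the photo, which is why describing the retouch mathematically rather than as a second full-scale image keeps the whole thing small enough to run on a phone.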

“This technology has the potential to be very useful for real-time image enhancement on mobile platforms,” Google researcher Jon Barron told MIT. “Using machine learning for computational photography is an exciting prospect but is limited by the severe computational and power constraints of mobile phones. This paper may provide us with a way to sidestep these issues and produce new, compelling, real-time photographic experiences without draining your battery or giving you a laggy viewfinder experience.”

Will we be seeing these algorithms pop up in one of Google’s future Pixel phones? It’s not unlikely. The company has used its HDR+ algorithms to bring out more detail in light and shadow on its mobile devices since the Nexus 6. And speaking to The Verge last year, Google’s computational photography lead, Marc Levoy, said that we’re “only beginning to scratch the surface” with this work.

Comments

This looks like amazing research, and I’m looking forward to seeing where this goes. But I hope Google is truly looking at a wide range of photography styles. This technique should open up options, rather than automatically homogenise photographic output.

I’ve found that a lot of their previous automatic enhancement has tended towards the high-HDR, oversaturated look which I’m really not keen on – the gross Auto-Awesome "creations" being a prime example.

There are definitely some instagrammers whose editing style I’d like to steal though, even if I can’t steal their talent!

Software can also only take an image so far. If the basic image information isn’t optimal, because the hardware isn’t optimal, the result can still be meh. As far as I know, a blurry picture can’t be undone by software, and a low-res picture can’t be made high res with any fidelity out of nothing. I think there is still room for improvement in the hardware. We should stop going down the road of this toxic fascination with slimmer phones and just add better optics and true lenses to the camera. Where are the days of camera hardware revolutions like the Nokia 1020 and the Samsung Galaxy Zoom? If it’s too bulky, just go Moto Mod!? Problem solved. It would still be nice to see whether the gap between mobile and compact camera quality can be bridged. In my experience, many software-tweaked images from smartphone cameras look like a refined painting on canvas on closer inspection, whereas images from point-and-shoots and DSLRs have a more granular and natural look and feel.

I don’t see why software improvements can’t be done at the same time as hardware improvements. The people working on those aspects are completely different teams of specialists.

This article is talking about specialists from Adobe/MIT/Google. Of course it’s going to focus on the software side. I’m sure engineers from Sony/Samsung/Fuji are working on improvements to the physical components as well.

It’s funny how Google releases a statement about photography right after Vic Gundotra’s comment about Android phones not being able to capture images as good as the iPhone’s.

Oh, Google… I applaud you.

Yes, it’s actually very clever. They remind us that taking photos is now about so much more than the hardware, with things like machine learning, for example. And they’re also showing us, once again, that Google is way ahead there. At least I find that quite nice.

That’s the problem: the only phone that is right there with the iPhone is the one that they designed themselves; the other 200,000 Android phone models suck.

Where are you getting that from? The Verge puts the S8 and U11’s cameras ahead of the iPhone’s, in addition to the Pixel’s. Plenty of other tech sites/photographers agree, and would also cite other Android phone cameras – like, say, the G6’s – as matching or besting the iPhone’s. Whatever advantage Apple’s absolute control over both the camera hardware and software might give them, they’re clearly not utilising it at the moment.

Your comment – much like that blog post – has absolutely no basis in the current reality of the smartphone market.

(Sent from my iPad, lest I be accused of fanboyism or whatever else.)

As do the S8, G6, and so on. The dude’s comment was just off-base; Android photography’s in a really good place at the moment. The iPhone’s fine too, of course, but it’s far from the kind of walk-away winner he made it out to be; the top Android phones either match or best it.

I don’t think the iPhone has good hardware; what it has is good software that allows things like this article shows. Android is just a mess. I bought a Samsung Android phone the day they announced raw photos, and guess what? It never got an update that made that possible on a brand-new phone. As much as I laugh at their "shot with an iPhone" gimmick ads, they at least work on the camera constantly.

"what it has is good software that allows things like this article shows"

Considering that this article is about a piece of software that’s apparently being developed, at least in part, by Google, this seems an odd assertion to make. Like, that’s not to say that the iPhone’s camera software is bad, but still—weird thing to say.

I can’t really say anything about your anecdote, vague as it was.

As a professional photographer, the only thing I can say is HOLY SHIT MY DAYS ARE NUMBERED.
