Google is using AI to create stunning landscape photos using Street View imagery

Google’s AI photo editor tricked even professional photographers

This photo was compiled and edited by an artificial intelligence system using just Street View imagery.
Photo: Google

Google’s latest artificial intelligence experiment is taking in Street View imagery from Google Maps and transforming it into professional-grade photography through post-processing — all without a human touch. Hui Fang, a software engineer on Google’s Machine Perception team, says the project uses machine learning techniques to train a deep neural network to scan thousands of Street View images in California for shots with impressive landscape potential. The software then “mimics the workflow of a professional photographer” to turn that imagery into an aesthetically pleasing panorama.

Google is training AI systems to perform subjective tasks like photo editing

The research, posted to the pre-print server arXiv earlier this week, is a great example of how AI systems can be trained to perform tasks that aren’t binary, with a right or wrong answer, but more subjective, like those in the fields of art and photography. Doing this kind of aesthetic training with software can be labor-intensive and time-consuming, as it has traditionally required labeled data sets. That means human beings have to manually pick out which lighting effects or saturation filters, for example, result in a more aesthetically pleasing photograph.

Fang and his team used a different method. They were able to train the neural network quickly and efficiently to identify what most would consider superior photographic elements using what’s known as a generative adversarial network. This is a relatively new and promising technique in AI research that pits two neural networks against one another and uses the results to improve the overall system.

Google’s software takes a Street View panorama and crops the photo, applies lighting and coloration changes, and then chooses a filter to apply in a four-step process.
Photo: Google
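
The four-step workflow named in the caption above can be sketched in rough code. Everything in this sketch is a placeholder assumption: the crop box, the enhancement factors, and the warm-tone “filter” are fixed values standing in for decisions that Google’s system makes with trained models.

```python
# A rough, hypothetical sketch of the four-step workflow described above:
# crop -> lighting -> coloration -> filter. In Google's system each step is
# driven by a learned model; here every parameter is a placeholder.
from PIL import Image, ImageEnhance

def edit_panorama(path: str) -> Image.Image:
    pano = Image.open(path).convert("RGB")

    # 1. Crop: pick a composition out of the wide Street View panorama.
    #    (Hypothetical fixed box; the real system learns where to crop.)
    width, height = pano.size
    shot = pano.crop((width // 4, 0, 3 * width // 4, height))

    # 2. Lighting: adjust brightness and contrast.
    shot = ImageEnhance.Brightness(shot).enhance(1.1)
    shot = ImageEnhance.Contrast(shot).enhance(1.2)

    # 3. Coloration: boost saturation slightly.
    shot = ImageEnhance.Color(shot).enhance(1.15)

    # 4. Filter: apply a final stylistic look; a mild warm tone is used
    #    here as a stand-in for the learned filter choice.
    r, g, b = shot.split()
    r = r.point(lambda v: min(255, int(v * 1.05)))
    return Image.merge("RGB", (r, g, b))
```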

In other words, Google had one AI “photo editor” attempt to fix professional shots that had been randomly tampered with using an automated system that changed lighting and applied filters. Another model then tried to distinguish between the edited shot and the original professional image. The end result is software that understands generalized qualities of good and bad photographs, which allows it to then be trained to edit raw images to improve them.
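
As a loose illustration of that adversarial setup, and not Google’s actual code, the sketch below pairs a hypothetical “editor” network with a discriminator in PyTorch: professional photos are randomly degraded, the editor tries to restore them, and the discriminator is trained to tell the edited result from the untouched original. The architectures, the degradation, and the hyperparameters are all invented placeholders.

```python
# Hypothetical PyTorch sketch of the adversarial setup described above.
# An "editor" tries to restore randomly degraded professional photos;
# a discriminator tries to tell the edited result from the original.
import torch
import torch.nn as nn

editor = nn.Sequential(          # stand-in for the photo-editing network
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),
)
critic = nn.Sequential(          # stand-in for the discriminator
    nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 1),
)
bce = nn.BCEWithLogitsLoss()
opt_e = torch.optim.Adam(editor.parameters(), lr=1e-4)
opt_c = torch.optim.Adam(critic.parameters(), lr=1e-4)

def degrade(photos):
    # Random "tampering": mute saturation and shift overall brightness.
    gray = photos.mean(dim=1, keepdim=True)
    scale = torch.empty(photos.size(0), 1, 1, 1).uniform_(0.6, 1.4)
    return ((0.6 * photos + 0.4 * gray) * scale).clamp(0, 1)

def train_step(pro_photos):
    # pro_photos: batch of professional shots, shape (N, 3, H, W), in [0, 1].
    edited = editor(degrade(pro_photos))
    real = torch.ones(pro_photos.size(0), 1)
    fake = torch.zeros(pro_photos.size(0), 1)

    # Discriminator: untouched professional photos vs. machine-edited ones.
    opt_c.zero_grad()
    loss_c = bce(critic(pro_photos), real) + bce(critic(edited.detach()), fake)
    loss_c.backward()
    opt_c.step()

    # Editor: make edits the discriminator scores as "professional."
    opt_e.zero_grad()
    loss_e = bce(critic(edited), real)
    loss_e.backward()
    opt_e.step()
```

The point of the arrangement is that the discriminator’s verdict becomes the editor’s training signal, so no human has to label which edits look better.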

To test whether its AI software was actually producing professional-grade images, Fang and his team used a “Turing-test-like experiment.” They asked professional photographers to grade the photos the network produced on a quality scale, while mixing in shots taken by humans. Around two out of every five photos received a score on par with that of a semi-pro or pro, Fang says.
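
For concreteness, here is a tiny, hypothetical tally of how such a blinded evaluation could be scored; the 1-to-4 scale and the threshold are assumptions, since the article only says photographers graded a mix of machine and human photos on a quality scale.

```python
# Hypothetical tally for the blinded grading described above: photographers
# score a shuffled mix of AI-edited and human photos on an assumed 1-4 scale,
# and we report how many AI photos reach the "semi-pro or pro" band (>= 3).
def fraction_professional_grade(scores, is_ai, threshold=3.0):
    ai_scores = [s for s, ai in zip(scores, is_ai) if ai]
    return sum(s >= threshold for s in ai_scores) / len(ai_scores)

# e.g. fraction_professional_grade([3.5, 2.0, 3.0, 3.2], [True, True, False, True])
# -> 2/3 of the AI photos scored in the semi-pro/pro band.
```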

Photos: Google

“The Street View panoramas served as a testing bed for our project,” Fang says. “Someday this technique might even help you to take better photos in the real world.” The team compiled a gallery of photos its network created out of Street View images, and clicking on any one will pull up the section of Google Maps that it captures. Fang concludes with a neat thought experiment about capturing photos in the real world: “Would you make the same decision if you were there holding the camera at that moment?”