How well can machines interpret beautiful landscapes? After all, some scenery is beautiful because it tugs at human emotions, something machines lack. Other landscapes, desert dunes in particular, look like nudes to robot eyes. To help, on Monday Google introduced a neural image assessment model designed to identify the most aesthetically pleasing images.
The assessment uses a deep neural network trained on data labelled by humans. It has been trained to predict which images a typical user would rate as technically good or aesthetically attractive. According to Google, it could potentially be used to edit photos intelligently, improve visual quality, or remove perceived visual flaws from an image. Edits include recommendations for optimal levels of brightness, highlights, and shadows, similar to the AI tools Adobe showcased back in October (though Adobe's AI can stitch a whole scene together).
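To make the idea of rating concrete, here is a minimal sketch of how an assessor of this kind can turn a model's prediction into a single rating. It assumes, as such models typically do, that the network outputs a probability for each score bucket from 1 to 10, and that the final rating is the mean of that distribution; the function name and the example numbers are purely illustrative, not Google's implementation.

```python
def mean_score(distribution):
    """Mean of a predicted probability distribution over score buckets 1..10."""
    assert abs(sum(distribution) - 1.0) < 1e-6, "probabilities must sum to 1"
    # Bucket s contributes s weighted by its predicted probability.
    return sum(p * s for s, p in enumerate(distribution, start=1))

# Illustrative output for a photo the model rates at roughly 7 out of 10.
predicted = [0.0, 0.0, 0.02, 0.05, 0.08, 0.15, 0.40, 0.20, 0.07, 0.03]
score = mean_score(predicted)
```

Collapsing the distribution to its mean gives a single number that can be compared directly against average human ratings.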
The Google assessment draws upon reference photos when they are available; when they are not, it falls back on statistical models to predict image quality. The goal is a quality score that matches human perception, even if the image is distorted. Google has found that the scores granted by the assessment are similar to scores given by human raters.
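One common way to check that model scores "match up" with human raters is a rank correlation: if the model orders photos the same way people do, the correlation approaches 1. The sketch below implements a Spearman rank correlation from scratch (for lists without tied scores) on invented example ratings; none of the numbers come from Google's evaluation.

```python
def spearman(xs, ys):
    """Spearman rank correlation for two score lists without ties."""
    def ranks(vals):
        order = sorted(range(len(vals)), key=lambda i: vals[i])
        r = [0] * len(vals)
        for rank, i in enumerate(order):
            r[i] = rank
        return r

    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Invented scores: the model and humans rank these five photos identically.
model_scores = [6.9, 4.2, 8.1, 5.5, 7.3]
human_scores = [6.5, 3.9, 8.4, 5.8, 7.0]  # mean ratings from human raters
rho = spearman(model_scores, human_scores)
```

Here the two orderings agree exactly, so the correlation is 1.0; disagreement on rankings would pull it toward 0.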
One day, the company hopes, AI will be able to help users sort the best photos out of many, or provide real-time feedback on photography. But for now, these models remain in-house as proofs of concept described in a research paper posted to arXiv, Cornell's preprint server.