Google is using machine learning to reduce the data needed for high-resolution images

The company says its technique reduces data costs by up to 75 percent per image

Google Pixel and Pixel XL sample photography

Last November, Google unveiled a prototype technology called RAISR that uses machine learning to make low-resolution images appear more detailed. Now, the company has begun the process of integrating RAISR with its online services, using the technology to upscale large images on Google+ and save users’ data in the process.

So far, RAISR is only being used to tweak high-resolution Google+ images accessed on a “subset of Android devices.” When a user requests an image, Google+ retrieves a version that’s actually a quarter of the size, using RAISR’s algorithms to “restore detail on [the] device.” Doing so reduces the data cost of each image by up to 75 percent, says Google. The technique is currently being applied to more than a billion images a week, and the company says doing so has reduced “users’ total bandwidth by about a third.”
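The arithmetic behind that 75 percent figure is simple: an image with half the width and half the height has a quarter of the pixels. A minimal sketch, using hypothetical dimensions and assuming data cost scales with pixel count (real savings also depend on compression):

```python
# Quarter-size fetch: halving width and height leaves 25% of the pixels,
# so roughly 75% of the per-image data cost is saved before upscaling.
full_w, full_h = 1000, 1000              # hypothetical full-resolution image
low_w, low_h = full_w // 2, full_h // 2  # version actually fetched

full_pixels = full_w * full_h
low_pixels = low_w * low_h

savings = 1 - low_pixels / full_pixels
print(f"Pixels sent: {low_pixels} of {full_pixels} ({savings:.0%} saved)")
```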

An example of how RAISR upscales images. (Google)

The machine learning technique in question works in a similar way to most upsampling methods, inserting new pixels into low-resolution images to make up for lost detail. But while traditional upsampling uses fixed rules to decide which new pixels to insert where, RAISR adapts its methods to each image. It also pays special attention to “edge features” (parts of the image that look like the edge of an object), making the resulting larger image look less blurred.
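To make the contrast concrete, here is a toy fixed-rule upsampler (nearest-neighbor), which applies the same rule to every pixel. RAISR's departure is to replace that single rule with learned filters chosen per patch based on local edge features; this sketch is illustrative only and the function name is made up:

```python
# Fixed-rule upsampling: every pixel is handled identically, with no
# awareness of edges -- the baseline behavior RAISR improves on.
def upscale_nearest(img, factor=2):
    """Upscale a 2D grid of pixel values by repeating each pixel
    `factor` times horizontally and vertically (nearest-neighbor)."""
    out = []
    for row in img:
        wide = [p for p in row for _ in range(factor)]   # repeat across
        out.extend(list(wide) for _ in range(factor))    # repeat down
    return out

# A 2x2 image becomes a blocky 4x4 image: each pixel turns into a 2x2
# block, which is why fixed-rule upscaling looks blurred or pixelated
# at edges -- exactly the artifact RAISR's edge-aware filters target.
print(upscale_nearest([[1, 2], [3, 4]]))
```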

Google isn’t alone in using machine learning to upscale images. In June last year, Twitter bought UK AI startup Magic Pony, which uses similar techniques to improve the resolution of low-quality videos. In both cases, the problem is a natural fit for this sort of technology: there are abundant images and videos for algorithms to train on, and the result offers consumers a simple and tangible benefit in the form of lower data costs. Artificial intelligence isn’t all useless, then.