
Adobe is using machine learning to make it easier to spot Photoshopped images

New research uses AI to automate traditional digital forensics

A famous edited image of a missile launch released by the Iranian government in 2008. (This image was not used in the training or testing of Adobe’s research project.)

Experts around the world are getting increasingly worried about new AI tools that make it easier than ever to edit images and videos — especially with social media’s power to share shocking content quickly and without fact-checking. Some of those tools are being developed by Adobe, but the company is also working on an antidote of sorts by researching how machine learning can be used to automatically spot edited pictures.

The company’s latest work, showcased this month at the CVPR computer vision conference, demonstrates how digital forensics done by humans can be automated by machines in much less time. The research paper does not represent a breakthrough in the field, and it’s not yet available as a commercial product, but it’s interesting to see Adobe — a name synonymous with image editing — take an interest in this line of work.

Speaking to The Verge, a spokesperson for the company said that this was an “early-stage research project,” but in the future, the company wants to play a role in “developing technology that helps monitor and verify authenticity of digital media.” Exactly what this might mean isn’t clear, since Adobe has never before released software designed to spot fake images. But the company points to its work with law enforcement (using digital forensics to help find missing children, for example) as evidence of its responsible attitude toward its technology.

An illustration from Adobe’s new paper showing how edits in images can be spotted by a machine learning system.

The new research paper shows how machine learning can be used to identify three common types of image manipulation: splicing, where parts of two different images are combined; cloning, where objects within an image are copied and pasted; and removal, where an object is edited out altogether.
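
To make those categories concrete, here is a toy sketch of what each edit looks like in practice, using Python’s Pillow library. It has nothing to do with Adobe’s code; the filenames scene_a.png and scene_b.png are hypothetical inputs, and the coordinates are arbitrary.

```python
# Toy versions of the three edit types the paper targets, assuming two
# hypothetical files "scene_a.png" and "scene_b.png" of at least 400x300
# pixels; coordinates are arbitrary and for illustration only.
from PIL import Image

a = Image.open("scene_a.png").convert("RGB")
b = Image.open("scene_b.png").convert("RGB")

# Splicing: paste in a region taken from a different image.
spliced = a.copy()
spliced.paste(b.crop((0, 0, 100, 100)), (50, 50))

# Cloning: copy and paste an object within the same image.
cloned = a.copy()
cloned.paste(a.crop((10, 10, 110, 110)), (200, 150))

# Removal: cover an object with a patch of nearby background.
removed = a.copy()
removed.paste(a.crop((300, 10, 400, 110)), (50, 50))

for name, im in (("spliced", spliced), ("cloned", cloned), ("removed", removed)):
    im.save(f"{name}.png")
```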

To spot this sort of tampering, digital forensics experts typically look for clues in hidden layers of the image. When these sorts of edits are made, they leave behind digital artifacts, like inconsistencies in the random variations in color and brightness created by image sensors (also known as image noise). When you splice together two different images, for example, or copy and paste an object from one part of an image to another, this background noise doesn’t match, like a stain on a wall covered with a slightly different paint color.
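
That noise-matching idea can be sketched in a few lines of Python. The snippet below is a simplified illustration of the general forensic principle, not the method from Adobe’s paper: it isolates a high-pass noise residual, then flags image blocks whose noise variance sits far from the image-wide median. The filename suspect.png, the block size, and the threshold are all assumptions.

```python
# A minimal sketch of noise-inconsistency analysis, assuming a hypothetical
# input file "suspect.png"; block size and threshold are arbitrary, and this
# illustrates the general idea rather than Adobe's method.
import numpy as np
from PIL import Image, ImageFilter

img = Image.open("suspect.png").convert("L")
arr = np.asarray(img, dtype=np.float64)

# High-pass residual: subtracting a blurred copy leaves mostly sensor noise.
blurred = np.asarray(img.filter(ImageFilter.GaussianBlur(radius=2)), dtype=np.float64)
residual = arr - blurred

# Measure noise variance block by block.
BLOCK = 32
h, w = residual.shape
variances = {
    (y, x): residual[y:y + BLOCK, x:x + BLOCK].var()
    for y in range(0, h - BLOCK + 1, BLOCK)
    for x in range(0, w - BLOCK + 1, BLOCK)
}

# Blocks whose noise level is far from the image-wide median are the ones
# that may have been pasted in from elsewhere.
vals = np.fromiter(variances.values(), dtype=np.float64)
median = np.median(vals)
mad = np.median(np.abs(vals - median)) or 1.0  # avoid division by zero
for (y, x), v in variances.items():
    if abs(v - median) / mad > 5.0:
        print(f"block at ({x}, {y}): unusual noise variance {v:.1f}")
```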

As with many other machine learning systems, Adobe’s was taught using a large dataset of edited images. From this, it learned to spot the common patterns that indicate tampering. It scored higher in some tests than similar systems built by other teams, but not dramatically so. However, the research has no direct application to spotting deepfakes, a new breed of edited videos created using artificial intelligence.
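
For a sense of what training on “a large dataset of edited images” means mechanically, the sketch below shows a generic supervised setup in PyTorch: a small convolutional network that labels image patches as edited or untouched. This is a placeholder architecture of my own, not the detector from Adobe’s paper, and random tensors stand in for a real labeled dataset.

```python
# A generic supervised-learning sketch in PyTorch, not Adobe's architecture:
# a small CNN that classifies 64x64 patches as edited (1) or untouched (0).
# Random tensors stand in for a real dataset of manipulated images.
import torch
import torch.nn as nn

class TamperNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(32 * 16 * 16, 2)

    def forward(self, x):  # x: (batch, 3, 64, 64)
        return self.head(self.features(x).flatten(1))

model = TamperNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One training step on a stand-in batch of patches and 0/1 labels.
patches = torch.randn(8, 3, 64, 64)
labels = torch.randint(0, 2, (8,))
loss = loss_fn(model(patches), labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

A real system would train on genuinely manipulated images and learn cues like the noise inconsistencies described above, rather than raw pixel content alone.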

“The benefit of these new ML approaches is that they hold the potential to discover artifacts that are not obvious and not previously known,” digital forensics expert Hany Farid told The Verge. “The drawback of these approaches is that they are only as good as the training data fed into the networks, and are, for now at least, less likely to learn higher-level artifacts like inconsistencies in the geometry of shadows and reflections.”

These caveats aside, it’s good to see more research being done that can help us spot digital fakes. If those sounding the alarm are right and we’re headed toward some sort of post-truth world, we’re going to need all the tools we can get to sort fact from fiction. AI can hurt, but it can help as well.