Facebook contest reveals deepfake detection is still an ‘unsolved problem’

But the company says deepfakes are not currently ‘a big issue’

Illustration by Alex Castro / The Verge

Facebook has announced the results of its first Deepfake Detection Challenge, an open competition to find algorithms that can spot AI-manipulated videos. The results, while promising, show there’s still lots of work to be done before automated systems can reliably spot deepfake content, with researchers describing the issue as an “unsolved problem.”

Facebook says the winning algorithm in the contest was able to spot “challenging real world examples” of deepfakes with an average accuracy of 65.18 percent. That’s meaningfully better than chance (on an evenly split dataset, random guessing would score about 50 percent), but it’s not the sort of hit rate you would want from an automated moderation system.

Deepfakes have proven to be something of an exaggerated menace for social media. Although the technology prompted much handwringing about the erosion of reliable video evidence, the political effects of deepfakes have so far been minimal. Instead, the more immediate harm has been the creation of nonconsensual pornography, a category of content that’s easier for social media platforms to identify and remove.

Mike Schroepfer, Facebook’s chief technology officer, told journalists in a press call that he was pleased by the results of the challenge, which he said would create a benchmark for researchers and guide their work in the future. “Honestly the contest has been more of a success than I could have ever hoped for,” he said.

Examples of clips used in the challenge. Can you spot the deepfake?
Video by Facebook

Some 2,114 participants submitted more than 35,000 detection algorithms to the competition. The algorithms were tested on their ability to identify deepfake videos in a dataset of around 100,000 short clips. Facebook hired more than 3,000 actors to create these clips; the actors were recorded holding conversations in naturalistic environments, and some of the footage was then altered using AI, with other actors’ faces pasted onto the videos.

Researchers were given access to this data to train their algorithms, and when tested on clips from this same dataset, they produced accuracy rates as high as 82.56 percent. However, when the same algorithms were run against a “black box” dataset of footage they had never seen, performance dropped sharply, with the best-scoring model achieving an accuracy rate of 65.18 percent. The gap shows that detecting deepfakes in the wild remains a very challenging problem.
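To make that gap concrete, here is a minimal sketch of the evaluation pattern described above: train a detector on a public dataset, then compare its accuracy on held-out clips from the same distribution against a withheld, distribution-shifted “black box” set. Everything in this sketch is a stand-in of our own devising — the scikit-learn model, the synthetic 16-dimensional feature vectors, and the amount of shift — since the actual challenge pipelines operated on raw video and are not described in the article.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def make_clips(n, shift=0.0):
    """Hypothetical per-clip feature vectors with real/fake labels.

    The real challenge used ~100,000 video clips; these synthetic
    features are purely illustrative.
    """
    labels = rng.integers(0, 2, size=n)                 # 0 = real, 1 = deepfake
    feats = rng.normal(size=(n, 16)) + 0.8 * labels[:, None]
    return feats + shift, labels                        # `shift` mimics unseen footage

X_pub, y_pub = make_clips(5000)                         # public training data
X_bb, y_bb = make_clips(2000, shift=0.6)                # withheld "black box" data

X_tr, X_val, y_tr, y_val = train_test_split(
    X_pub, y_pub, test_size=0.2, random_state=0
)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

# Accuracy is high on held-out clips drawn from the training distribution,
# but drops on the shifted set -- the same pattern the challenge observed.
print("public held-out accuracy:", accuracy_score(y_val, clf.predict(X_val)))
print("black-box accuracy:      ", accuracy_score(y_bb, clf.predict(X_bb)))
```

In practice, gaps like this are usually attributed to domain shift: the withheld footage contains manipulation methods, actors, compression artifacts, and lighting conditions the model never saw during training.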

Schroepfer said Facebook is currently developing its own deepfake detection technology separate from this competition. “We have deepfake detection technology in production and we will be improving it based on this contest,” he said. The company announced it was banning deepfakes earlier this year, but critics pointed out that a far greater disinformation threat comes from so-called “shallowfakes” — videos edited using traditional means.

The winning algorithms from this challenge will be released as open-source code to help other researchers, but Facebook said it would be keeping its own detection technology secret to prevent it from being reverse-engineered.

Schroepfer added that while deepfakes were “currently not a big issue” for Facebook, the company wanted to have the tools ready to detect this content in the future — just in case. Some experts have said the upcoming 2020 election could be a prime moment for deepfakes to be used for serious political influence.

“The lesson I learned the hard way over the last couple of years is I want to be prepared in advance and not be caught flat-footed,” said Schroepfer. “I want to be really prepared for a lot of bad stuff that never happens rather than the other way around.”