US lawmakers say AI deepfakes ‘have the potential to disrupt every facet of our society’

They’re asking the intelligence community to assess the threat from AI video manipulation

Illustration by Alex Castro / The Verge

US politicians are getting increasingly worried about deepfakes — a new type of AI-assisted video editing that creates realistic results with minimal effort. Yesterday, a trio of lawmakers sent a letter to the Director of National Intelligence, Dan Coats, asking him to assess the threat posed to national security by this new form of fakery.

The letter says “hyper-realistic digital forgeries” showing “convincing depictions of individuals doing or saying things they never did” could be used for blackmail and misinformation. “As deep fake technology becomes more advanced and more accessible, it could pose a threat to United States public discourse and national security,” say the letter’s signatories, House representatives Adam Schiff (D-CA), Stephanie Murphy (D-FL), and Carlos Curbelo (R-FL).

Deepfakes have the potential for blackmail, misinformation, and more

The trio wants the intelligence community to produce a report that describes any “confirmed or suspected” deepfakes produced by foreign individuals (there are no known examples of this to date) and suggests potential countermeasures.

In a press statement, Curbelo said: “Deep fakes have the potential to disrupt every facet of our society and trigger dangerous international and domestic consequences [...] As with any threat, our Intelligence Community must be prepared to combat deep fakes, be vigilant against them, and stand ready to protect our nation and the American people.”

This isn’t the first time lawmakers have raised the issue. Earlier this year, senators Mark Warner (D-VA) and Marco Rubio (R-FL) warned that deepfakes should be treated as a national security threat. In a speech, Rubio said the technology could supercharge misinformation campaigns led by foreign powers, singling out Russia as a particular threat.

“I know for a fact that the Russian Federation at the command of Vladimir Putin tried to sow instability and chaos in American politics in 2016,” said Rubio. “They did that through Twitter bots and they did that through a couple of other measures that will increasingly come to light. But they didn’t use this. Imagine using this. Imagine injecting this in an election.”

Deepfakes first came to prominence in late 2017, when users on Reddit started using cutting-edge AI research to paste the faces of celebrities onto porn. The term itself doesn’t refer to any particular technique, but is a portmanteau that combines “deep learning” with “fakes.” The phrase was coined by a Reddit user, but it is slowly becoming synonymous with a wide range of AI editing technology. Such tools can turn people into virtual puppets, syncing their mouths with someone else’s speech or simply making them dance like a pro.

A number of organizations, including university labs, startups, and even parts of the military, are examining ways to reliably detect deepfakes. These include methods like spotting irregular blinking patterns or unrealistic skin tone.
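To make the blinking idea concrete, here is a minimal, illustrative Python sketch of how such a check might work. It assumes an upstream face tracker has already produced a per-frame “eye aspect ratio” (EAR) signal; the function names, the EAR threshold, and the “typical human” blink-rate range are placeholder values invented for this example, not settings from any of the research efforts mentioned above.

```python
# Illustrative blink-rate heuristic, not a production deepfake detector.
# Assumes a per-frame eye-aspect-ratio (EAR) signal from an upstream face tracker;
# all thresholds below are rough, hypothetical values chosen for the example.

from typing import Sequence


def count_blinks(ear_per_frame: Sequence[float], closed_threshold: float = 0.2) -> int:
    """Count blinks as transitions from eyes-open to eyes-closed frames."""
    blinks = 0
    eye_closed = False
    for ear in ear_per_frame:
        if ear < closed_threshold and not eye_closed:
            blinks += 1          # open -> closed transition counts as one blink
            eye_closed = True
        elif ear >= closed_threshold:
            eye_closed = False
    return blinks


def looks_suspicious(ear_per_frame: Sequence[float], fps: float,
                     normal_range=(8.0, 30.0)) -> bool:
    """Flag a clip whose blink rate falls outside a loose 'typical human' range."""
    duration_minutes = len(ear_per_frame) / (fps * 60.0)
    if duration_minutes == 0:
        return False
    blinks_per_minute = count_blinks(ear_per_frame) / duration_minutes
    return not (normal_range[0] <= blinks_per_minute <= normal_range[1])


if __name__ == "__main__":
    # A 10-second, 30 fps clip in which the subject never blinks.
    no_blink_clip = [0.35] * 300   # EAR stays high, eyes always open
    print(looks_suspicious(no_blink_clip, fps=30))  # True: no blinks is atypical
```

In practice, of course, a real system would combine many such signals rather than relying on any single cue, which is part of why researchers describe detection as an arms race.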

However, researchers agree that there’s no single method, and that whatever deepfake-spotting tools are created will soon be tricked by new versions of the technology. In any case, even if there were an easy way to spot deepfakes, it wouldn’t necessarily stop the technology from being used maliciously. We know that from the spread of fake news on networks like Facebook: even when a fake can be easily disproven, it can still convince those who want to believe it.

Despite these challenges, getting the government involved is encouraging news. “This is a constructive step,” Stewart Baker, a former general counsel for the National Security Agency, told The Washington Post. “It’s one thing for academics and techies to say that deepfakes are a problem, another for the intelligence community to say the same. It makes the concern something that Congress can address without fear of being second-guessed on how big the problem is.”