
Congress grapples with how to regulate deepfakes

And changes to Section 230 might be coming



Photo by Alex Wong/Getty Images

Top House Democrat Rep. Adam Schiff (D-CA) issued a warning on Thursday that deepfake videos could have a disastrous effect on the 2020 election cycle.

“Now is the time for social media companies to put in place policies to protect users from this kind of misinformation, not in 2021, after viral deepfakes have polluted the 2020 elections,” Schiff said. “By then it will be too late.”

The warning came during a House Intelligence Committee hearing focused on analyzing the national and election security risks of the technology. The committee convened a panel of experts from universities and think tanks to prepare a deepfake strategy to guide new restrictions from both the government and platforms.

At the outset of the hearing, Schiff challenged the “immunity” given to platforms under Section 230 of the Communications Decency Act, asking panelists whether Congress should make changes to the law, which currently shields social media companies from liability for the content on their platforms.

Maryland Carey School of Law professor Danielle Keats Citron responded by suggesting that any changes to Section 230 make platforms’ immunity conditional on responsible content moderation. “Federal immunity should be amended to condition the immunity on reasonable moderation practices rather than the free pass that exists today,” Citron said. “The current interpretation of Section 230 leaves platforms with no incentive to address destructive deepfake content.”

The hearing came only a few weeks after a real-life instance of a doctored political video spread widely on social media: footage edited to make House Speaker Nancy Pelosi appear drunk. Every platform responded to the video differently, with YouTube removing the content, Facebook leaving it up while directing users to coverage debunking it, and Twitter simply letting it stand.

Throughout the hearing, it became apparent that lawmakers were both mulling over possible legislation and searching for methods that the platforms could apply on their own to tackle the issue. Experts recommended everything from authenticating video at the source to requiring platforms to triage fake video takedowns, prioritizing those that start to go viral.

“Maybe, unfortunately, we have to tell people before they see something that [it’s] satire, it’s not real and you have to in some way verify it,” said Rep. Brad Wenstrup (R-OH), “which is kind of pathetic, but at the same time that may be what we have to do.”

There are a few pieces of legislation already circulating through Congress aimed at combating the threat of fabricated media. Sen. Ben Sasse (R-NE) has proposed rules that would make it unlawful for people to “maliciously” create and distribute deepfakes. On the other side of the aisle, Rep. Yvette Clarke (D-NY) introduced a bill Wednesday that would force the creators of deepfakes to disclose that they’re fabricated by including some identifier like a watermark.

Neither of those pieces of legislation has seen much traction, but today’s hearing showed that members of Congress are taking this threat seriously, doing their research and not veering off into partisan debates over platform bias.

As Rep. Val Demings (D-FL) put it, “the internet is the new weapon of choice.”