TikTok is banning deepfakes to better protect against misinformation


TikTok joins scores of other social platforms that have banned manipulated video

Illustration by Alex Castro / The Verge

TikTok is establishing new moderation policies today to better protect its platform against misinformation, election interference, and other forms of manipulative content ahead of the 2020 election. The company, which is currently embroiled in an unprecedented acquisition negotiation with Microsoft following threats of a US ban, says it now explicitly bans deepfakes. Deepfakes are artificial intelligence-powered manipulations of audio and video designed to mislead people about something someone may have done or said.

“We’re adding a policy which prohibits synthetic or manipulated content that misleads users by distorting the truth of events in a way that could cause harm,” Vanessa Pappas, the general manager of TikTok’s US operation, writes in a blog post published Wednesday. “Our intent is to protect users from things like shallow or deep fakes, so while this kind of content was broadly covered by our guidelines already, this update makes the policy clearer for our users.”

Deepfakes are most commonly associated with face-swapping videos used to create pornographic content. They do not presently appear to be in use by any major political campaign, although the White House and President Trump’s reelection campaign have created or shared less sophisticated edits and other deceptive content, often taken from right-wing meme makers, that have likewise been labeled misleading.

Still, the techniques for creating deepfakes have only grown more sophisticated and easier to use in recent years, raising concerns that the technology may at some point be used to make deceptive edits of politicians saying things, or expressing support for ideas, that could discredit them or harm their reputations. As a result, many major social media platforms, and even some states, have banned deepfakes in political advertising.

The policies used to ban deepfakes are often broad enough to cover any manipulated video intended to cause harm, and TikTok’s new policy is crafted the same way. That makes these bans less about the specific AI-powered technology itself and more about defending against the use of deceptive video of any kind to tarnish political opponents online. Less sophisticated edits, especially those favored by the Trump campaign, fall under the category Pappas mentions above: shallow fakes, which involve not full-blown AI-generated video or audio but selective, deceptive editing that can be just as harmful.

TikTok already bans political ads, but the company says this deepfake ban is intended to make it even harder to use its platform to push deceptive media for political gain. As part of the same rollout of new moderation policies, TikTok is making more explicit its ban on “coordinated inauthentic behavior”: the use of fake and bot accounts that mislead people about who is behind them in order to sway public opinion or exert influence in other ways, such as sowing political divisions over hot-button topics.

TikTok says it’s also expanding its fact-checking partnerships with PolitiFact and Lead Stories “to fact check potential misinformation related to the 2020 US election,” and it’s adding an election misinformation option to its in-app reporting mechanism to let users flag suspicious content or accounts. The app will also have a new “election information center” to point users toward reputable information on the race, voting, and other related topics. The company also highlighted its partnership with the Department of Homeland Security’s Countering Foreign Influence Task Force as another front in its fight against foreign interference.