YouTube is cracking down on AI-generated true crime deepfakes

The platform’s harassment and cyberbullying policy will prohibit content that “realistically simulates” deceased children and victims of crimes or deadly events.

Illustration by Alex Castro / The Verge

YouTube is updating its cyberbullying and harassment policies and will no longer allow content that “realistically simulates” minors and other victims of crimes narrating their deaths or the violence they experienced.

The update appears to take aim at a genre of content in true crime circles that creates disturbing AI-powered depictions of victims, including children, that then describe the violence against them. Some of the videos use AI-generated, childlike voices to describe gruesome violence that occurred in high-profile cases. Families of victims depicted in the videos have called the content "disgusting."

Under the updated policy, violating content will be removed and the channel will receive a strike, which temporarily limits what a user can do on the platform. A first strike, for example, prevents a user from uploading videos for a week, among other restrictions. Repeat violations within 90 days bring escalating penalties, up to removal of the entire channel.

Platforms including YouTube have in recent months unveiled AI-driven creation tools, and along with them new policies around synthetic content that could confuse users. TikTok, for one, now requires creators to label AI-generated content as such. And YouTube itself announced a strict policy around AI voice clones of musicians — with another set of looser rules for everyone else.