The NYT and Canadian experts say Twitter is not doing enough to curb child exploitation
Even though Elon Musk says cracking down on child abuse material is ‘priority #1,’ a report from The New York Times indicates otherwise.

Illustration by Alex Castro / The Verge

According to a report from The New York Times, child sexual abuse material (CSAM) still persists on Twitter, despite Elon Musk stating that cracking down on child exploitation content is “priority #1” for the company.

While working with the Canadian Centre for Child Protection, which helped match abusive images to its CSAM database, the Times says it uncovered content across Twitter that was previously flagged as exploitative, as well as accounts saying they could sell more.

During its search, the Times says it found imagery of 10 child abuse victims across 150 instances “across multiple accounts” on Twitter. The Canadian Centre for Child Protection reported similarly disturbing results, uncovering 260 of the “most explicit videos” in its database on Twitter, which garnered over 174,000 likes and 63,000 retweets in total.

Twitter reportedly promotes CSAM through its recommendation algorithm

In November, Musk cut 15 percent of Twitter’s trust and safety staff — the team that handles content moderation — while stating that the cuts wouldn’t affect moderation. Ella Irwin, Twitter’s new trust and safety head, has since been touting the platform’s efforts to crack down on CSAM.

According to the Times, Twitter actually promotes some of the images through its recommendation algorithm that surfaces suggested content for users. The platform reportedly only took down some of the content after the Canadian center notified the company.

Earlier this month, Twitter said it’s “proactively and severely limiting the reach” of CSAM and that the platform will work to “remove the content and suspend the bad actor(s) involved.” The company claims it suspended around 404,000 accounts that “created, distributed, or engaged with this content,” a 112 percent increase since November.

Meanwhile, Musk continues to relax Twitter’s enforcement against bad actors, with a “general amnesty” policy that restored the suspended accounts of some of the platform’s most toxic users. Twitter also recently stated that it will take “less severe actions” against accounts that break the rules, and last week it began letting all Twitter users appeal account suspensions.

“The volume [of CSAM] we’re able to find with a minimal amount of effort is quite significant,” Lloyd Richardson, the Canadian center’s technology director, tells the Times. “It shouldn’t be the job of external people to find this sort of content sitting on their system.”