X continues to suck at moderating hate speech, according to a new report
Out of 200 posts regarding the Israel-Hamas war, 98 percent still remained live on X a week after they were reported to moderators for hateful content.

[Illustration: The Verge — Twitter’s “X” logo on a purple and blue background. Caption: This follows earlier reports that verified X users are “superspreaders of misinformation” regarding the Israel-Hamas war.]

The Center for Countering Digital Hate (CCDH) released a new report on Tuesday suggesting that X (formerly Twitter) is failing to remove posts that violate its own community rules regarding misinformation, antisemitism, Islamophobia, and other hate speech. Researchers in the CCDH study reported 200 “hateful” posts about the Israel-Hamas war that breached platform rules to X moderators on October 31st and found that 98 percent of the posts were still live seven days later.

According to the CCDH, the reported posts, which largely promoted bigotry and incited violence against Muslims, Palestinians, and Jewish people, were collected from 101 separate X accounts. Just one of those accounts was suspended, and the posts that remained live had accrued a combined 24,043,693 views by the time the report was published. This follows an earlier CCDH report from September on hate speech, in response to which X claimed the organization had misrepresented how many users viewed the hateful content. X filed a lawsuit against the CCDH in July over claims the organization “unlawfully” scraped X data to create “flawed” studies about the platform.

Forty-three of the 101 X accounts in the CCDH study were verified, meaning the platform’s algorithm boosts the visibility of their posts

It’s also worth noting that 43 of the 101 X accounts in the study were verified. Users who pay the $8 monthly X Premium subscription for verification also benefit from algorithmic boosts that improve the visibility of their posts, which has led other studies to label verified X users “superspreaders of misinformation.”

In a statement to The Verge, X’s head of business operations, Joe Benarroch, said the company was made aware of the CCDH’s report yesterday and pointed to a new blog post detailing the “proactive measures” X has taken to keep the platform safe during the ongoing Israel-Hamas war. Those measures include removing 3,000 accounts tied to violent entities in the region and taking action against more than 325,000 pieces of content that violated its terms of service. The blog post does not say how long it took X to remove offending posts and accounts after they were reported.

“The majority of actions that X takes are on individual posts, for example by restricting the reach of a post,” said Benarroch. X claims that by measuring only account suspensions, the CCDH has not accurately represented its moderation efforts, and it urged the organization to “engage with X first” to ensure the safety of the X community.

After publication, Benarroch questioned the methodology of the CCDH’s study, claiming the organization counts a post as “actioned” only if the account behind it has been suspended. The CCDH confirmed to The Verge that this is not the case.

Update November 14th, 11AM ET: Article updated to include a statement from X in response to the CCDH report.

Update November 14th, 4PM ET: Article updated to include a response from the CCDH regarding X’s statement.