
Twitter will now remove tweets that dehumanize religious groups

The company hinted at the changes last year


Illustration by Alex Castro / The Verge

Twitter is announcing updates to its policies today to address hateful language directed at religious groups, a significant change in how the platform moderates hate speech. The policy goes into effect today, with moderation practices immediately updated to enforce the new rules. If the new policy is successful, Twitter said it could apply a similar standard to other protected groups in the future.

Last year, Twitter put out a call for people to help rewrite its dehumanization policies, initially proposing a general policy against dehumanizing “identifiable groups.” The company received 8,000 responses from people in over 30 countries, with much of the feedback suggesting that this category was too broad and that smaller groups needed to be defined. As a result, Twitter is testing out the policy with a ban on the dehumanization of religious groups in particular.

The new policy lays out specific examples of content targeting members of religious groups that should be removed if reported. Tweets that dehumanize people on the basis of their religious affiliation — for instance, referring to them as “rats,” “viruses,” and “filthy animals” — are now explicitly forbidden by the platform’s rules.

Examples of tweets that now violate Twitter policies.

“We create our rules to keep people safe on Twitter, and they continuously evolve to reflect the realities of the world we operate within,” Twitter’s safety team wrote in a blog post. “Our primary focus is on addressing the risks of offline harm, and research shows that dehumanizing language increases that risk.”

Twitter has long struggled to detect and police harassment at scale, resulting in significant ongoing changes to the platform’s moderation policy. Late last month, the company announced that it will notify users when tweets posted by prominent political figures violate the platform’s rules. If a world leader tweets something harmful, the company will now place a gray box over the tweet telling users that the content violates its policies. Users will then need to click the box before they can view the content.