
Twitter releases new policy to ban dehumanizing speech

Users have two weeks to weigh in on the new rule


Illustration by Alex Castro / The Verge

Twitter has released a new moderation policy explicitly banning dehumanizing speech. In a post on Tuesday morning, Twitter executives Del Harvey and Vijaya Gadde described the proposed rule as part of an ongoing effort to promote healthy conversations on Twitter and limit real-world harms stemming from discourse on the platform.

“Language that makes someone less than human can have repercussions off the service, including normalizing serious violence,” the post reads.

Once the change takes effect, a new clause will be added to the Twitter Rules: “You may not dehumanize anyone based on membership in an identifiable group, as this speech can lead to offline harm.” The policy then defines dehumanization:

Dehumanization: Language that treats others as less than human. Dehumanization can occur when others are denied of human qualities (animalistic dehumanization) or when others are denied of human nature (mechanistic dehumanization). Examples can include comparing groups to animals and viruses (animalistic), or reducing groups to their genitalia (mechanistic).

Identifiable group: Any group of people that can be distinguished by their shared characteristics such as their race, ethnicity, national origin, sexual orientation, gender, gender identity, religious affiliation, age, disability, serious disease, occupation, political beliefs, location, or social practices.

As noted in the post, the new rules overlap with more common provisions against hate speech or racism, but they apply to all identifiable groups rather than only specific protected classes. Facebook, by contrast, has drawn criticism for moderation rules that treated attacks as objectionable only when they targeted a protected class in isolation: adding a non-protected modifier such as age stripped that protection, so attacks on black children were allowed while attacks on white men were not. An explicit policy on dehumanizing speech could also address situations where social networks have fueled racial violence, as in Myanmar, Sri Lanka, and other countries. WhatsApp recently hired thousands of offshore moderators to improve enforcement, but it has not changed its policies on acceptable speech.

Twitter is hosting an open comment period on the rule before it goes into effect, a practice seemingly modeled on the US federal rule-making process. Users with opinions or concerns about the new rule are encouraged to submit feedback, including “examples of speech that contributes to a healthy conversation, but may violate this policy.” The comment period will remain open until the morning of October 9th.