
Twitter now bans dehumanizing remarks based on age, disability, and disease


Expanding policy updates around dehumanizing hate speech


Illustration by Alex Castro / The Verge

Twitter has updated its hate speech policies to cover tweets that make dehumanizing remarks, which are remarks that treat “others as less than human,” on the basis of age, disability, or disease. The changes follow updates to the company’s policies made last July that said Twitter would remove tweets that dehumanize religious groups.

Prior to that, Twitter issued a broad ban in 2018 on dehumanizing speech to complement its existing hate speech policies covering protected classes like race and gender. It has since been updating these dehumanization policies to address specific cases its original ruleset failed to cover, based on user feedback.

Now, Twitter says tweets like the ones in the image below will be removed when they are reported:

Image: Twitter

The company says that tweets violating these new policies but posted before today will be removed when reported, but won’t result in account suspensions.

Twitter first rolled out policies banning dehumanizing speech in September 2018. At the time, Twitter asked for feedback and later said it received more than 8,000 responses from more than 30 countries in just two weeks’ time. Much of the feedback centered on the policies being too broad. So Twitter has begun calling out specific types of speech against specific groups as against its rules, starting with religion and now age, disability, and disease.

In a tweet, the company indicated that more groups will eventually be protected by this policy.

Twitter also does not allow misgendering transgender people or referring to them by the name they used before they transitioned, also called “deadnaming,” under a policy put in place in late 2018. The company said in October 2019 that its automated moderation tools now flag and remove more than half of all abusive tweets before users report them.