Twitter says it will now begin enforcing the new rules it announced last month to combat abuse and hateful conduct, including threats of violence and physical harm. The new rules expand policies to abusive or threatening content in usernames and profiles, and to accounts affiliated with hate groups both on and off platform.
Twitter has struggled with violent, offensive, and hateful content, even granting verification badges to prominent white nationalists before later revoking them as hate speech and abuse proliferated on the platform. Twitter has also been criticized for the seemingly arbitrary way it enforces its rules, and has previously said it plans to do a better job of responding to users’ reports of abuse.
Under Twitter’s existing policies, specific threats of violence, death, or disease against an individual or a group of people were already considered a violation. The new rules extend enforcement to accounts that affiliate themselves with organizations that “use or promote violence against civilians to further their causes.” Twitter says it will require the removal of tweets that glorify violence or the perpetrators of a violent act, and will permanently suspend accounts that repeatedly violate this rule.
There is one notable exception: the policy changes don’t apply to military or government entities. That would seemingly give President Trump carte blanche to continue his threats against “little rocket man,” and to keep promoting violent, xenophobic videos favored by far-right extremists, even after they’ve been debunked as fake.
We’ve updated our rules around abuse and hateful conduct as well as violence and physical harm. These changes will be enforced starting December 18. Read our updated rules here: https://t.co/NGVT3qGFvg — Twitter Safety (@TwitterSafety) November 17, 2017
Twitter will also permanently suspend accounts whose profile information includes violent threats, racist or sexist tropes, content that incites fear, or language that reduces people to less-than-human terms. It’s also classifying hateful imagery, such as logos, symbols, or images used to promote hostility against others based on race, religion, disability, sexual orientation, or ethnicity, as “sensitive media.” Twitter says that if these images appear in header or profile images, it will require account owners to remove them.
“In our efforts to be more aggressive here, we may make some mistakes and are working on a robust appeals process,” Twitter said in a statement. “We’ll evaluate and iterate on these changes in the coming days and weeks, and will keep you posted on progress along the way.” Twitter is also planning to develop internal tools to identify violating accounts, supplementing user reports. Enforcing these rules will be a welcome step given how rampant offensive content can be on the platform, but the real test will be whether Twitter enforces them consistently.