Twitter is experimenting with a new moderation tool that will warn users before they post replies that contain what the company says is “harmful” language.
Twitter describes it as a limited experiment, and for now it will appear only for iOS users. In certain situations, a prompt will pop up giving "you the option to revise your reply before it's published if it uses language that could be harmful," reads a message from the official Twitter Support account.
The approach isn’t a novel one. It’s been used by quite a few other social platforms before, most prominently Instagram. The Facebook-owned app now warns users before they post a caption with a message that says the caption “looks similar to others that have been reported.” Prior to that change, Instagram rolled out a warning system for comments last summer.
It's not exactly clear how Twitter is defining harmful language, but the company does have hate speech policies and a broader Twitter Rules document that outlines its stances on everything from threats of violence and terrorism-related content to abuse and harassment. Twitter says it won't remove something simply because it is offensive: "People are allowed to post content, including potentially inflammatory content, as long as they're not violating the Twitter Rules," the company says. But those rule sets do allow it to carve out exceptions to its otherwise broad speech policies.
That said, this new experiment seems less concerned with curbing the more extreme forms of content that might normally get tweets removed or users suspended or banned. Instead, it seems designed to gently discourage the unnecessary, inflammatory language that escalates feuds and can lead to suspensions. After all, you can simply ignore Twitter's warning and post the reply anyway. But perhaps with a little nudge, Twitter thinks at least some users might reconsider.