Twitter is testing an anti-harassment feature that flags some accounts as ‘sensitive’

It doesn’t seem to work very well

Twitter’s latest attempt to make its platform less hostile is a “sensitive account” system whereby some users’ profiles are flagged as containing “potentially sensitive images or language.” The warning takes up the entire profile page, requiring users to click or tap a prompt to agree to view the profile.

The idea, it seems, is to gate some users’ accounts behind this warning, either as a way to mediate the behavior of those users or as a way to wall their accounts off from the general Twitter populace. A company spokesperson confirmed to The Verge that this is just a test “as part of our broader efforts to make Twitter safer.”

Twitter is falling into its own traps again

But as with most of Twitter’s anti-harassment measures, there’s a noticeable lack of transparency and a fair amount of obfuscation as to how accounts are deemed sensitive. Both Mashable and Gizmodo collected tweets from users who were unaware their profiles were displaying this warning until other Twitter users pointed it out. Twitter doesn’t appear to have any sort of review or appeals process in place.

It’s also unclear whether an account is deemed sensitive through user reports or some automated method. This lack of information about the process opens up the possibility that well-meaning, non-abusive Twitter users could have their accounts wrongly flagged as sensitive if enough trolls report them, or if Twitter’s own algorithms mistakenly identify some shared images or videos as inappropriate. These missteps are reminiscent of a recent debacle in which Twitter was forced to roll back a change to how users are notified about public lists they’re put on, after widespread complaints that it would make it harder, not easier, to combat harassment.

On a more fundamental level, the sensitive account system is indicative of Twitter’s broader light-touch approach to harassment. The company would much rather try to hide abusive and offensive behavior than actually police activity on its platform with bans and real-name requirements, for fear it could jeopardize its position as a protector of free speech. That’s why it keeps adding new filtering and muting options, like last week’s optional feature that keeps default “egg” accounts out of users’ mentions.

However, the company has become more proactive since announcing back in January that it would issue “long overdue” harassment fixes. Those have included temporary shadow bans for abusive accounts, which restrict users’ tweets to only the accounts that follow them, and more transparency around how Twitter handles abuse reports.