New report looks for patterns in Twitter harassment

Twitter harassment is a problem, but how do you even judge its scope, let alone fix it? That's a question the feminist activist group Women, Action, and the Media (WAM) set out to answer last year. The group got approval from Twitter to accept harassment reports and escalate them to the company, giving it a chance to analyze the complaints and put together a picture of who is reporting accounts or tweets, what they're reporting, and how Twitter might be able to help.

The report, which is currently available on WAM's site, follows a Pew survey about general hostile behavior online. That study found that, among other things, around a quarter of young men and women had been physically threatened online, and a quarter of young women had been sexually harassed. WAM focused specifically on the 811 harassment reports it received over the course of three weeks in November, looking for patterns. While the report is framed around the harassment of women, the results are far more general.

Of all its reports, WAM judged 317 genuine — reports from a bot skewed the overall numbers — and escalated 161 to Twitter. It's not a huge sample, but it gives some interesting details. About a quarter of the reports concerned "hate speech" like racist, sexist, or homophobic comments, and a slightly smaller number involved "doxxing" or releasing private details about a person. Actual threats of violence were lower down the list; they made up 12 percent of reports. Not all of this is banned by Twitter, but the platform prohibits threats, posting confidential information, and abuse, including "promoting violence" against people or groups.

[Chart: WAM Twitter harassment report breakdown]

Submitting a report, though, apparently isn't always straightforward. The study notes that first-time users may omit vital information like a tweet's web address, and may assume Twitter has far more investigatory power than it does. Others might need a way to make clear that one person is harassing them from several different accounts, creating a new one each time the old one is banned. WAM found "doxxing" particularly difficult to address, because harassers commonly post tweets with personal information briefly and then delete them, wiping away evidence of the abuse. Even if a viewer captures screenshots, Twitter asks specifically for URLs.

Suspending an account might not always be a good idea

Twitter ended up suspending, warning, or (once) deleting accounts in response to a little over half the reports. In cases of doxxing, WAM thinks that number is being lowered by the problems with preserving evidence. Evidence, in fact, is a major theme of the paper. While account suspension was one of Twitter's primary strategies, it might not always be the best one — if the tweets are actually illegal, a suspended account means users can't show messages to law enforcement.

WAM found that one of its best tools was simply communicating with people who reported harassment, to figure out the larger context behind a reported tweet or account. That's not a solution that will necessarily scale, but anything that makes the process clearer or more detailed could help. As many have noted, one of the best solutions so far isn't stopping harassers from tweeting; it's letting their victims tune them out. In its recommendations, WAM suggests opt-in filters that people could turn on to limit what they receive. Twitter has already made several changes to its reporting tools since November.

Harassment is a problem across the internet, not just on Twitter — 17 percent of the reports mentioned problems on other platforms. And while platforms like Twitter and Reddit are trying to figure out how to address it with nuance, it can be hard to pin down exactly how much to police a network, and where criticism turns into harassment. This report can't solve that, but it can give us more insight into what works.