The history of online harassment before and after Gamergate with Caroline Sinders

Interview episode of The Vergecast

Illustration by Alex Castro / The Verge

Most discussions about online harassment begin and end with Gamergate, but online harassment campaigns were well underway in the 1990s. They’ve since evolved to include life-threatening tactics such as doxxing and swatting. Caroline Sinders, a research fellow at the Digital Harvard Kennedy School and an expert in machine learning, joins Nilay Patel and Casey Newton on The Vergecast to talk about the origins of online harassment, how platforms like Twitter and Facebook can be better designed to combat it, and what we as individuals can do to mitigate its effects.

Below is a brief, edited transcript of their conversation about how better designs can help content moderators enforce the rules and protect people from online harassment:

Caroline Sinders: Some of the tools that people are given to moderate content, from what we have seen and from leaked information about content moderators, are awful. What they have to work with is really, really not great. And it’s upsetting when you think about it from a design standpoint, when you think about how the beacons of modern design in the United States are supposed to be software design from major platforms, that what content moderators have to use to moderate those platforms is really antagonistic. Outside of the content that they are forced to look at, the tools that they are given and the time frames that they’re given to analyze content are almost... I want to argue, like a human rights violation.

Casey Newton: Wow.

They have under 10 seconds to make a decision. Sometimes what they’re looking at is extremely violent content. They have to look at it all day, and they have specific quotas that they often have to meet. So how do you build context out of that? How do you build context out of something like a harassment campaign that’s happening in Steubenville?

Mm-hmm.

How do you build context into that? What people often have is a checklist that they have to memorize. It’s based on policy that isn’t great policy for defining harassment. And they have to make a split-second decision. And they have tools where not enough is shown to them to really understand what they’re looking at. So, what are the solutions here? The solutions are to redesign the way in which content moderators engage with content on the platforms. Make that experience better.