
What should we do about all these white supremacists on Twitter?



The election has given Nazis a portal to the mainstream



Today, the Anti-Defamation League released a new report on anti-Semitic harassment of journalists, detailing more than 20,000 instances over the course of a year. The report is worth reading in full, but the conclusion is fairly simple: there are too many white supremacists on Twitter. They’ve become a persistent part of the platform, and they raise real questions about Twitter’s future as a company. Harassment is a far broader problem than just racists, but over the past few months, the most visible form of abuse on Twitter has been anti-Semitic trolls colliding with reporters. Response tweets, often tagging Jack Dorsey directly, have become a mainstay of political Twitter.

Anti-Semitic trolling has been present in certain murky corners of the web for a long time, but the combination of Twitter and the 2016 election has allowed it to bubble into the mainstream. It was easy to ignore the white supremacists when they stuck to 4chan, but with a white-supremacist-adjacent presidential candidate, it’s hard to keep them confined to the sewer. Suddenly, those racists have a portal to the mainstream, and the portal doesn’t seem likely to close after the election. So what should we do about it?

No one wants to hang out somewhere they might be swarmed by genocidal frogs

White supremacists present a number of immediate problems for Twitter, beyond the obvious moral repulsion. The two numbers most crucial to Twitter’s survival are ad revenue and monthly active users, both of which have been more or less stagnant for the past year. White supremacists have a direct negative effect on both: nobody wants to advertise or even hang out in a place where they might be swarmed by genocidal frogs. As Twitter looks to be acquired, at least one potential buyer has already backed away over these issues: Disney decided the platform’s reputation for toxic harassment wouldn’t sit well with Mickey Mouse.

There are also more existential problems. Open discourse has always been a central part of Twitter’s pitch. It’s a network where people express and criticize beliefs, where insiders and outsiders can trade ideas on equal terms in a way that simply doesn’t happen on Instagram or Snapchat. But that’s what the white supremacists like about Twitter, too. There’s no obvious way to keep them out.

The harassment calls the entire Twitter project into question

Ideally, you’d want some sort of free exchange of ideas without white supremacists, but mandating that would present all the usual problems that come with banning a particular kind of speech. At the same time, if free exchange of ideas turns out to be inseparable from racist trolling, it calls the entire Twitter project into question. Technologists like to believe they’re improving the world — in Twitter’s case, building mutual understanding and tolerance through open discourse — but Twitter Nazis are doing their best to prove the techno-optimists wrong. If they succeed, there’s no point in maintaining Twitter-as-discourse at all. You might as well plow the whole thing under and focus on providing a safe space for Pepsi to engage with potential Pepsi-drinkers.

The problem is related to Twitter’s well-documented harassment issues, but at this point, even a vigorous anti-harassment system might not be enough to drive out white supremacists. In the abstract, harassment is non-ideological — a behavior rather than a set of ideas. The best anti-harassment tools (like robust blocking or vigorous enforcement of the ban on violent threats) work independently of ideology. White supremacists do love Twitter harassment, and they’ve found their way into lots of different campaigns over the years — but even if all the Nazis stop harassing people, they’ll still be Nazis. A decent subset of the white supremacist harassment on Twitter isn’t even direct threats, just the earnest expression of a violently racist ideology. Those tweets might slip past even a strong and consistent banning regime.

So far, Twitter has been reluctant to get into the Nazi-fighting business. It’s not hard to understand why. Policing hate speech is a notoriously difficult task, and having a major presidential candidate comparing Muslim refugees to poisonous snakes doesn’t make it any easier. As much flak as Twitter gets for failing to ban white supremacists, each ban stirs up an equal amount of grief, from Turkish dissidents to gaudy brand-builders. If Twitter took a more active banning role beyond what’s legally required, it would find itself in the same position as Facebook, with every deactivation a reminder of the platform’s stifling control over the open discourse it claims to promote. In Twitter’s case, that heavy hand would come without a billion-dollar ad business to support it.

Policing hate speech is a notoriously difficult task

At the end of those fights, Twitter would be faced with the difficult question of which ideologies are so unpleasant that they can’t be allowed on the platform at all — and there’s reason to fear it may not be left to the company to decide. Twitter is currently facing a lawsuit that seeks to hold the company liable for an ISIS attack in Jordan, on the theory that ISIS’s ability to maintain Twitter accounts and communicate on the service constitutes a material benefit to the group. Twitter does ban a lot of ISIS accounts, but it can’t ban all of them. There are a number of legal defenses, but the core of all of them is the claim that just holding an account on the network doesn’t constitute an endorsement by the company. If you start kicking people out for being white supremacists, that argument starts to look a lot weaker.

Ultimately, Twitter’s hands-off approach to free speech is as much about economics as philosophy. A tech company’s single overpowering motive is to scale, to sell the same product to more and more people until it swallows the world. Media companies can carve out a niche with a particular perspective, but platforms have to be all things to all people. Otherwise, they risk starving to death, or getting drawn into endless civil wars. The hands-off approach to speech lets Twitter adopt a view from nowhere, ostensibly treating all speech as equal in the eyes of the platform.

It’s an appealing logic, but it has its limits. Twitter is finding them now. The company can’t survive with a user base this toxic, and cleaning it up will require a new approach, something far trickier than the machine learning tools suggested so far. It’s urgent, politically delicate work, much harder than winning an election. By all indications, that work has yet to begin.