
Questions about policing online hate are much bigger than Facebook and YouTube

Photo: Police raid a property connected to the Christchurch mosque terror attack. Dianne Manson/Getty Images

In the wake of a hate-fueled mass shooting in Christchurch, New Zealand, major web platforms have scrambled to take down a 17-minute video of the attack. Sites like YouTube have applied imperfect technical solutions, trying to draw a line between newsworthy and unacceptable uses of the footage.

But Facebook, Google, and Twitter aren’t the only places weighing how to handle violent extremism. And traditional moderation doesn’t affect the smaller sites where people are still either promoting the video or praising the shooter. In some ways, these sites pose a tougher problem — and their fate cuts much closer to fundamental questions about how to police the web. After all, for years, people have lauded the internet’s ability to connect people, share information, and route around censorship. With the Christchurch shooting, we’re seeing that phenomenon at its darkest.

New Zealand blocked sites spreading hate material after the shooting

The Christchurch shooter streamed video live on Facebook and posted it on other platforms, but his central hub was apparently 8chan, the image board community whose members frequently promote far-right extremism. 8chan had already been booted from Google’s Search listings and kicked off of at least one hosting service over problems with child pornography. (8chan’s owner claims the site “vigorously” deletes child porn.) After the shooting, some users posted comments speculating that the site would be taken down. Forbes later raised the question of somehow shuttering 8chan, and in New Zealand, internet service providers actually did block it and a handful of other sites.
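For context, ISP-level blocks like New Zealand’s are often implemented at the DNS layer: the provider’s resolvers simply refuse to answer for domains on a blocklist, so subscribers’ browsers never learn the sites’ addresses. Here’s a minimal sketch of that idea in Python; the domain and blocklist are hypothetical, and real deployments vary (some block at the IP or HTTP level instead).

```python
# A minimal sketch of DNS-level blocking, one common way ISPs implement
# country-wide blocks. The domain and blocklist are hypothetical; this is
# not any particular ISP's implementation.
import socket
from typing import Optional

BLOCKED_DOMAINS = {"blocked-example.invalid"}  # hypothetical blocklist

def resolve(domain: str) -> Optional[str]:
    """Return the domain's IP address, or None if the resolver blocks it."""
    if domain.rstrip(".").lower() in BLOCKED_DOMAINS:
        # A real blocking resolver would answer NXDOMAIN or point to a
        # block page, so the subscriber's browser never reaches the site.
        return None
    return socket.gethostbyname(domain)  # ordinary resolution otherwise

print(resolve("blocked-example.invalid"))  # -> None
```

Blocks like this are also shallow by design: anyone who switches to a third-party DNS resolver or a VPN routes right around them, which is part of why they’re usually treated as a stopgap rather than a solution.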

The past couple of years have seen a wave of deplatforming for far-right sites, with payment processors, domain registrars, hosting companies, and other infrastructure providers withdrawing support. This practice has scuttled crowdfunding sites like Hatreon and MakerSupport, and it’s temporarily knocked the social network Gab and white supremacist blog The Daily Stormer offline.

Companies that aren’t traditional social networks still have systems for scrubbing objectionable content. One user on 8chan’s subreddit pointed readers toward a Dropbox link with the video, but a Dropbox spokesperson told The Verge that it’s deleting these videos as they’re posted, using a scanning system similar to the one it uses to detect copyrighted work.
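Dropbox didn’t detail how that scanner works, but matching systems of this kind generally fingerprint each uploaded file and compare it against a list of known-bad fingerprints. Below is a minimal sketch of the exact-hash version of that idea; the blocklist value is a placeholder, and this is not Dropbox’s actual implementation.

```python
# Minimal sketch of hash-based content scanning; not Dropbox's actual system.
import hashlib

# SHA-256 digests of files moderators have already flagged (placeholder value).
BLOCKED_SHA256 = {"aa" * 32}

def is_blocked(path: str) -> bool:
    """Return True if the file's SHA-256 digest matches a known-bad hash."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # hash in 1 MB chunks
            digest.update(chunk)
    return digest.hexdigest() in BLOCKED_SHA256
```

Exact digests are easy to evade by re-encoding a video, which is why deployed scanners usually rely on perceptual fingerprints that survive small changes; the lookup step, though, looks much the same.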

Decentralization has been a key element of the internet

It’s hard to take a site down permanently, though, thanks to the plethora of companies providing these services — an element of the open web that’s generally considered a good thing for the ways it removes traditional gatekeepers. The Daily Stormer quietly came back online after several bans, and Gab received very public support from a Seattle-based domain registrar. There are also decentralized protocols designed specifically to keep content online. As of this afternoon, the troll haven Kiwi Farms was linking to a BitTorrent file of the video — something that doesn’t require hosting on any kind of central platform.
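That resilience is a consequence of how the protocol names content. A torrent is identified by its “infohash,” a cryptographic digest of the file’s metadata, so a magnet link points at the content itself rather than at any server, and the file stays available as long as any peer keeps seeding it. A minimal sketch, assuming a BitTorrent v1 torrent whose bencoded info dictionary you already have:

```python
# Sketch: why a magnet link has no central host to take down. Assumes you
# already hold the torrent's bencoded "info" dictionary (BitTorrent v1).
import hashlib
from urllib.parse import quote

def magnet_from_info(bencoded_info: bytes, name: str) -> str:
    """Build a magnet URI. The infohash is just SHA-1 of the info dict,
    so the link identifies content, not a server."""
    infohash = hashlib.sha1(bencoded_info).hexdigest()
    return f"magnet:?xt=urn:btih:{infohash}&dn={quote(name)}"

# Usage (illustrative bytes, not a real torrent's metadata):
# magnet_from_info(b"d4:name7:examplee", "example")
```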

Infrastructure companies can be more reluctant to get involved with content policing than Facebook or Twitter. Cloudflare, which helps protect sites against denial-of-service attacks, has explicitly taken a hands-off approach. “We view ourselves as an infrastructure company on the internet. We are not a content company. We don’t run a platform or create content or suggest it or moderate it. And so we largely view our point of view as one driven by neutrality,” says Cloudflare general counsel Douglas Kramer.

Kramer compares Cloudflare policing content to a truck driver making editorial decisions about what a newspaper prints before transporting it. Cloudflare complies with court orders and won’t deal with companies on official sanctions lists. In one high-profile incident, the company also banned The Daily Stormer for suggesting that Cloudflare had endorsed its white supremacist ideology and for harassing critics who filed abuse reports. “They were pretty unique in their behavior,” Kramer adds, and the company hasn’t dealt with a similar case since then.

In a breakdown of its policies published last month, however, Cloudflare urged countries to develop mechanisms for fighting “problematic” material online — arguing that despite concerns about preserving freedom of speech and due process, governments have a kind of legitimacy that web platforms making unilateral decisions do not.

Would legal changes seem more legitimate than company decisions?

Even without new laws, we could potentially see country-wide blocks on 8chan and similar sites. But that would be an extreme measure, handing either ISPs or governments a huge amount of power over the internet. (In the US, Verizon did briefly block 8chan’s predecessor 4chan in 2010, but the move was reportedly a response to a network attack, not to 4chan’s content.) There’s a big gap between forcing somebody off Twitter or Facebook to start their own website and exerting control over everything that can be seen or posted on the web.

And a site like 8chan could be largely quarantined if bigger social networks scrubbed links to it and suspended its official accounts, making it harder to pick up traffic boosts like the one from game studio THQ Nordic, which promoted an 8chan AMA on its Twitter account last month. That would be controversial, but far less so than trying to pull a site or a piece of information offline for good, especially without some hard conversations about how we want the internet to work.