
Two months ago, the internet tried to banish Nazis. No one knows if it worked

Illustrations by Andrea Ucini

On August 11th and 12th, the Charlottesville Unite the Right rally marked a turning point in modern American politics, as far-right groups felt empowered to gather and openly support white supremacy. The event ended with the alleged murder of protester Heather Heyer.

The rally also marked a turning point for the internet. The Daily Stormer website, a central rallying point for the white nationalist “alt-right,” was scrubbed from multiple platforms after mocking Heyer’s death. In quick succession, tech companies that had long cultivated a reputation for neutrality moved to ban and condemn hate groups, even ones that had operated openly through their services for years.

But almost two months later, has this effort to scrub Nazis off the internet made a difference?


Plenty of platforms had banned neo-Nazis and white supremacists before Charlottesville, and even before the August rally, the far-right was losing ground online. Twitter suspended a number of alt-right accounts last November. This summer, Patreon shut down activist Lauren Southern, who had joined an effort to block Mediterranean refugee boats. GoFundMe closed campaigns from far-right celebrities Kyle Chapman and Tim “Baked Alaska” Gionet, citing a rule against “hate or intolerance.” PayPal banned Chapman as well, alongside other groups and figures associated with the alt-right. Even Reddit, which generally takes a hands-off stance, banned two alt-right subreddits in February.

Even before the August rally, the far-right was losing ground online

But in the wake of Unite the Right, parts of the far-right also lost access to basic, infrastructural internet services. A few days after the rally, the Daily Stormer was kicked off domain registrars in quick succession, including GoDaddy, Google, and Russia’s country-code registry, and in a far more unusual move, web services company Cloudflare cut it off from DDoS protection. Web hosting service Squarespace removed an unspecified number of white nationalist sites. Network Solutions, which had hosted white supremacist site Stormfront since 1994, pulled the plug. This came alongside more shutdowns from other platforms. Apple Pay joined PayPal in cutting off sites selling white nationalist merchandise. Uber and Airbnb said they would ban users from hate groups, and Spotify said it would remove a number of “hate rock” bands after Digital Music News spotted them on the service.

Two months later, some of the changes that took place in the days after Charlottesville are still evident. Two of the three sites that petitioners asked Squarespace to drop in August — Richard Spencer’s Radix Journal and National Policy Institute — now redirect to generic Squarespace “claim this domain” pages; the third, Identity Evropa, no longer includes Squarespace source code. (Squarespace still appears to host the Foundation for the Marketplace of Ideas, also identified as a hate site.) Spotify has completely removed at least 19 of the 37 alleged hate bands, and several more are either difficult to distinguish from other bands of the same name, or have only one or two songs on the service.

Several of the platforms I reached out to affirmed that they were still on alert for white supremacist content. A Spotify spokesperson said the service was “continuing to monitor for hate music and responding to users who flag tracks for us,” although he didn’t believe it had taken down a large volume of bands since the August crackdown.

“Obviously, there's a microscope on the alt-right and these organizations, but we're not tolerant of violence or perpetuating of that on either side.”

Airbnb spoke more generally, saying it would “seek to take appropriate action” against people who violated the Airbnb Community Commitment, based on background checks or flags from other users. Uber pointed back to its earlier commitment to act decisively against “discrimination of any kind.” Bumble, which announced a partnership with the Anti-Defamation League to detect and remove hate symbols, referred us to its previous statement. (GoDaddy, Stripe, and OkCupid, all of which banned some users and services, didn’t respond to an email requesting comment.)

Discord reported that it was getting more reports of terms of service violations since it explicitly addressed the issue of hate speech, which it said was largely because people better understood how to report content. A spokesperson said the service would “act quickly and aggressively” with any user or server that violated its rules, which include bans on harassment, threats, and calls to violence.

A PayPal spokesperson said that the company was following the same policy as always, which had “no gray area” for white supremacist or neo-Nazi organizations. But he confirmed that there’s been “increased public scrutiny” of possible hate groups using PayPal, if they aren’t already weeded out by its internal vetting process. He also said that this wasn’t limited to right-wing accounts. “Obviously, there's a microscope on the alt-right and these organizations, but we're not tolerant of violence or perpetuating of that on either side.”

Alt-right sympathizers have complained that anti-fascist or “antifa” groups that condone political violence aren’t held to the same standards. Anarchist site It’s Going Down, which regularly publishes anti-fascist material, has in fact been kicked off some of the same platforms; Patreon banned It’s Going Down in July, and PayPal followed in early August. An anarchist community founded its own alternative Reddit — something various alt-right figures have done — after being banned from the site.

But anti-fascists typically don’t court celebrity the way alt-right figures do, which means there are fewer high-profile users to ban. And where the alt-right repackages fringe politics as a respectable movement, antifa groups aren’t particularly invested in mainstream approval or traditional publicity tactics. Even if tech companies made a point of cracking down on them, there would simply be less to crack down on.


Meanwhile, some platforms have emerged more ambivalent about their new role as censors — especially Cloudflare, which described its own ban of the Daily Stormer as “dangerous.” After Charlottesville, Cloudflare called for a transparent process for fairly banning future equivalents of the Stormer. But it hasn’t yet resolved the tension between promoting free expression and stopping harmful activity. CEO Matthew Prince tells The Verge that the company has been having “really fruitful conversations,” but that there’s no consensus on how to establish such a process. “These are hard issues, and very, very, very smart people get wrapped up in knots on it,” he says.

“These are hard issues, and very, very, very smart people get wrapped up in knots on it.”

Critics have warned of companies instituting de facto web censorship, but this has been far from a universal de-platforming. GoDaddy still registers Richard Spencer’s AltRight.com, for example, despite banning the Daily Stormer. It’s relatively easy for banned groups to find a new home online, or to come back under a new name. When PayPal and Stripe banned the “alt-tech” crowdfunding site Rootbocks over hate-related campaigns, its founder created a near-clone of the site called GoyFundMe, which was approved for PayPal processing. (Processing was removed at some point after we emailed PayPal about the site.) Even Stormfront, which the SPLC has directly linked to nearly 100 hate-related killings, recently came back online through the registrar Tucows.

The Daily Stormer, which has had by far the most trouble staying online, has found limited refuge on country-specific domains. It spent a few weeks on Iceland’s .is, which Daily Stormer founder Andrew Anglin chose because of its permissive history, claiming that the ISNIC registry would require a “parliamentary decree” to revoke its registration. But an ISNIC spokesperson told The Verge that this isn’t correct; a court order or ISNIC board decision would also suffice, and its rules require registrants to operate “within the limits of Icelandic law.” Accordingly, Anglin said the domain was “on ice” last week, and the site moved to the embattled .cat domain, only to be kicked off shortly thereafter.


The possibility of these shutdowns has thrown a wrench into the loosely defined “alt-tech” movement, whose supporters have focused on the need for parallel apps and sites that cater particularly, though not exclusively, to the far-right. On one hand, the crackdowns have energized efforts to build alternative social networks and crowdfunding sites with looser moderation policies. On the other, the specter of losing support from infrastructure providers has cast a pall over promises of “censorship-free” service.

Alt-tech services include alternatives to Reddit (Voat), Patreon (Hatreon), Twitter (Gab), GoFundMe (GoyFundMe), and YouTube (BitChute), and nearly all promote “free speech” as a core reason for their existence — in contrast, creators say, with the restrictive rules of mainstream platforms. While no major alt-tech sites were knocked entirely offline after Unite the Right, enough have had trouble with payment processors and registrars to make users nervous.

Gab suspended Andrew “weev” Auernheimer, who had encouraged a modern-day Oklahoma City Bombing to end censorship

The most obvious example of this has been social network Gab, which has faced an internal moderation crisis even as it’s gotten a publicity boost from the post-Charlottesville crackdowns. While Gab has promised “free speech within the legal limitations of the law,” it raised users’ hackles by removing a joke from Anglin under pressure from its domain registrar, Asia Registry. A few weeks later, Gab suspended hacker and Daily Stormer writer Andrew “weev” Auernheimer, who had encouraged a modern-day Oklahoma City Bombing to end censorship and teach Jews “a lesson.” In the same post, it announced that Asia Registry was banning it for violating company hate speech policy and Australian anti-discrimination law. Around the same time, Gab faced legal trouble for the exact opposite reason: far-right science fiction author Vox Day sued it over content he claimed was defamatory.

Gab defended its takedowns as consistent with the site’s terms of service, rather than a capitulation to pressure. CEO Andrew Torba said that Anglin had failed to appropriately tag his post as “not safe for work,” and that Auernheimer violated Gab’s rules against threats and terrorism. (Gab even sent news of the ban to reporters, saying that a “major far-right user” had been suspended.) But other sites have more openly admitted that they’re compromising principles to stay online. YouTube alternative BitChute justified taking down another Daily Stormer writer’s “satirical” call for race war by saying it could make domain registrars deny service. (BitChute disputes this characterization, telling The Verge that the decision followed the letter of its terms of service.) GoyFundMe explicitly reserves the right to remove any project that might threaten the site’s relationship with payment processors.

“WECANN,” an ambitious multi-pronged project that includes a registrar, app store, and advocacy group, is still a one-page website

These are the two biggest weak points for alt-tech platforms, and so far, nobody’s released a full-fledged ecosystem for internet separatists. There’s a proposal for a “definitive free speech registrar” named Zyniker Domain Services, which is currently seeking donations, and a blockchain-based naming system called BNS that would resolve both official and decentralized domain names. “WECANN,” an ambitious multi-pronged project that includes a registrar, app store, and advocacy group, is still a one-page website. Sites can take donations through cryptocurrency, but exchange companies like Coinbase can still refuse service — a move that effectively shut down Rootbocks’ crowdfunding efforts.

Gab, meanwhile, has taken the opposite tack and legally appealed its marginalization. After Google banned Gab’s Android app from the Play Store in the days after Charlottesville, Gab sued Google for violating antitrust rules, alleging that Google felt threatened by its growth. The filing claims Google intentionally blocked Gab’s app to prevent it from competing with Google+ or with Twitter, with which Google signed a search results deal in 2015, and positions Gab as an essentially mainstream platform. But Gab is still looking at alternatives to traditional tech infrastructure, including, as Torba recently stated, decentralized payment processing and “democratic enforcement” of norms.

After losing Asia Registry, Gab quickly found a new domain registration provider, although its identity is being kept secret. But some alt-tech projects’ progress seems to have been dramatically slowed by Charlottesville. Former Business Insider CTO Pax Dickinson pushed back the launch of Counter.Fund, a combination crowdfunding platform and political party, because it was “abundantly clear” that it would not be able to find a steady infrastructure provider. While the service could launch as an underground platform, “financially speaking it’s a disaster,” he wrote. “@CounterFund was already a long-shot venture investment-wise. There's no money for this.”


But the larger question still stands: have companies’ and platforms’ efforts to scrub themselves clean of alt-right users actually been effective? Does making hate harder to access measurably reduce its spread, or minimize damage to its targets? And even putting aside free speech issues, is it really possible to create an internet without Nazis?

A recent study suggests that, to some extent, targeting radical users on a single platform can help control their behavior. Researchers at Georgia Institute of Technology, Emory University, and the University of Michigan recently published an analysis of two banned Reddit forums: FatPeopleHate and CoonTown, one of the “Chimpire” subreddits. They found that after the shutdown, users of both subreddits who stayed on Reddit dramatically reduced their use of hateful terms in the forums they migrated to, rather than simply treating them as a new outlet. The research is early and broad, but as lead author Eshwar Chandrasekharan told The New York Times, “Banning places where people congregate to engage in certain behaviors makes it harder for them to do so.”

“Banning places where people congregate to engage in certain behaviors makes it harder for them to do so.”

That said, the study didn’t track how many people may have continued hateful behavior on other sites like Voat, which took off in direct response to those Reddit bans. But users can no longer use those platforms to propel hateful posts into Reddit’s massive community, where they could pick up new visitors. Reddit CEO Steve Huffman claimed at one point that allowing hate subreddits provokes valuable debate, but there’s not much evidence that non-hateful Reddit users were showing up and changing minds on hate subreddits. Even on a mainstream platform, hate groups operate in a bubble.

Shutting down a site or user guarantees a measure of publicity; both Torba and Anglin have bragged that their respective platforms were boosted by being banned. But beyond news coverage from sites that are largely critical, the odds of randomly stumbling across an actual Daily Stormer article are low. The Icelandic domain’s Alexa ranking was around 136,000 globally, and the Catalan one is at 368,000, compared to the old domain’s rank in the 13,000 range as of April 2017. Around 8.3 percent of visitors came straight from YouTube, while Google and Gab each directed 7.5 percent to the site.

Jonathan Greenblatt, CEO of the Anti-Defamation League, believes platforms are sending a meaningful message to hate groups. “You've got a powerful statement by public companies about what's acceptable and what's not acceptable,” he says. “They're demonstrating that they're going to double down on this idea that their products and their services shouldn't be exploited and manipulated by these people.”

However, both Greenblatt and Keegan Hankes of the Southern Poverty Law Center caution that not much time has passed since Charlottesville. “It's still almost too early to tell” the long-term effects of this crackdown, says Hankes, because “the hate groups themselves are still picking up the pieces. I don't think they've quite figured out what it is they're going to do, and how they're going to respond.”

It’s also too early to say whether companies have definitively changed their moderation policies, or whether we’ve seen all the ways hate speech can manifest online. Last month, ProPublica found that Facebook inadvertently let advertisers target categories that included “jew hater” or “how to burn jews,” automatically generated by an algorithm based on user interests. Facebook restricted the categories, but Google and Twitter turned out to have similar options, letting people run ads against specific hateful phrases or against people “likely” to engage with racial slurs. Both companies later apologized and said they had removed the offending options.

Greenblatt, however, still sees this new crisis as a sign of progress. “People were able to identify this kind of [content] in their ad platforms, and they immediately reacted,” he says of Google, Facebook, and Twitter. “They didn't say, ‘This is okay.’ They didn't say, ‘This is part of the price we pay for the First Amendment.’ They said, ‘This is inappropriate on our platforms,’ and they dealt with it right away. I don’t know if they would have done that a few years ago.”

Update December 6th: Added statement from BitChute.