Researchers propose a new approach for dismantling online hate networks

We’re developing a better understanding of how they work

How do you get rid of hate speech on social platforms? Until now, companies have generally tried two approaches. One is to ban individual users who are caught posting abuse; the other is to ban the large pages and groups where people who practice hate speech organize and promote their noxious views.

But what if this approach is counterproductive? That’s the argument in an intriguing new paper out today in Nature from Neil Johnson, a professor of physics at George Washington University, and researchers at GW and the University of Miami. The paper, “Hidden resilience and adaptive dynamics of the global online hate ecology,” explores how hate groups organize on Facebook and Russian social network VKontakte — and how they resurrect themselves after platforms ban them.

As Noemi Derzsy writes in her summary in Nature:

Johnson et al. show that online hate groups are organized in highly resilient clusters. The users in these clusters are not geographically localized, but are globally interconnected by ‘highways’ that facilitate the spread of online hate across different countries, continents and languages. When these clusters are attacked — for example, when hate groups are removed by social-media platform administrators (Fig. 1) — the clusters rapidly rewire and repair themselves, and strong bonds are made between clusters, formed by users shared between them, analogous to covalent chemical bonds. In some cases, two or more small clusters can even merge to form a large cluster, in a process the authors liken to the fusion of two atomic nuclei. Using their mathematical model, the authors demonstrated that banning hate content on a single platform aggravates online hate ecosystems and promotes the creation of clusters that are not detectable by platform policing (which the authors call ‘dark pools’), where hate content can thrive unchecked.
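
To make the cluster-and-highway structure Derzsy describes a little more concrete, here is a minimal toy sketch (my own illustration in Python with networkx, not the authors’ model or data) that detects clusters in a synthetic graph and counts the sparse bridging edges between them. The graph, its parameters, and the community-detection method are all assumptions chosen for illustration.

```python
# Toy illustration (not the paper's code or data): detect clusters in a
# synthetic interaction graph and count the sparse "highway" edges that
# bridge them. Graph size and probabilities are invented for illustration.
import networkx as nx
from networkx.algorithms import community

# Synthetic stand-in for an online hate ecology: dense clusters, few bridges.
graph = nx.planted_partition_graph(l=5, k=40, p_in=0.25, p_out=0.002, seed=1)

# Group users into clusters via a standard community-detection heuristic.
clusters = list(community.greedy_modularity_communities(graph))
membership = {node: i for i, c in enumerate(clusters) for node in c}

# "Highways": edges whose endpoints sit in different clusters.
highways = [(u, v) for u, v in graph.edges() if membership[u] != membership[v]]

print(f"{len(clusters)} clusters, {graph.number_of_edges()} edges total, "
      f"{len(highways)} bridging 'highway' edges")
```

The handful of bridging edges relative to the total is the structural point: clusters are dense internally but connected globally by only a few links, which is what lets hate content hop across countries, continents, and languages.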

In the paper, the authors offer a choice metaphor for platforms’ current approach:

These two approaches are equivalent to attempts to try to understand how water boils by looking for a bad particle in a sea of billions (even though there is not one for phase transitions), or the macroscopic viewpoint that the entire system is to blame (akin to thermodynamics). Yet, the correct science behind extended physical phenomena lies at the mesoscale in the self-organized cluster dynamics of the developing correlations, with the same thought to be true for many social science settings.

So what to do instead? The researchers advocate a four-step approach to reduce the influence of hate networks.

  1. Identify smaller, more isolated clusters of hate speech and ban those users instead.
  2. Instead of wiping out entire small clusters, ban small samples from each cluster at random. This would theoretically weaken the cluster over time without inflaming the entire hive. (A rough sketch of how steps 1 and 2 might be implemented appears after this list.)
  3. Recruit users opposed to hate speech to engage with members of the larger hate clusters directly. (The authors explain: “In our data, some white supremacists call for a unified Europe under a Hitler-like regime, and others oppose a united Europe. Similar in-fighting exists between hate-clusters of the KKK movement. Adding a third population in a pre-engineered format then allows the hate-cluster extinction time to be manipulated globally.”)
  4. Identify hate groups with competing views and pit them against one another, in an effort to sow doubt in the minds of participants.
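
For a rough sense of what the first two steps might look like computationally, here is a minimal sketch, again in Python with networkx. The synthetic graph, the clustering heuristic, and the 20 percent sampling fraction are all my assumptions, not anything specified in the paper.

```python
# Rough sketch of steps 1 and 2 (my assumptions, not the authors' algorithm):
# identify clusters, then ban a small random sample from each of the smaller
# clusters rather than deleting the largest cluster outright.
import random

import networkx as nx
from networkx.algorithms import community

random.seed(0)

# Synthetic interaction graph standing in for real platform data.
graph = nx.planted_partition_graph(l=6, k=30, p_in=0.3, p_out=0.005, seed=0)

# Step 1: identify clusters and sort them from smallest to largest.
clusters = sorted(community.greedy_modularity_communities(graph), key=len)

# Step 2: ban a random sample (here 20%) of members from each smaller
# cluster, leaving the largest cluster untouched for now.
sample_fraction = 0.2
banned = []
for cluster in clusters[:-1]:
    members = list(cluster)
    banned.extend(random.sample(members, max(1, int(sample_fraction * len(members)))))

graph.remove_nodes_from(banned)
largest = max(nx.connected_components(graph), key=len)
print(f"banned {len(banned)} users; largest remaining component has {len(largest)} members")
```

Repeated over time, the idea is that this kind of sampling weakens each cluster without inflaming the entire hive, per step 2 above.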

I find these strategies fascinating, even if I wonder how pragmatic they are. (The latter two, in particular, are basically diametrically opposed to existing platform policy.) The first point, which the authors document with a forbidding amount of math, strikes me as the most persuasive. (Another researcher, Natalie Bucklin, recently posted a similar analysis of Twitter’s hate clusters that also found large groups connected by a few big nodes, just as Johnson’s team did.)

Johnson’s team is currently developing software that it hopes will aid regulators and software platforms as they consider new interventions into hate speech.

“The analogy is no matter how much weed killer you place in a yard, the problem will come back, potentially more aggressively,” Johnson said in a press release accompanying the paper. “In the online world, all yards in the neighborhood are interconnected in a highly complex way — almost like wormholes. This is why individual social media platforms like Facebook need new analysis such as ours to figure out new approaches to push them ahead of the curve.”

Still, I can think of many potential problems with the approach identified in the paper. Let’s take them in the order they are proposed.

  1. Removing small clusters of bad actors might be a more productive approach in the long run. But how would platforms justify letting large clusters continue to operate in the interim? If platforms go after small fry while allowing large hate groups to thrive, they’ll be pilloried by users, regulators, and their own employees — and not unfairly.
  2. Banning bad actors at random is anathema to current platform policies, which attempt to develop clear, scalable standards they can apply universally. The researchers are asking platforms to create a less fair system today in the hopes that it will create a safer system tomorrow — but there will be many valid howls of injustice in the meantime.
  3. With some notable exceptions, it’s not clear to me we understand how to productively engage extremist users on our current social platforms. Recruiting volunteers to do it feels worthy of experimentation — see Charlie Warzel this week on one such effort — but it’s not something I’d bet the platform on today.
  4. Promoting infighting among extremist groups, sapping them of energy they could otherwise use to radicalize new users, seems useful in theory. But where could platforms justifiably allow an exchange of rival schools of hate-speech thinking? That would go against the laudable platform principle of removing hate speech as soon as it’s identified, and I don’t know how you build around that.

As always, it’s trade-offs all the way down.

You may have other ideas about the approaches outlined in this paper, and I’m eager to hear them if you do (casey@theverge.com). But if nothing else, we do seem to be developing a stronger sense of how hate networks propagate and endure, even after their leaders are deplatformed. That’s encouraging news. And with the authorities having arrested 27 people over threats to commit mass murder since the El Paso and Dayton shootings alone, insights like these can’t come quickly enough.

Democracy

⭐ Twitter employees coached Chinese government officials and agencies on how to use the service. Earlier this week, Twitter kicked state-controlled media off its advertising platform. And now this, as Shelly Banjo and Sarah Frier report in Bloomberg:

What Twitter didn’t mention in its series of blog posts this week was the increasing number of Chinese officials, diplomats, media, and government agencies using the social media service to push Beijing’s political agenda abroad. Twitter employees actually help some of these people get their messages across, a practice that hasn’t been previously reported. The company provides certain officials with support, like verifying their accounts and training them on how to amplify messages, including with the use of hashtags.

This is despite a ban on Twitter in China, which means most people on the mainland can’t use the service or see opposing views from abroad. Still, in the last few days, an account belonging to the Chinese ambassador to Panama took to Twitter to share videos painting Hong Kong protesters as vigilantes. He also responded to Panamanian users’ tweets about the demonstrations, which began in opposition to a bill allowing extraditions to China.

Twitter also apparently misidentified a 24-year-old student as a Chinese propagandist. (Dave Lee / BBC)

How an American nightclub’s Twitter account was taken over and turned into a Chinese propaganda organ. (Donie O’Sullivan / CNN)

Facebook is still running ads from Chinese state media that attempt to put a happy face on China’s concentration camps for Muslims. (Ryan Mac / BuzzFeed)

The Justice Department’s antitrust chief says the DOJ is working with state attorneys general on the case. (Makena Kelly / The Verge)

The “study” conservatives keep citing to promote the false idea that Google intentionally manipulated votes, debunked. (April Glaser / Slate)

App developers are raising antitrust concerns about some proposed pro-privacy changes from Apple. (Reed Albergotti and Craig Timberg / Washington Post)

Democratic presidential candidate Michael Bennet stops by The Vergecast to talk about his new book, which explores “how Russia hacked social media and democracy.”

You had me at 21.6 million fake LinkedIn accounts. (Nat Levy / GeekWire)

Elsewhere

⭐ A frankly unbelievable number of aging politicians and celebrities fell for an Instagram copypasta hoax. Ashley Carman reports:

Famous actors and musicians, the head of the US Department of Energy, and regular Instagram users have been spreading a hoax memo that claims the company will soon have permission to make deleted photos and messages public and use those posts against them in court.

The claims are fake and the assertions don’t make a lot of sense, but that hasn’t stopped it from being spread by some major names concerned about the implications. Celebrities including Usher, Judd Apatow, and Julia Roberts posted the note to their feeds, as did Rick Perry, the current United States secretary of energy and former Texas governor. The note and similar ones have been going around since 2012, and this is just their most recent resurgence.

Elsewhere on Instagram, popular creators are using a phone case whose slogan reads “Social media seriously harms your mental health.” (Ashley Carman / The Verge)

YouTube alternatives are popping up as influencers brace for major advertising changes related to the ongoing FTC investigation. (Mark Bergen / Bloomberg)

The Tampa Bay Times talks to content moderators at Facebook’s Tampa site, who tell reporters that conditions have worsened since I published my investigation about the site in June. “Every day we find a new way to threaten them or make them feel like we’re going to fire them,” a team leader tells Kavitha Surana and Dan Sullivan.

Fake releases and phony artists are shooting up the streaming charts. (Noah Yoo / Pitchfork)

And finally ...

YouTube bans robot fighting videos for animal cruelty roughly 10 years too soon

We tend to give platforms a lot of grief around here for not being proactive in their content moderation decisions. So kudos to YouTube for being way ahead of the curve here. Jay Peters reports:

Google, which believes in AI so much it rebranded its Google Research division as Google AI, has begun to side with the robots. On Monday, it was reported that Google’s YouTube took down videos of robots fighting each other (think: BattleBots), saying they violated policies against showing displays of animal cruelty.

When our future robot overlords decide to spare our lives, we’ll have YouTube to thank.

Talk to me

Send me tips, comments, questions, and theories about the spread of extremism online: casey@theverge.com.
