Discord’s new policy will ban harmful medical advice, taking aim at anti-vaxx groups


Illustration by Alex Castro / The Verge

Discord is overhauling its policies and community guidelines to tackle health misinformation, off-platform behavior, and hate speech, the company announced on Friday. It’s the first big policy update for Discord in nearly two years and is designed to target groups or individuals that participate in organized violence, spread harmful anti-vaccination material, or harass other Discord users with hate speech.

After two years of a pandemic, Discord is clarifying its policies on health misinformation. Under new community guidelines, which go into effect on March 28th, Discord says users “may not share false or misleading information on Discord that is likely to cause physical or societal harm.” This includes content that could damage physical infrastructure, injure others, or endanger public health.

The change is clearly targeted at anti-vaxxers who post and promote severely misleading health information — but it doesn’t mean all anti-vaxxers will be removed from the platform.

“If someone posts on Discord ‘drink four ounces of bleach and your body will be rid of coronavirus,’ that’s actionable,” explains Clint Smith, chief legal officer at Discord, in an interview with The Verge. “Medical consensus does not support that as a treatment for COVID, and there’s a high likelihood of harm for anyone that follows that advice.”

Not all anti-vaccination content will disappear

However, Smith says if someone posts about holding “crystals against your chest for 5 minutes and your lung capacity will improve,” in the context of the pandemic, that’s not something Discord is going to take action against. “Crystals and the healing power of crystals is not supported by medical consensus, but there’s also a very low risk of harm to our users, so that statement about crystals, we’re not going to action it,” says Smith.

In deciding when to take action, Discord says it will weigh the context of a message, the likelihood of direct harm, the poster’s intent to convince or deceive others, and whether a group or poster has a history of repeated violations of Discord’s policies. Users can continue to report this type of content by right-clicking on any message in the app and hitting the report button.

Similarly, Discord will also penalize users for harmful behavior outside the platform. Relevant behavior includes belonging to violent organizations, making credible threats of violence against people, organizations, events, or locations, and participating in groups that sexualize children in any way.

Discord is limiting off-platform behavior to the “highest harm categories” of organized violence and sexualizing minors. “If a Discord user is charged with drug possession... or implicated in cheating around a school exam, those aren’t the highest harm off-platform behaviors that we’re going to take into account,” explains Smith. “We’re going to focus on the off-platform behaviors that have the highest harm potential for our users and someone’s Discord experience.”

A new public server tag in the top-left of Discord servers. Image: Discord

Research published by the Institute for Strategic Dialogue last year revealed that far-right groups have exploded on platforms like Discord and Steam. These groups can organize real-world violence or participate in harassment and raiding on Discord servers. Discord has been shutting these types of groups down for years, but they’re persistent. New groups appear regularly, and even social protest groups can organize on platforms like Discord or Facebook and then, eventually, cause real-world violence. Discord is throwing data science and machine learning at the problem, but the company is relying heavily on outside experts and trusted reporters to flag extremist content.

“This is an area where we’re investing heavily in building relationships with experts and trusted reporters,” explains Smith. “There are people outside of Discord who are truly expert at what groups are promoting violence and hate, and they can alert us to the types of groups that we should be banning on Discord.”

Smith says the company has consulted with academics, groups like the Anti-Defamation League and the Southern Poverty Law Center, and journalists from trusted outlets. Discord also pays consultants for specific expertise that its own staff doesn’t have.

Discord takes a decentralized approach to moderation, setting it apart from platforms like Facebook. Instead of proactively scanning for content violations, it relies on individual servers to moderate themselves and escalate content to the company’s moderators when necessary. I always liken Discord to my local pub: it plays host to a wide range of groups, and we wouldn’t expect the landlord to police every one of their conversations, but we would expect them to ban groups based on reports. Discord’s model makes it more difficult for misinformation to spread broadly across the service, but it’s ideal for groups that want to organize in private, without any obvious oversight, and gather in numbers that just aren’t possible offline.

Discord has been trying to remove bad actors for years

Discord will also soon highlight public servers with a new tag in the client, to make the distinction between private and public servers a little clearer. The company is clearly trying to straddle the line between platform management and privacy, too.

“You could imagine that we’ll do more machine learning from a safety perspective on conversations that are happening in large public communities, but we won’t be doing that on private DMs and private servers,” says Smith. “If there’s a high expectation of privacy in a small group DM, we intend to honor that expectation of privacy.”

Discord is also targeting hate speech by expanding its list of protected attributes to include caste, gender identity, age, serious illness, and more. The terms of service, privacy policy, and community guidelines now use more plain English in place of legal language, which is often difficult for users to comprehend. Terms like “NSFW” have also been dropped in favor of “age-restricted,” to make things clearer for Discord users in countries where such terms aren’t widely used.

Most of these key changes stem from how much the world has changed over the past few years and how Discord has evolved to target a much broader market. Discord has been raising cash in an ambitious effort to move beyond gaming, and the service continues to attract a diverse mix of communities as a result.

“In 2020 we were a band of Californians serving a gaming-oriented audience,” jokes Smith. “In 2022, we have employees across the US, in Toronto, London, and Amsterdam, and we have a truly global and broad audience that goes well beyond gaming.”