
It’s not just Logan Paul and YouTube — the moral compass of social media is broken

The ethics of online speech are contextual, and it’s time to act like it

We’re only a few days into the new year, but the latest viral embarrassment has already hit YouTube, as yet another popular, telegenic young man posted something reckless and offensive on the video-sharing platform. This time around, YouTube star Logan Paul shared a video in which he discovered, and awkwardly laughed at, the corpse of a suicide victim in Japan’s Aokigahara forest.

“Bro, did we just find a dead person in the suicide forest?” he says in the now-deleted video.

Although Paul subsequently issued apologies, the callous stunt was just the latest in a string of incidents where popular YouTubers have posted jaw-droppingly offensive, prejudiced, or unethical content that would never pass muster at a traditional outlet.

Their behavior is enabled by YouTube’s design as an effectively accountability-free platform, particularly for its most popular, envelope-pushing stars. There are rules and community guidelines about “disgusting” content and hate speech, of course, but they’re enforced haphazardly, often with little context or transparency, and can be easy to circumvent.

It’s a problem that extends beyond YouTube as a platform to streaming and social media at large, where large platforms tiptoe around the sensibilities of loud, angry users at the expense of anyone those users can sacrifice on their pyre of rage. It creates a situation where women, people of color, queer people, and disabled people all lack equal access to these services, laboring under the added burden of an angry mob scrutinizing their every move, even when they’re not “famous” by any metric.

The idealistic dream these services sell to users — that anyone can be famous with a mic, a keyboard, a webcam, and a bit of elbow grease — sounds like the culmination of early cyber utopianism. But in practice, it often means elevating people to fame when they are wildly unprepared for the ethical responsibilities or consequences of broadcasting their content to millions of fans (including children) around the world. As a principle, it means companies tie their own hands when dealing with edgelords who think Nazism is cool; there are only empty platitudes about free speech to be found in their wake.

The internet’s lawlessness came about as a feature, not a bug

The internet’s lawlessness came about as a feature, not a bug, premised on a libertarian ideal of self-direction unfettered by systems. Everything would be permitted, the thinking went, and cyber-society would simply balance itself out automatically without the need for oppressive governments or organizational rules. It hasn’t worked out that way, to put it lightly. The scope has also changed dramatically since the early days of the internet: the voices the internet amplifies are no longer niche or cordoned off from the “real” world. Social media celebrities can reach tens or hundreds of millions and sometimes have more impact on their viewers than television or film stars. They’re not just influential: for many people, particularly younger users, they are the media — and they can do pretty much whatever they want.

There are two sides to this, of course. One great boon of the internet, particularly for marginalized voices, is how it allows people to share content and ideas that might never make it past old-school gatekeepers and censors. At times, it can be refreshing and enlightening to see media and perspectives that don’t labor under stultifying FCC obscenity codes, to hear voices we might not have otherwise heard. But as we’ve seen over and over again for many years, this is an increasingly sharp double-edged sword.

The vile content that is amplified through digital megaphones is a reminder of why ethics and standards can be valuable, especially for the platforms that project the loudest voices in our culture. The solution, however, is not merely to clutch our pearls and demand that social media “think of the children,” but rather to implement clear, contextual codes of conduct with transparent enforcement tailored to the particulars of each case, including human oversight at every stage. That last part is important. Without it, we run the risk of employing automated systems that reproduce biases at light speed.

For a choice example, we need only look at a November incident involving the professional Twitch streamer aptly called Trainwrecks, who is a member of the platform’s Partner program. In a fit of entitled pique, he decided to stream a misogynistic rant about so-called “boobie streamers,” or female content creators on the platform who wear revealing clothing or inject elements of sexiness into their game streams. He described, in profane detail, his rage at the growing presence of these women in what he perceived as “his” community, and what he believed they were taking away from him:

“This used to be a goddamn community of gamers, nerds, kids that got bullied, kids that got fucked with, kids that resorted to the gaming world because the real world was too fucking hard, too shitty, too lonely, too sad and depressing…[Twitch now belongs to] the same sluts that rejected us, the same sluts that chose the god damn cool kids over us. The same sluts that are coming into our community, taking the money, taking the subs, the same way they did back in the day.”

It’s no secret that male-dominated online communities are often benighted by an invidious and sexist mythology, one that says their specific corner of internet culture — and particularly nerd culture — is their exclusive domain, a refuge from the real world with no room for the evil girls who rejected and bullied them in school. In this telling, the women who do make their way into these spaces are trespassers and thieves who are “taking” everything away again, using their sexy wiles to steal men’s rightfully earned status and money (via ad revenue and subscriptions). This is the mentality that rules social media in the absence of meaningful enforcement, the entitled anarchy that rushes into the Wild Western void.

This is the mentality that rules social media in the absence of meaningful enforcement

And like any gaping black hole, it’s never satisfied. “No matter what I fucking wear, there’s always a comment. There’s always someone calling me a titty streamer, fake gamer or a whore etc,” said female Twitch streamer ZombiUnicorn on Twitter. “If all the titty streamers were gone tomorrow, does anyone really think shitty people would stop degrading and insulting women?” tweeted streamer Renée Reynosa. “Truth is, they’d just find another hoop for us to jump through.” That’s the key here: women and other minorities face backlash no matter what they do, how they act, how they dress, or what they say. The solution, then, shouldn’t involve regulating the behavior of the people who suffer the most abuse online, or enabling the people who inflict it.

It’s a dilemma every social media platform confronts: cave to the angry fanbases of popular users who want unfettered license to do as they please? Or invest in more expensive, labor-intensive, context-based moderation techniques that uphold the principle that no one is too big to ban?

For the moment, Twitch’s approach to this problem has mostly been one of monastic silence, in its own way validating the entitled complaints of men like Trainwrecks — who was banned for just five days and remains an active streamer on the platform. YouTube also remains loath to take serious action, even as LGBT streamers have grappled with a year in which their videos were effectively hidden by the company in a misguided attempt to automate moderation.

The automoderation craze, once trumpeted as an elegant solution, is now part of the problem. These companies want the PR boost from appearing to “do something” while implementing faster, cheaper systems that can’t distinguish between a trans YouTuber talking about gender identity and a Nazi inciting violence. These systems are also notoriously exploited by corporations and harassers alike to get critical videos taken down or demonetized.

There is some promise in AI moderation — jerks on the internet aren’t the most original folks, after all; there are patterns — but it requires considerably more diverse human hands at the wheel. Twitch’s AutoMod system debuted with great fanfare, but it has made few strides in cleaning up hate in live chat. Without human insight, it cannot grapple with the ever-evolving nature of online hate, which dwells in double meanings, memes, and in-jokes. Community moderation is not obsolete. It’s a human skill that is needed now more than ever.

It’s tempting to take the easy way out — technologically, financially, and morally. Automoderation is simple and cost-effective. Catering to “both sides” gives the appearance of fairness. But this only compounds problems of access and platform equality, and caving in to the moral panics of a few angry users only serves as a distraction from the larger problems facing social media and streaming sites. It won’t in any way put an end to the embarrassing PR debacles that have consumed Twitter (which is now often criticized as a Nazi-friendly site) or YouTube (which has lurched from one failing to another, from PewDiePie to Paul to disturbing videos aimed at children).

All of these platforms have rules, moderators, enforcement teams, and even researchers dedicated to improving safety — I’ve met some of them — but it seems like every social media company, from Reddit to Twitch to Twitter, is still overwhelmed by the explosive scale achieved by their platforms and breathlessly trying to catch up by automating as much of the process as possible. That would be a mistake. Going back to basics and strengthening their core values around this issue with human assistance is the necessary first step.

It boils down to a basic question: what is the harm being perpetrated by a user’s actions?

In addition to clarifying their moderation policies, these platforms should also engage in a bit of moral education: make it clear, in fearless terms, why someone was suspended or banned, and what behaviors contributed to it. Just as critically, they need to recognize the importance of judging the impact of a streamer’s alleged misdeeds.

For platforms like YouTube and Twitch, and indeed social media in general, codes of conduct should boil down to a basic question: what is the harm being perpetrated by a user’s actions? For instance, while one group of streamers — the women of Twitch — stood accused of a fundamentally victimless crime, Trainwrecks and his misogynistic brethren espoused views and took actions with material repercussions for the women they targeted. The same goes for PewDiePie’s “joking” Nazism, including the “gag” where he paid Indian freelancers to hold up a sign reading “Death to All Jews.” (The freelancers later said they didn’t understand what the sign meant, and lost their jobs over it.) The ethics of online speech are contextual, and it’s time to act like it.

The playful universe of online streaming has much to recommend it. Subjecting it to the same sort of strict broadcasting codes devised when the radio was the must-have gadget of the season seems unwise and counterproductive. But platform holders have to stop treating their users like someone else’s wayward children and enforce some standards — especially where their most popular streamers are concerned. If YouTube wants to be the next broadcast network, it’ll have to act like it. Meanwhile, it should also resist the temptation to stifle the creativity and diversity of others just because a few loud, hateful people have called for their sanction.

This would be a solution in search of a problem. Worse, it would hand a victory to the very people whose poisonous attitudes are the real threat to these platforms — assuming sites like Twitch, Twitter, and YouTube want to be something more than toys for kids (large and small) who can’t be told “no.” If social media platforms want to make good on the promise of a digital democracy where traditional power structures don’t stifle us all, they will have to confront the ways in which their haphazard approach has built as many walls to speaking freely as it has torn down.