Why is YouTube being accused of censoring vloggers?

Nothing has changed, but everything is terrible

A change in YouTube’s content moderation system has left many video creators uncertain of their place on the platform. Over the past day, users have been posting notices from Google saying that certain videos had been barred from making money via YouTube’s ad service. More troublingly, the videos were often flagged for reasons that seemed unfair, unclear, or outright censorious. But statements from YouTube suggest that the real problem isn’t a new policy: it’s a long-running conflict between the platform’s stated rules and their opaque, clumsy execution.

The flagging started drawing widespread attention after YouTube host Philip DeFranco posted a video called "YouTube Is Shutting Down My Channel and I'm Not Sure What To Do." While the title was hyperbole, DeFranco said on Twitter that over a dozen of his videos had been flagged as inappropriate for advertising, including one dinged for "graphic content or excessive strong language." While acknowledging that YouTube was within its rights to make the call, he described the system as "censorship with a different name," because "if you do this on the regular, and you have no advertising, it’s not sustainable."

"If you do this on the regular, and you have no advertising, it’s not sustainable."

Many other YouTubers had posted their own notices, which covered everything from LGBT history videos to skincare tutorials. Sometimes, they were simply bemusing. Sometimes, they seemed to make serious political discussion or reporting non-monetizable. One notice, sent to the popular Vlogbrothers channel, covered both "Vegetables that look like penises" and "Zaatari: thoughts from a refugee camp." But a YouTube representative soon responded to DeFranco on Twitter. "No policy change here," they wrote, "just an improved notification process to ensure creators can appeal."

From what we can tell, that’s right. Long before today, YouTube’s FAQ included a set of best practices for "advertiser-friendly content." Ad-friendly content is "appropriate for all audiences," and "has little to no inappropriate or mature content," according to both current guidelines and an archived page from 2015. But the range of inappropriate content is broad. It includes profanity, "promotion of drugs," violence or "display of serious injury," and sexually suggestive material. It also includes "controversial or sensitive subjects and events, including subjects related to war, political conflicts, natural disasters and tragedies." Until now, though, the policy appears to have been mostly ignored, possibly because many people didn’t even know it existed.

YouTube gave a longer explanation to Kotaku, which covered the saga in detail earlier today. A representative said that the change was supposed to make sure creators knew when a video wasn’t ad-friendly, and offer them an appeals process that could get it reinstated. "The idea was to make this information accessible more easily, which is why some YouTubers felt that they ‘suddenly’ got a bunch of flagged videos at once," wrote Kotaku’s Patricia Hernandez.

A forum post from this afternoon is even more explicit. "We did not change our policy of demonetizing videos that may not be appropriate for Google’s brand advertisers," it reads. "Nor have we changed how these policies are enforced." A YouTube spokesperson told The Verge that the flags had previously been listed in the analytics, but that a new, prominent icon in the video manager makes them easier to see. And while email notifications were only sent for relatively recent videos, others had probably already been demonetized for some time. Ironically, the emails were meant to make it easier to get ads reinstated, not to signal a crackdown.

"We understand that high-quality content isn't always sanitized."

As Kotaku also notes, the rules leave a lot of leeway for advertising on supposedly inappropriate content. A video can still be advertiser-friendly if "there may be inappropriate content, [but] the context is usually newsworthy or comedic where the creator’s intent is to inform or entertain, and not offend or shock." One section explicitly addresses concerns that it’s suppressing newsworthy topics. "We understand that high-quality content isn't always sanitized, especially when it comes to real-world issues," it reads. "If your video has graphic material in it, you can help make it advertiser-friendly by providing context." And videos do seem to be getting reinstated: Vlogbrothers brother Hank Green said that monetization was brought back to the refugee video as soon as they complained.

But the system still raises the same concerns as YouTube’s copyright infringement apparatus, which has been criticized as imprecise, overzealous, and Kafkaesque. In a world where YouTube employees could evaluate every video individually, a policy that nixed advertising on videos that deliberately "offend or shock" would be understandable, if not uncontroversial. In reality, the platform relies on both community flagging and automated filters that evaluate a video’s tags, titles, and visual imagery.

This suggests that in many cases, YouTube will end up putting an algorithm in charge of separating exploitation videos from harmless, or even deeply thoughtful, treatments of difficult topics. If the system errs on the side of caution, stripping monetization first and making users appeal to get it back, it creates a chilling effect for people who are doing things YouTube explicitly allows and supports but who get caught in the dragnet anyway. Combined with the very real power imbalance between YouTube and its users, it’s a recipe for paranoia and assumptions of bad faith, especially during a larger conversation about how web platforms should police their users.

On top of everything else, the controversy gives ammunition to people who believe that platforms like YouTube are deliberately crafting some kind of saccharine liberal dystopia. "Since when did a load of femenazis [sic] start running youtube??" wrote one user on Twitter. "People should be allowed to post any content they like." Never mind that nobody is claiming they can’t post content, that YouTube’s written policies actually protect some of the material being stripped of ads, and that things like LGBT history aren’t traditionally a target of the sinister feminist agenda. It’s all boiled down to a nebulous accusation of censorship, because that’s where mass-automating subtle moral and aesthetic judgments will get you.

It doesn't help that the guidelines make the policy sound far stricter than it appears to actually be. Over the past day, YouTube has repeatedly said that it's not changing the kinds of videos that will be demonetized, so the vast majority of the platform's vulgar, suggestive, or sensitive material has already faced the test and passed. The rules, a spokesperson told us, are broad because they need to cover a lot of specific, idiosyncratic cases. But in a vacuum, seeing things like profanity and "sensitive subjects" labeled inappropriate gives the impression that YouTube just hasn't gotten around to flagging everything yet — especially when social networks like Facebook and Twitter often do have clear content policies that are inconsistently enforced.

Unsurprisingly, on Twitter, some people are gleefully declaring YouTube finished. The hashtag #YouTubeIsOverParty is a combination of specific complaints over the policy, general anger at the platform, swipes at "political correctness," and plenty of reaction gifs and videos. But while YouTube doesn’t seem to have anticipated today’s backlash, this change is arguably doing exactly what it was supposed to do: shed light on an ill-understood and ill-implemented policy, in hopes of making it better.

Update 5:45PM ET: Updated with statement from YouTube spokesperson.