How extremism came to thrive on YouTube

Executives ignored the problem until it was already a key pillar of the platform

The YouTube logo against a black background with red X marks. Illustration by Alex Castro / The Verge

A system built to attract the maximum amount of user attention succeeds beyond all expectation, only to wind up promoting dangerous misinformation and hate speech around the world. It’s a story we have considered often in the context of Facebook, which has responded to the criticism with a promise to change the very nature of the company. And it’s a story we have not discussed nearly enough in the context of YouTube, which has promoted a similarly disturbing network of extremists and foreign propagandists and has tended to intervene cautiously and with underwhelming force.

Certainly YouTube has received its share of criticism since the broader reckoning over social networks began in 2016. Google CEO Sundar Pichai was compelled to answer questions about the video giant when he appeared before Congress last year. But we have generally had little insight into how YouTube makes high-level decisions about its algorithmic recommendations and its inadvertent cultivation of a new generation of extremists. What do employees say about the phenomenon of “bad virality” — YouTube’s unmatched ability to take a piece of misinformation or hate speech and, using its opaque recommendation system, find it the widest possible audience?

In a major new report for Bloomberg, Mark Bergen begins to give us some answers. Over nearly 4,000 words, he outlines how YouTube pursued users’ attention with single-minded zeal, quashed internal criticism, and even discouraged employees from searching for videos that violate its rules, for fear it would cause the company to lose its safe harbor protections under the Communications Decency Act. As late as 2017, YouTube CEO Susan Wojcicki was reportedly pushing a revamp of the company’s business model to pay creators based on how much attention they attracted — despite mounting internal evidence that these engagement-based metrics incentivize the production of videos designed to outrage people, raising the risk of real-world violence.

Bergen reports:

In response to criticism about prioritizing growth over safety, Facebook has proposed a dramatic shift in its core product. YouTube still has struggled to explain any new corporate vision to the public and investors – and sometimes, to its own staff. Five senior personnel who left YouTube and Google in the last two years privately cited the platform’s inability to tame extreme, disturbing videos as the reason for their departure. [...]

YouTube’s inertia was illuminated again after a deadly measles outbreak drew public attention to vaccinations conspiracies on social media several weeks ago. New data from Moonshot CVE, a London-based firm that studies extremism, found that fewer than twenty YouTube channels that have spread these lies reached over 170 million viewers, many who were then recommended other videos laden with conspiracy theories.

Bergen’s story is, in a way, a mirror of the New York Times’ November story on how Facebook first ignored, then sought to minimize warning signs about the platform’s unintended consequences. Both pieces illustrate the ugly fashion in which our social networks have developed: Phase one is an all-out war to gain user attention and build an advertising business; phase two is a belated effort to clean up the many problems that come with global scale faster than new ones can arise.

Like Facebook, YouTube has begun to address some of the concerns raised by those departed employees. Most importantly, in January the company said it would stop recommending what it calls “borderline content” — videos that come close to violating its community guidelines, but stop just short. Last year, it also began adding links to relevant Wikipedia entries on some common hoaxes, such as videos declaring that the Earth is flat.

At South by Southwest, before announcing the Wikipedia feature, Wojcicki compared the service to a humble library — a neutral repository for much of the world’s knowledge. It is a definition that attempts to cast YouTube as a noble civic institution while misrepresenting its power — most libraries do not, after all, mail members an even more radical book the moment they finish the one they just checked out.

One extremist who has used the platform nimbly over the past several years is Tommy Robinson, a far-right activist who formerly led an Islamophobic, anti-immigration organization in the United Kingdom. Robinson’s anti-Islam posts were sufficiently noxious to get him banned last week from Instagram and Twitter. YouTube decided today to let him keep his account and his 390,000 subscribers, Mark Di Stefano reports:

While YouTube is stopping short of an outright ban, the restrictions will mean Robinson’s new videos won’t have view counts, suggested videos, likes, or comments. There’ll be an “interstitial,” or black slate, that appears before each video warning people that it might not be appropriate for all audiences.

Robinson will also be prevented from livestreaming to his channel. 

These tools may remind you of Pinterest’s approach to anti-vaccine misinformation, which I wrote about in February. Robinson will get his freedom of speech — he can still upload videos — but will be denied what Aza Raskin has called “freedom of reach.” It’s an approach I generally favor. And yet I still shudder at another revelation from Bergen’s report — that an internal YouTube tool built by one dissident showed that far-right creators like Robinson have become a pillar of the community:

An employee decided to create a new YouTube “vertical,” a category that the company uses to group its mountain of video footage. This person gathered together videos under an imagined vertical for the “alt-right,” the political ensemble loosely tied to Trump. Based on engagement, the hypothetical alt-right category sat with music, sports and gaming as the most popular channels at YouTube, an attempt to show how critical these videos were to YouTube’s business.

And while some of YouTube’s initiatives to reduce the spread of extremism are still in their early stages, a worrying amount of extremist content remains on the platform. Here’s Ben Makuch in Motherboard today:

But even in the face of those horrific terror attacks, YouTube continues to be a bastion of white nationalist militancy. Over the last few days, Motherboard has viewed white nationalist and neo-Nazi propaganda videos on the website that have either been undetected by YouTube, have been allowed to stay up by the platform, or have been newly uploaded.

When examples were specifically shown to YouTube by Motherboard, the company told us that it demonetized the videos, placed them behind a content warning, removed some features such as likes and comments, and removed them from recommendations—but ultimately decided to leave the videos online. The videos remain easily accessible via search.

Last month, writing about the difference between platform problems and internet problems, I noted that the ultimate answer we are groping for is how free the internet should be. The openness of YouTube has benefited a large and diverse group of creators, most of whom are innocuous. But reading today about Cole and Savannah LaBrant, internet-famous parents who tricked their 6-year-old daughter into believing they were giving away her puppy and filmed her reaction, it’s fair to ask why YouTube so often leads its creators to madness.

Extremism in all its forms is not a problem that YouTube can solve alone. What makes Bergen’s report so disturbing, though, is the way YouTube unwittingly promoted extremists until they had become one of its most powerful constituencies. In very real ways, extremism is a pillar of the platform, and unwinding the best of YouTube from its rotting heart promises to be as difficult as anything the company has ever done.

Democracy

As India Votes, False Posts and Hate Speech Flummox Facebook

India has seen a flood of fake news ahead of its next election, Vindu Goel and Sheera Frenkel report:

The flood of fake posts gave Facebook a taste of what is to come as India prepares for the world’s biggest election. Prime Minister Narendra Modi and his Bharatiya Janata Party are seeking another five years in power, and as many as 879 million people are expected to vote over five weeks starting on April 11.

But as campaigning goes into high gear, Facebook is already struggling to cope with the disinformation and hate speech on its core social network and on WhatsApp, its popular messaging service.

What happens next in the housing discrimination case against Facebook?

Adi Robertson examines HUD’s legal strategy in its lawsuit against Facebook:

HUD is also making some additional claims that could complicate Facebook’s defense. In addition to calling out tools that let advertisers select audience categories, it’s condemning the invisible process Facebook uses to serve ads. “[Facebook’s] ad delivery system prevents advertisers who want to reach a broad audience of users from doing so,” it says, because it’s likely to steer away from “users whom the system determines are unlikely to engage with the ad, even if the advertiser explicitly wants to reach those users.”

HUD doesn’t have to establish that these targeting algorithms are designed to avoid showing ads to certain protected classes. It just has to demonstrate that the system effectively makes housing less accessible to these people — a concept known as disparate impact. “If there is an algorithm that just happens to discriminate against racial minorities or gender minorities or whatever, I think it would still be problematic,” says Glatthaar’s colleague Adam Rodriguez. He compares the move to a zoning restriction whose text and intent is race-neutral but that directly results in fewer black residents, which would likely still be considered discriminatory.

Facebook’s new tools to block discriminatory ads will not apply outside the United States

Catherine McIntyre reports that Facebook’s promise to prevent advertisers from discriminating against certain protected categories applies only in America:

The social media giant said it would block features allowing advertisers to discriminate based on age and gender two weeks ago. However, the changes will only apply in the United States. And, tests conducted by The Logic show that Facebook is currently approving ads in Canada that appear to discriminate. 

Googlers protest AI advisory board member over anti-LGBT, anti-immigrant comments

Ina Fried reports that Google has no plans to reverse its decision to put Heritage Foundation president Kay Coles James, who has made anti-LGBT and anti-immigrant comments, on a key AI advisory panel.

Google staff condemn treatment of temp workers in ‘historic’ show of solidarity

More than 900 employees have signed a letter criticizing the treatment of contractors, Julia Carrie Wong reports:

In March, Google abruptly shortened the contracts of 34 temp workers on the “personality” team for Google Assistant – the Alexa-like digital assistant that reads you the weather, manages your calendar, sends a text message, or calls you an Uber through your phone or smart speaker.

The cuts, which affected contractors around the globe, reinvigorated the debate over Google’s extensive use of TVCs, amid a growing labor movement within the company. In recent months, Google FTEs and TVCs have been increasingly vocal in protesting both their working conditions and the ethics of their employer.

Elsewhere

Inside Grindr, fears that China wanted to access user data via HIV research

 Tim Fitzsimons reports that after its acquisition by a Chinese company, Grindr considered sharing HIV data with the country. It’s unclear what China would have done with the data:

On July 3, 2018, Chen informed three Grindr employees that Yiming Shao, an HIV researcher for China’s equivalent of the U.S. Centers for Disease Control and Prevention, was interested in working with Grindr. To facilitate this project, Chen wrote an email to the employees — obtained by NBC News — that suggested putting a full-time “intern” in Grindr’s West Hollywood, California, headquarters to do research and work on a paper about HIV prevention that would be co-published with the company.

“They are attracted by our brand, reach and data,” Chen wrote in the email. “We need to be extremely careful about their data request. Yiming is head of HIV prevention in China CDC. We can’t let people say this is about ‘sharing user data with the Chinese government.’”

Quibi Taps Tom Conrad, a Snap and Pandora Alum, as Chief Product Officer

Tom Conrad, who ran product at Snap, has taken on a similar role at Quibi, Jeffrey Katzenberg’s short-form subscription video company.

Launches

WhatsApp launches fact-checking service in India ahead of elections

WhatsApp is launching a fact-checking service in India ahead of the country’s upcoming elections:

Reuters reports that users can now forward messages to the Checkpoint Tipline, where a team led by local startup Proto will assess and mark them as either “true,” “false,” “misleading,” or “disputed.” These messages will also be used to create a database to study and understand the spread of misinformation. India’s elections are due to start on April 11th, and final results are expected on May 23rd.

You’ve heard of fake news — how about fake gadgets? My colleague Ashley Carman has a great new series on YouTube, and in the first episode she writes about the wild world of knockoffs. Check it out:

A gadget maker’s worst nightmare...

Takes

Google’s constant product shutdowns are damaging its brand

Google shut down Google+ and Inbox today, and Ron Amadeo is not happy about it:

We are 91 days into the year, and so far, Google is racking up an unprecedented body count. If we just take the official shutdown dates that have already occurred in 2019, a Google-branded product, feature, or service has died, on average, about every nine days.

Some of these product shutdowns have transition plans, and some of them (like Google+) represent Google completely abandoning a user base. The specifics aren’t crucial, though. What matters is that every single one of these actions has a negative consequence for Google’s brand, and the near-constant stream of shutdown announcements makes Google seem more unstable and untrustworthy than it has ever been. Yes, there was the one time Google killed Google Wave nine years ago or when it took Google Reader away six years ago, but things were never this bad.

And finally ...

Google begins shutting down its failed Google+ social network

People still start social networks every day, and if you’re a founder wondering whether you’re seeing any traction, I invite you to see whether your app passes what I like to call the Google+ test for user engagement. (Emphasis mine.)

Google has acknowledged that Google+ failed to meet the company’s expectations for user growth and mainstream pickup. “While our engineering teams have put a lot of effort and dedication into building Google+ over the years, it has not achieved broad consumer or developer adoption, and has seen limited user interaction with apps,” Google’s Ben Smith wrote in October. He then revealed a pretty damning stat for where the service stands today: “90 percent of Google+ user sessions are less than five seconds.”

RIP in peace Google+!!!

Talk to me

Send me tips, comments, questions, and your YouTube fixes: casey@theverge.com.