Earlier today, Google counsel Alexandria Walden testified before Congress in a hearing titled “Hate Crimes and the Rise of White Nationalism.” The House Judiciary Committee streamed the video live on Google’s own YouTube platform, complete with a live chat feed. Anybody who’s even passingly familiar with YouTube might have flagged this as a bad decision. Hateful, racist comments are notoriously common on the site, and, unsurprisingly, some of YouTube’s worst users immediately descended on the chat with slurs and other attacks.
Commenters insulted Jewish committee chair Jerry Nadler, calling him a “goblin,” and some mocked Walden’s fellow witness, Mohammad Abu-Salha, whose two daughters were killed in an alleged hate crime. They espoused the same racist conspiracy theories that the Judiciary Committee was trying to address. Within an hour, the chat was disabled — but not before the incident was covered by several media outlets, including The Washington Post, whose report Nadler read aloud in the middle of the hearing.
Nobody needed more evidence that YouTube comments can be racist. But beyond the simple irony of a hearing about white supremacy being overrun by white supremacists, the incident raises a baffling question: why didn’t Google anticipate that this would happen?
Today’s hearing chat isn’t simply an example of moderators missing some bad comments on the hundreds of hours of video uploaded every minute. The stream was a preplanned event on an official government-run YouTube channel, featuring a prominent Google employee promising that “hate speech and violent extremism have no place on YouTube.” It was sure to be watched by a large audience, including many journalists who are already heavily critical of YouTube.
A recent Bloomberg report claims that YouTube historically ignored toxic content because it drives controversy and engagement. But even if you’re entirely cynical about Google’s motives, there’s a clear incentive to sidestep easily avoidable bad publicity. It’s hard to see what the company gains by making YouTube’s toxicity a top news result for the hearing. And it would have been simple to preemptively turn off live chat or just warn the Judiciary Committee to watch and moderate the comments.
Google certainly isn’t averse to locking comments. YouTube has disabled live chat on Google’s I/O conference streams in the past, and it temporarily banned most comments on videos featuring children earlier this year. In a statement today, Google was matter-of-fact about locking the hearing’s comments. “Hate speech has no place on YouTube. We’ve invested heavily in teams and technology dedicated to removing hateful comments / videos. Due to the presence of hateful comments, we disabled comments on the livestream of today’s House Judiciary Committee hearing,” a spokesperson tweeted.
So why didn’t it just turn off chat before the stream started? It’s possible that Google was worried about being accused of censorship for shutting down comments without clear hate content. After all, there’s been an agonizing debate over how platforms should moderate speech. Despite being nominally about white supremacy, even today’s hearing was quickly hijacked by complaints about alleged online anti-conservative bias, as Rep. Tom McClintock (R-CA) grilled Google and Facebook about whether they were “neutral” platforms.
It’s also possible that Google doesn’t fully grasp how ubiquitous bigoted comments are on YouTube or how inevitable today’s hearing live stream mess seemed from the outside. Google has certainly missed the mark before; YouTube CEO Susan Wojcicki acknowledged that last year’s Rewind retrospective didn’t reflect many users’ experience with the site, for instance. Companies don’t always have a good sense of how people use their products, and YouTube might be no exception.
But today, Google undercut hours of testimony by failing to account for a glaring problem on its platform in a rare case where the right solution seemed just as obvious.