
Facebook and Google may be using copyright scanners to suppress 'extremist' speech

The systems that automatically enforce copyright laws on the internet may be expanding to block unfavorable speech. Reuters reports that Facebook, Google, and other companies are exploring automated removal of extremist content, and could be repurposing copyright takedown methods to identify and suppress it. It's unclear where the lines have been drawn, but the systems are likely targeted at radical messages on social networks from enemies of European powers and the United States. Leaders in the US and Europe have increasingly decried radical extremism on the internet and have attempted to enlist internet companies in a fight to suppress it. Many of those companies have been receptive to the idea and already have procedures to block violent and hateful content. Neither Facebook nor Google would confirm automation of these efforts to Reuters, which relied on two anonymous sources who are "familiar with the process."

So far, major internet companies have relied on their users to flag illegal or restricted content. Earlier this year, Facebook said its users flag more than one million items for review every day. Twitter has been busy playing whack-a-mole against ISIS-related accounts at a furious pace, suspending 125,000 accounts as of February. And Google said it received over 75 million DMCA takedown requests in just one month in 2016. There's also precedent for using automated systems to flag other kinds of illegal content; several major internet companies, including Microsoft, Twitter, Facebook, and Google, use automated systems to identify the transmission of child pornography.
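The automated systems described above generally work by comparing a fingerprint of uploaded content against a database of known material. A minimal sketch of that idea follows; the function names and blocklist contents here are hypothetical, and production systems such as Microsoft's PhotoDNA use robust perceptual hashes that survive re-encoding and cropping, not the exact digests used in this toy version.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Return a hex digest identifying this exact byte sequence.
    (Illustrative only; real systems use perceptual hashing.)"""
    return hashlib.sha256(data).hexdigest()

# Hypothetical blocklist of fingerprints of known prohibited content.
blocklist = {fingerprint(b"known prohibited video bytes")}

def should_block(upload: bytes) -> bool:
    """Flag an upload if its fingerprint matches the blocklist."""
    return fingerprint(upload) in blocklist

print(should_block(b"known prohibited video bytes"))  # True
print(should_block(b"unrelated home video"))          # False
```

The grey area the article goes on to describe is visible even in this sketch: the system can only match content someone has already decided belongs on the blocklist, so the hard judgment calls happen upstream of any automation.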

Existing systems have been abused

But upgrading automated systems for the suppression of extremist content would be a step with potentially serious and unknown consequences, since existing systems that take down content for suspected copyright and other violations deal with huge volumes of information and are routinely abused to suppress legal speech. Additionally, identifying extremist speech involves far more grey area than clearly defined illegal content, like pirated media and child pornography.

The secret identification and automated blocking of extremist speech would raise new, serious questions about the cooperation of private corporations with censorious governmental interests. Governments and private individuals have already attempted in recent years, with varying degrees of success, to hold internet companies and service providers liable for the actions of third parties; the EU's right to be forgotten rule now requires companies like Google to comply with individuals who want to scrub search results that point to their sensitive personal information. That rule has already been abused to try to suppress journalism.

Internet companies are aligned with governments on fighting extremism and hate

Unlike measures like the right to be forgotten, which Google fought vigorously, major internet companies have signaled that their interests are closely aligned with governments that want to suppress extremist content — even if they don't want to be held liable for it. Facebook, Google, Twitter, and other companies have already agreed to work with the US and European governments to fight radical propaganda and hate speech. Last year, French president Francois Hollande wanted to make those companies "accomplices" in hate speech crimes. This year, Facebook, Twitter, Google, and Microsoft agreed to EU regulations that require them to review and remove hateful online content, and even promote "independent counter-narratives" to that kind of speech. At the time, Google's head of public policy said the company had always prohibited illegal hate speech, and was "pleased" to work with the government "to develop co- and self-regulatory approaches to fighting hate speech online."

The political mood in the US suggests only more pressure will be put on tech companies to deal with the issue. After a terrorist attack on a gay nightclub in Orlando earlier this month, President Obama suggested online extremism could be to blame. FBI Director James Comey went further, saying "we are highly confident that this killer was radicalized at least in some part through the internet." Hillary Clinton said as president she would "work with our great tech companies from Silicon Valley to Boston to step up our game" in suppressing ISIS communications and "mapping jihadist networks." Last December, Donald Trump said the US should consider "closing up" the internet. He plans on asking Bill Gates for help.
