
Upload filters and one-hour takedowns: the EU’s latest fight against terrorism online, explained

The EU wants to stop online extremist content in its tracks


Illustration of the EU flag by Alex Castro / The Verge

Though acts of terrorism take place in the real world, they attain a kind of online afterlife. Materials like those from the recent Christchurch shooting proliferate as supporters upload them to any media platform they can reach. Lawmakers in Europe have had enough, and this year, they hope to enact new legislation that will hold big tech companies like Facebook and Google more accountable for any terrorist-related content they host.

The legislation was first proposed by the EU last September as a response to the spread of ISIS propaganda, which experts said encouraged further attacks. It covers recruitment materials, such as displays of a terrorist organization’s strength, as well as instructions for how to carry out acts of violence and anything that glorifies the violence itself.

The legislation could mean tech firms are fined millions

Social media is an important part of terrorists’ recruitment strategy, say backers of the legislation. “Whether it was the Nice attacks, whether it was the Bataclan attack in Paris, whether it’s Manchester, [...] they have all had a direct link to online extremist content,” says Lucinda Creighton, a senior adviser at the Counter Extremism Project (CEP), a campaign group that has helped shape the legislation.

The new laws would require platforms to take down any terrorism-related content within an hour of a notice being issued, force them to use a filter to ensure it’s not reuploaded, and, if they fail in either of these duties, allow governments to fine companies up to 4 percent of their global annual revenue. For a company like Facebook, which reported close to $17 billion in revenue in the final quarter of 2018 alone, that could mean fines of at least $680 million (around €600 million), and considerably more when calculated against a full year’s revenue.

Advocates of the legislation say it’s a set of common-sense proposals that are designed to prevent online extremist content from turning into real-world attacks. But critics, including internet freedom think tanks and big tech firms, claim the legislation threatens the principles of a free and open internet, and it may jeopardize the work being done by anti-terrorist groups.

The proposals are currently working their way through the committees of the European Parliament, so a lot could change before the legislation becomes law. Both sides want to find a balance between allowing freedom of expression and stopping the spread of extremist content online, but they have very different ideas about where this balance lies.  

Why is the legislation needed?

Terrorists use social media to promote themselves, just like big brands do. Organizations such as ISIS use online platforms to radicalize people across the globe. Those people may then travel to join the organization’s ranks in person or commit terrorist attacks in support of ISIS in their home countries.


At its peak, ISIS had a devastatingly effective social media strategy, which both instilled fear in its enemies and recruited new supporters. By 2019, the organization’s physical presence in the Middle East had been all but eliminated, but the legislation’s supporters argue that this creates an even greater need for tougher online rules: as the group’s physical power diminishes, the online war of ideas matters more than ever.

“Every attack over the last 18 months or two years or so has got an online dimension. Either inciting or in some cases instructing, providing instruction, or glorifying,” Julian King, a British diplomat and European commissioner for the Security Union, told The Guardian when the laws were first proposed.

King, who has been a driving force behind the new legislation within the European Union, says the increasing frequency with which terrorists become “self-radicalized” by online material shows the importance of the proposed laws.

Why a one-hour takedown limit?

The one-hour takedown is one of two core obligations for tech firms proposed by the legislation.


Under the proposals, each EU member state will designate a so-called “competent authority.” It’s up to each member state to decide exactly how this body operates, but the legislation makes it responsible for flagging problematic content. This includes videos and images that incite terrorism, that provide instructions for how to carry out an attack, or that otherwise promote involvement with a terrorist group.

Once content has been identified, this authority would send a removal order to the platform hosting it, which must then either delete the content or disable access to it for users inside the EU. Either way, action has to be taken within one hour of the notice being issued.
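
To make the mechanics concrete, here is a minimal sketch of how a platform might model such a removal order and its one-hour deadline. The class names, fields, and the simple "delete or geo-block" choice are assumptions made for illustration; the legislation specifies obligations, not an implementation, and no platform’s actual system is described here.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

@dataclass
class RemovalOrder:
    """Hypothetical notice issued by a member state's 'competent authority'."""
    content_id: str
    issued_at: datetime  # when the notice was issued
    reason: str          # e.g. incitement, attack instructions, recruitment

@dataclass
class HostedContent:
    """Hypothetical record of a piece of user-uploaded content on a platform."""
    content_id: str
    deleted: bool = False
    blocked_regions: set = field(default_factory=set)

def handle_removal_order(order: RemovalOrder, content: HostedContent,
                         delete_globally: bool = True) -> None:
    """Act on a removal order: delete the content or disable EU access.

    The draft rules would require action within one hour of the notice;
    missing that deadline is what exposes a platform to fines.
    """
    deadline = order.issued_at + timedelta(hours=1)
    if datetime.now(timezone.utc) > deadline:
        raise RuntimeError(f"Order for {order.content_id} was not actioned within one hour")
    if delete_globally:
        content.deleted = True
    else:
        # The proposal also allows disabling access only for users inside the EU.
        content.blocked_regions.add("EU")
```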

It’s a tight time limit, but removing content this quickly is important to stop its spread, according to Creighton.

Creighton says that the organization’s research suggests that if content is left up for more than one hour, “its viewership increases tenfold.” Although this research was focused on YouTube, the legislation would apply the same time limit across all social media platforms, from major sites like Facebook and Twitter, right down to smaller ones like Mastodon and, yes, even Gab.


This obligation is similar to voluntary rules already in place that encourage tech firms to take down content flagged by law enforcement and other trusted agencies within an hour.

What’s new, though, is the addition of a legally mandated upload filter, which would hypothetically stop the same pieces of extremist content from being continuously reuploaded after being flagged and removed — although these filters have sometimes been easy to bypass in the past.

“The frustrating thing is that [extremist content] has been flagged with the tech companies, it’s been taken down, and it’s reappearing a day or two or a week later,” Creighton says. “That has to stop, and that’s what this legislation targets.”

The filter proposed by Creighton and her peers would use software to generate a digital fingerprint, known as a “hash,” for any piece of extremist content once a human moderator has identified it. Any content uploaded in the future can then be checked quickly against this database of hashes and blocked if a match is found.
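
As a rough illustration of that idea, the sketch below matches new uploads against a database of hashes of previously removed material. It uses a plain SHA-256 digest, which only catches byte-for-byte identical copies; real filters of this kind, like the perceptual-hashing systems used against child abuse imagery, are designed to survive re-encoding and small edits, and nothing here reflects any specific product.

```python
import hashlib

# Hypothetical store of hashes for content a human moderator has already
# confirmed as extremist and removed.
known_extremist_hashes: set[str] = set()

def hash_content(data: bytes) -> str:
    """Compute a SHA-256 fingerprint of an uploaded file's raw bytes."""
    return hashlib.sha256(data).hexdigest()

def record_removed_content(data: bytes) -> None:
    """Store the hash of content after a moderator confirms its removal."""
    known_extremist_hashes.add(hash_content(data))

def should_block_upload(data: bytes) -> bool:
    """Check a new upload against the hash database before it is published."""
    return hash_content(data) in known_extremist_hashes

# Example: an identical re-upload is caught, but any altered copy slips
# through, which is why exact hashing alone is easy to bypass.
clip = b"...bytes of a video confirmed as extremist content..."
record_removed_content(clip)
assert should_block_upload(clip)
assert not should_block_upload(clip + b"\x00")
```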

Creighton says software like this has been instrumental in stopping the spread of child abuse content online, and a similar approach could work for extremist content.

Identifying extremist content isn’t quite the same as identifying child abuse content, however. There is no legitimate use of videos depicting child abuse, but some extremist content may be newsworthy. After the recent Christchurch shooting, for example, YouTube’s moderation team had to manually review reuploads of the shooter’s footage to make sure news coverage using the footage wasn’t inadvertently blocked.
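
One way to reconcile automatic matching with newsworthy reuse, sketched below purely as an illustration, is to route hash matches into a human review queue rather than blocking them outright, so that a news report containing flagged footage can be cleared by a moderator. The uploader category and the queue are invented for this example, not drawn from any platform’s actual workflow.

```python
from collections import deque

# Hypothetical review queue, plus a crude signal that an uploader may be
# reusing footage legitimately (e.g. a verified news organization).
review_queue: deque = deque()

def handle_hash_match(upload_id: str, uploader_is_news_org: bool) -> str:
    """Decide what to do when an upload matches a known hash.

    Instead of blocking every match automatically, contextual uploads go to
    a human moderator, mirroring the manual review YouTube described after
    the Christchurch shooting.
    """
    if uploader_is_news_org:
        review_queue.append(upload_id)
        return "held for human review"
    return "blocked"

print(handle_hash_match("clip-123", uploader_is_news_org=True))   # held for human review
print(handle_hash_match("clip-456", uploader_is_news_org=False))  # blocked
```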

So what’s the problem?

Critics say that the upload filter could be used by governments to censor their citizens, and that aggressively removing extremist content could prevent non-governmental organizations from being able to document events in war-torn parts of the world.


One prominent opponent is the Center for Democracy and Technology (CDT), a think tank funded in part by Amazon, Apple, Facebook, Google, and Microsoft. Earlier this year, it published an open letter to the European Parliament, saying the legislation would “drive internet platforms to adopt untested and poorly understood technologies to restrict online expression.” The letter was co-signed by 41 campaigners and organizations, including the Electronic Frontier Foundation, Digital Rights Watch, and Open Rights Group.

“These filtering technologies are certainly being used by the big platforms, but we don’t think it’s right for government to force companies to install technology in this way,” the CDT’s director for European affairs, Jens-Henrik Jeppesen, told The Verge in an interview.

Removing certain content, even if a human moderator has correctly identified it as extremist in nature, could prove disastrous for the human rights groups that rely on such material to document attacks. In the case of Syria’s civil war, for instance, footage of the conflict is one of the only ways to prove when human rights violations occur. But between 2012 and 2018, Google took down over 100,000 videos of attacks carried out in Syria’s civil war, destroying vital evidence of what took place. The Syrian Archive, an organization that aims to verify and preserve footage of the conflict, has been forced to back up footage on its own site to prevent the records from disappearing.

Opponents of the legislation like the CDT also say that the filters could end up acting like YouTube’s frequently criticized Content ID system. Content ID lets copyright owners file takedowns against videos that use their material, but the system sometimes removes videos posted by their rightful owners and can misidentify original clips as copyrighted. It can also be easily circumvented.


Opponents of the legislation also believe that the current voluntary measures are enough to stop the flow of terrorist content online. They claim the majority of terrorist content has already been removed from the major social networks, and that a user would have to go out of their way to find the content on a smaller site.

“It is disproportionate to have new legislation to see if you can sanitize the remaining 5 percent of available platforms,” Jeppesen says.


However, Creighton says that every social network, no matter what its size, should be held to the same standards and that these standards should be democratically decided. At the moment, every social network has its own internal tools and processes that it uses to moderate content, and there’s very little public information about these.

Right now, “every tech company is basically applying and adhering to their own rules,” says Creighton. “We have zero transparency.”

Under the proposals, every tech company could be forced to use the same filtering technology. That means they’d benefit from sharing findings across platforms, between EU member states, and with law enforcement bodies like Europol. That’s great if you believe in the ability of the EU to enforce the rule of law, but it has the potential to lock out non-governmental bodies like the Syrian Archive if governments don’t give them the authority to access the extremist content.

These organizations need to be able to view this content, no matter how troubling it might be, in order to investigate war crimes. Their independence from governments is what makes their work valuable, but it could also mean they’re shut out under the new legislation.

Creighton doesn’t believe free and public access to this information is the answer. She argues that needing to “analyze and document recruitment to ISIS in East London” isn’t a good enough excuse to leave content on the internet if the existence of that content “leads to a terrorist attack in London, or Paris or Dublin.”

What happens next?

The legislation is currently working its way through the European Parliament, and its exact wording could yet change. At the time of publication, the legislation’s lead committee was due to vote on its report on the draft regulation on April 1st. After that, it must proceed through the trilogue stage — where the European Commission, the Council of the European Union, and the European Parliament debate the contents of the legislation — before it can finally be voted into law by the European Parliament.

Because the bill is so far from being passed, neither its opponents nor its supporters believe a final vote will take place any sooner than the end of 2019. The European Parliament’s current term ends next month, and elections must take place before the next term begins in July.

A final vote could happen by the end of the year

That timing means trouble for the bill. The UK is still scheduled to leave the EU this year, and a major force behind the bill has been British diplomat Julian King. Should Brexit go through, he will no longer be involved. Further complicating matters is that the MEP who’s chairing the lead committee on the legislation, Claude Moraes, is also British.

The departure of King and Moraes from their EU posts is unlikely to kill the bill entirely, but Creighton suggests it could sap the legislation’s political momentum.

“I think the objective now has to be for Julian King to get this as far as he possibly can before he vacates office, and then hope that it’ll be taken up very quickly again by the next parliament,” she says.

If the events of the last month have taught us anything, it’s that major platforms aren’t prepared for how they can be manipulated by terrorists and their supporters with floods of extremist content. The EU has the size and scale to actually intervene, but there’s a fine line between helping platforms and crushing their ability to solve their own problems.