The European Commission wants Facebook, Google to remove terrorist content within an hour after being flagged

It’s part of a new set of recommendations to combat illegal content online

Pro-Europe protesters wave EU flags outside Parliament. Photo by Leon Neal/Getty Images

The European Commission has issued expansive guidelines aimed at Facebook, Google, and other tech companies on removing terrorist and other illegal content online. The recommendations apply to all forms of illegal content, including terrorist media, child sexual abuse material, counterfeit products, copyright infringement, and material that incites hatred and violence. They also specify clearer procedures, more efficient tools, and stronger safeguards, including human oversight and verification, so that content that is incorrectly flagged can be restored.

As part of the new guidelines, the commission demands that companies remove terrorist content within an hour after it gets referred, stating that such content is most harmful in the first few hours it appears online.

European Commission Vice President Andrus Ansip said:

“While several platforms have been removing more illegal content than ever before – showing that self-regulation can work – we still need to react faster against terrorist propaganda and other illegal content which is a serious threat to our citizens’ security, safety and fundamental rights.”

Automated detection, tools to prevent re-uploads, and an improved referral system between EU member states are some other edicts the commission is recommending. This is the latest in the EU’s efforts to tackle illegal content.

The commission is suggesting these operational measures as a soft law before it decides whether or not to propose legislation. The recommendations are non-binding, but they can still be used as legal references in court, notes The Wall Street Journal. The commission says they are designed to speed up the process of flagging and removing inappropriate content and to reinforce cooperation between companies, trusted flaggers, and law enforcement authorities.

According to the WSJ, tech companies are wary that the guidelines may infringe on freedom of expression. Executives told the publication that they have already been "stretched to meet the EU's existing demands in the area."

Tech companies have been working with lawmakers on inappropriate content for years. Facebook previously said it wants to be a "hostile place" for terrorists and is using a mix of AI and human intervention to root out terrorist content. YouTube also announced new steps last year, including automated systems and additional flaggers, to fight extremism on its platform. In 2016, Facebook, Twitter, Microsoft, and YouTube signed an EU code of conduct on countering hate speech online.

Under the new recommendations, tech companies and EU member states are required to submit regular reports on how terrorist content is being handled within three months, and on other illegal content within six months.