Facebook says it won’t loosen COVID-19 policies after Oversight Board request

It will test telling people whether a human or robot deleted their post

Illustration by Alex Castro / The Verge

Facebook has responded to the recommendations the Facebook Oversight Board made last month, when the board decided its first round of user appeals. Among its responses, Facebook said it would not loosen its standards for taking down COVID-19 misinformation, but it will test telling users whether a human reviewer or an automated filtering system deleted their post.

The Oversight Board made 17 recommendations based on the six cases it examined. Unlike the board’s rulings on the appeals themselves, the recommendations are not binding, but Facebook has agreed to at least consider them. Even so, several of the company’s responses were noncommittal, with one promising only to “continue to explore how best to provide transparency [...] within the limits of what is technologically feasible.”

Many of Facebook’s plans point toward increasing transparency and clarity around its rules

The one recommendation Facebook says it will take no action on stems from a case in which it removed a user’s post claiming that hydroxychloroquine and azithromycin were effective COVID-19 treatments. The company will not follow the board’s recommendation to scale back its enforcement against misinformation that could lead to “imminent harm”; citing its consultations with global health organizations, Facebook says it disagrees with the board’s interpretation of that phrase.

One of the more interesting responses concerns how Facebook plans to clarify its ban on “dangerous individuals and organizations,” including hate figures. The board suggested the clarification after Facebook removed a post attributing a quote to Nazi propaganda minister Joseph Goebbels; the user said they had meant to compare Goebbels unfavorably to former President Donald Trump. Facebook plans to add language indicating that it may remove posts containing such hateful content unless that content is unambiguously condemned.

Facebook also plans to test a feature that tells users when one of their posts has been removed automatically. If the feature sticks around, it should give users some insight into whether their post was deleted by a human reviewer or by an automated system.