Facebook says it will begin removing misinformation that leads to violence

The policy was used for the first time last month in Sri Lanka

Photo by Michele Doying / The Verge

Hours after CEO Mark Zuckerberg sparked controversy by defending the right of Holocaust deniers to post on Facebook, the company said it had begun removing misinformation that contributes to violence. “There are certain forms of misinformation that have contributed to physical harm, and we are making a policy change which will enable us to take that type of content down,” the company said in a statement. “We will begin implementing the policy during the coming months.”

Under the new policy, Facebook will begin reviewing posts that are inaccurate or misleading and that are created or shared with the intent of causing violence or physical harm. The posts will be reviewed in partnership with local organizations, including threat intelligence agencies, which Facebook says are in the best position to evaluate threats. Posts covered by the policy include manipulated imagery as well as text.

Partners are asked to verify that the posts in question are false and could contribute to imminent violence or harm, Facebook said. The company said it would rely on reports from partners, the press, and its own public policy employees. When Facebook has verified a report, it will remove the post, along with any duplicate posts that have been created.

While the new policy was announced today, Facebook put it into place last month, the company said. Posts falsely claiming that Muslims in Sri Lanka were poisoning food given to Buddhists were removed after an investigation. Sri Lanka temporarily shut down Facebook earlier this year after hate speech that spread on the company’s apps resulted in mob violence.