Meta rescinded a request to have its own Oversight Board weigh in on content moderation policy related to the Russia-Ukraine war, citing “ongoing safety and security concerns.” It’s the first time Meta has withdrawn such a request, setting a precedent that could hurt its working relationship with the board.
“Meta has informed the Oversight Board that the company would be withdrawing an earlier request for policy guidance concerning content moderation issues related to Russia’s ongoing war with Ukraine,” said a statement signed by the board, which is funded by Meta but not staffed by its employees. “While the Board understands these concerns, we believe the request raises important issues and are disappointed by the company’s decision to withdraw it.”
Meta confirmed in a blog post that the policy advisory request was withdrawn, but declined to explain what the request entailed, when it was made, or why it was pulled.
I’m sure that Meta’s safety concerns are genuine. It has, after all, been labeled an extremist organization by Russia and banned in the country. But the episode shows how limited the Oversight Board is as a true check on the social network’s power, especially in times of crisis. Under the board’s bylaws, Meta wouldn’t have had to abide by its policy opinion, but it would have needed to respond publicly and explain its rationale for adopting the guidance or not.
It’s unclear what exactly Meta asked the board for advice on, but there is an obvious guess: Reuters reported in early March that Facebook moderators were instructed to temporarily allow calls for violence against Russian leaders and soldiers invading Ukraine. A few days later, Meta’s policy chief, Nick Clegg, clarified in a leaked memo to employees that the policy wasn’t meant to allow violent speech targeted at Russian civilians or “a head of state.” The Russian government then accused Meta of “extremist activity.”
There’s also the question of why Meta initially made the request for advice from the board in secret. A spokesperson for Meta, Jeff Gelman, said the company didn’t publicly announce its request when it was made because “we don’t put undue pressure on the board to accept our case referrals over case appeals from users.” But it has publicized previous requests, such as when it asked the board last year to weigh in on Cross-Check, a controversial policy that shields celebrities and politicians from the moderation measures applied to other users.
When Meta stood up the Oversight Board in 2020, it was pitched as a Supreme Court-like body for adjudicating the most controversial content moderation decisions on Facebook and Instagram. Led by nearly two dozen human rights experts, legal scholars, and a former prime minister, the board can overturn Meta’s decisions on specific pieces of content referred by users. Its highest-profile decision to date has been upholding Meta’s move to kick President Trump off the platform, an initial indefinite ban that was lessened to a two-year ban after the board weighed in.
Since it started hearing cases in late 2020, the board has selected 30 cases related to individual pieces of content and thorny issues like hate speech, nudity, and drug use. It has overturned Meta’s initial decision in most of those cases and recently led the company to change its doxxing policy for the better. But the jury is still out on whether the board will make a difference in the long run, according to Evelyn Douek, an associate research scholar at the Knight First Amendment Institute who closely studies the board.
“So far, the board has made a difference, but only around the edges,” she told The Verge. “I think it has been punching below its weight. To be fair, it does take two to tango, and this incident shows that the board still only has as much influence or power as Meta decides to give it. But the board also has moved slower and with less targeted precision than it could have.”