
The software behind Facebook’s new Supreme Court for content moderation

New details on Facebook’s independent oversight board

Illustration by Alex Castro / The Verge

As we start looking forward to 2020 — and your predictions are coming to this space tomorrow — a big subject on my mind is accountability. Tech platforms have evolved into quasi-states with only the most rudimentary of justice systems. If trolls brigade Twitter into suspending your account, or you’re removed from YouTube’s monetization program, or your tasteful nudes are banned from Instagram, you typically have almost no recourse. In most cases you will fill out a short form, send it in, and pray. In a few days, you’ll hear back from a robot about the decision in your case — unless you don’t.

That’s why I’ve been so interested this year in Facebook’s development of what it calls an “oversight board” — a Supreme Court for content moderation. When it launches next year, the board will hear appeals from people whose posts might have been removed from Facebook in error, as well as make judgments on emerging policy disputes at Facebook’s request. And the big twist is that the board will be independent of Facebook — funded by it, but accountable only to itself.

Recently I went down to Facebook’s headquarters in Menlo Park to meet some of the people working on the project. Milancy Harris, manager for governance and strategic initiatives, and Fay Johnson, who is the lead product manager on the oversight team, talked to me about how they have approached the unique assignment of devolving content moderation power from the company to an independent board.

Facebook set up an irrevocable trust to fund the board, and last week the company said it had agreed to fund it to the tune of $130 million — a figure far higher than some of the civil-society folks I’ve spoken with had expected.

It’s a figure that speaks to the seriousness with which Facebook has approached the project. It also speaks to the sheer complexity of what Facebook is trying to do. The company spent the past year consulting experts, holding mock appeals, developing the board’s bylaws, and recruiting board members. Simultaneously, Harris and Johnson have been among those working on what Facebook calls its “case management tool” — the hyper-niche software, to be used by perhaps 100 people at a time, that will route cases from Facebook to the board and its staff.

Here are some of the things I learned from Harris and Johnson.

  • For starters, you’ll only be able to appeal decisions to remove your content. Facebook itself can request “expedited review” for basically anything — including asking the board to make a call in cases where the company itself has not yet decided what to do. You could imagine, for example, Facebook referring the infamous altered footage of Nancy Pelosi to the board to request a ruling before it makes a decision. (Facebook says you’ll eventually be able to appeal content that is left up against your will.)
  • Facebook can also request what it’s calling a “policy advisory opinion,” asking the board to weigh in on a general policy matter independent of an individual post. What exceptions should Facebook allow to its “real names” policy? When is it OK to show a nipple on Instagram? These are decisions I can imagine the board having been asked about in the past ... and perhaps they still will be in the future.
  • The privacy and security challenges involved in setting up the board are enormous. Facebook’s biggest problems have always come from data sharing, and now it needs to create a new data-sharing apparatus that involves an independent body. It’s arduous, tedious, high-stakes work.
  • When you appeal your case, you’ll get to send the board a statement making your argument. Facebook promises not to edit it in any way.
  • Your appeals will be added to what is presumably a very long queue of cases to be heard by the board.
  • Facebook doesn’t really know how many cases the board will be able to hear in a year. The initial board will comprise about 20 members and grow to 40 over time. Members will serve three-year terms.
  • The board will have case-selection committees that are charged with picking cases to hear. Harris told me that Facebook will encourage the board to pick cases that represent geographical diversity and go beyond that day’s public-relations crisis.
  • Facebook is putting together mock queues of cases and taking practice runs to see how long a typical case takes to adjudicate.
  • The board will only have a set (but as yet undetermined) amount of time to decide whether to hear your case — otherwise it will be rejected automatically. The reason Facebook gave me for this is that its own policies require content to be deleted after certain time periods, making an indefinite stay in the queue impossible.
  • The board will publish its decisions, but the user will get to decide whether they want any personally identifiable information included in the decision. So if you’re an artist and your tasteful nudes were banned from Instagram, you might want to attach your name to the decision and amplify your protest. If you’re a human rights worker whose newsworthy photo of carnage got removed, you might opt not to. (A rough sketch of this whole lifecycle follows the list.)
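
To make the bullets above concrete, here’s a minimal sketch in Python of how an appeal might move from filing to published decision. To be clear, this is not Facebook’s actual case management tool: every name in it is invented, and the 90-day selection window is a placeholder for a deadline Facebook has not yet set.

```python
# A toy model of the appeal lifecycle described above. All names are
# invented; the 90-day window stands in for a deadline that is still
# undetermined.
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

SELECTION_WINDOW = timedelta(days=90)  # assumed value, not Facebook's

@dataclass
class Appeal:
    case_id: str
    user: str
    statement: str             # the user's statement, forwarded unedited
    filed_at: datetime
    include_pii: bool = False  # user decides whether the decision names them
    status: str = "queued"     # queued -> selected | auto_rejected | decided
    ruling: Optional[str] = None

class CaseQueue:
    """The (presumably very long) queue that selection committees pick from."""

    def __init__(self) -> None:
        self.cases: dict[str, Appeal] = {}

    def file(self, appeal: Appeal) -> None:
        self.cases[appeal.case_id] = appeal

    def expire_stale_cases(self, now: datetime) -> None:
        # Cases the board doesn't take up within the window are rejected
        # automatically: Facebook's retention policies require content to
        # be deleted after set periods, so appeals can't wait indefinitely.
        for case in self.cases.values():
            if case.status == "queued" and now - case.filed_at > SELECTION_WINDOW:
                case.status = "auto_rejected"

    def select(self, case_id: str) -> Appeal:
        case = self.cases[case_id]
        case.status = "selected"
        return case

def publish_decision(case: Appeal, ruling: str) -> str:
    # Decisions are published either way; the appellant's name appears
    # only if they opted in when filing.
    case.status, case.ruling = "decided", ruling
    name = case.user if case.include_pii else "[name withheld]"
    return f"Case {case.case_id} ({name}): {ruling}"
```

The interesting design constraint is the automatic rejection: because Facebook’s retention policies force content to be deleted after set periods, the queue has to be time-bounded rather than letting appeals sit indefinitely.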

One of the miniature debates I had with the Facebook folks was about the decision to limit user appeals, at least at the start, to cases in which posts had been removed. Aren’t posts that stay up generally more harmful? Think about cases where vulnerable people are doxxed or harassed on a social platform and the platform won’t remove the offending posts. Isn’t that where this process should start?

Johnson told me that she has a different perspective. “If you’re a person who feels that you’re being targeted because you’re speaking about, let’s say racism in the United States, and those sort of cases where people are potentially being silenced because somebody is interpreting it wrong — that is really impacting your psyche, your well being, your voice,” Johnson told me. “Those [are] situations where I’m like, oh man, I would love this board to be able to help us. Just to think through the better way to handle some of those nuances.”

I had lots more questions, but around that time the meeting ended the way all meetings end at Facebook — with us getting kicked out of our conference room by the next group that had booked it. The good news is that 2020 should bring us lots more information about the board, including its first batch of members.

What do you want to know about the board? Let me know and I’ll make sure to ask the next time I get the chance. 

The Ratio

Today in news that could affect public perception of the big tech platforms.

🔃 Trending sideways: Twitter and Facebook are exploring decentralization projects, supposedly to put more power in the hands of users. But the moves might also help them shift the burden of identifying bad actors and filtering out disinformation.

🔽 Trending down: Facebook is labeling ads for maternity clothes that feature real pregnant women as “sexually suggestive or provocative” and barring them from appearing on the platform.

Governing

The National Labor Relations Board decided yesterday that businesses can ban workers from using company email for union and other organizing purposes. The decision revokes a right granted in 2014 to workers who have access to employers’ email systems. Hassan A. Kanu at Bloomberg has the story:

The decision is a blow to worker advocacy groups and unions, who urged the NLRB to maintain the 2014 policy on the basis that email has become a central and natural way for co-workers to organize and communicate. The policy reversal also marks another step in the Republican-majority NLRB’s push to reinterpret the central federal law on unions in ways that advocates say have made it easier for companies to avoid a unionized workforce.

The NLRB agreed with Caesars, the U.S. Chamber of Commerce, and other business groups, which had argued that employers have property and First Amendment rights to limit the use of their own email systems. Requiring access to email networks also could cause workplace disruption and increase cybersecurity threats, businesses have said.

Employees “do not have a statutory right to use employers’ email and other information-technology (IT) resources to engage in non-work-related communications,” the Board said in a Dec. 17 announcement.

“Rather, employers have the right to control the use of their equipment, including their email and other IT systems, and they may lawfully exercise that right to restrict the uses to which those systems are put, provided that in doing so, they do not discriminate” against union-related communications.

President Trump spent an estimated $648,713.27 on anti-impeachment Facebook ads during a three-week period beginning Nov. 23, according to an analysis of Facebook data by the Democratic digital firm Bully Pulpit Interactive. (Cat Zakrzewski / The Washington Post)

New evidence shows a network of accounts involved in spreading disinformation before the 2016 presidential election also participated in circulating false claims about Marie Yovanovitch, the American ambassador to Ukraine. The claims led to her recall from the US Embassy in Kyiv earlier this year. (Isaac Stanley-Becker / The Washington Post)

In a Q&A, Rappler’s Maria Ressa discusses her views on how Silicon Valley — and Facebook in particular — warped society. “Facebook broke democracy in many countries around the world, including in mine,” she said. “We’re forever changed because of the decisions made in Silicon Valley.” (Catherine Tsalikis / Centre for International Governance Innovation)

Facebook is investigating a voter engagement app used by The Five Star Movement, a populist party in Italy, as part of a broader probe into potential historical data misuse. (Alberto Nardelli / BuzzFeed)

The House introduced a new bill that would require the federal government to study how a pair of laws targeting online sex trafficking impacted sex workers by kicking them off the internet. These laws are the reason Craigslist got rid of personal ads. (Makena Kelly / The Verge)

Democracy requires participation from average people. It requires citizenship! And so I loved this first-person account by former Twitter product manager Sachin Agarwal about being a poll worker in San Francisco. (Sachin Agarwal / Medium)

The digital revolution was supposed to bring about community and civic engagement. Instead, it made people feel isolated, distrustful, and disengaged. Writer Joe Bernstein does a deep dive here on how that happened — and why it took people so long to wake up from the promise of utopia. I expect a lot of people will be referencing this piece in the years to come. (Joseph Bernstein / BuzzFeed)

Industry

How Facebook’s ‘like’ button hijacked our attention and broke the 2010s, becoming the social currency of the internet. Of all the many retrospectives coming out right now on a decade of tech, this one by Fast Company’s Christopher Zara offers a particularly nice look at how a tiny feature had an outsized impact:

The like button, a ridiculously simple yet undeniably inventive way to collect information about people’s interests, promised to be the bold new form of micro-attention capture that the emerging social web was demanding. For Facebook, which was still two years away from its IPO, like buttons were also a signal to investors that its then-stated mission to “make the world more open and connected” could translate into a very profitable form of surveillance capitalism.

“Scattered around the web, [Like buttons] allowed Facebook to follow users wherever they wandered online, sending messages back to the mother ship (‘She’s looking for cruises’),” Columbia University professor Tim Wu points out in his book The Attention Merchants. “That would allow, for example, the Carnival Cruise Line to hit the user with cruise ads the moment they returned to Facebook.”

But if like buttons were a godsend for advertisers, a lifeline for businesses, and a future cash cow for Facebook, there was one group for whom the benefits were less apparent—the users themselves, who now number in the billions.
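
Wu’s “mother ship” line describes a simple technical mechanism: because the Like button is loaded from Facebook’s servers, every page that embeds it triggers a request carrying the visitor’s Facebook cookie and the embedding page’s URL. Here’s a toy sketch of that flow, with invented names (serve_like_button, pick_ad) and an in-memory dictionary standing in for Facebook’s actual ad infrastructure:

```python
# A deliberately simplified sketch of the mechanism Wu describes.
# Nothing here is Facebook code; the names and the in-memory profile
# store are assumptions for illustration only.
from collections import defaultdict

# One browsing history per Facebook cookie, built from Like-button loads.
profiles = defaultdict(list)

def serve_like_button(fb_cookie: str, embedding_page_url: str) -> str:
    # Every page embedding the button triggers a request to Facebook's
    # servers. That request carries the user's Facebook cookie and, via
    # the Referer header, the URL of the page being read -- which is all
    # the "message back to the mother ship" needs to contain.
    profiles[fb_cookie].append(embedding_page_url)
    return "<button>Like</button>"  # the widget the visitor actually sees

def pick_ad(fb_cookie: str) -> str:
    # When the user returns to Facebook, the logged visits drive targeting.
    if any("cruise" in url for url in profiles[fb_cookie]):
        return "Carnival Cruise Line ad"
    return "generic ad"

# Example: a user browses a cruise site, then comes back to Facebook.
serve_like_button("cookie-123", "https://example-cruises.test/deals")
print(pick_ad("cookie-123"))  # -> "Carnival Cruise Line ad"
```

In reality the signal travels through cookies and request headers on the widget load; the sketch just collapses that plumbing into function arguments.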

Instagram is cracking down on influencers’ branded posts and the products they hawk. The company announced that although it’s always prohibited branded posts that advertise vapes, tobacco, and weapons, it’s going to start enforcing those rules more strictly. (Ashley Carman / The Verge)

Facebook, Microsoft, Twitter and others are spending millions on video game streaming to compete with Amazon’s Twitch. But so far, Twitch is still winning. The platform increased its viewership during the third quarter, despite losing Tyler Blevins, better known by his online alias, Ninja, to Microsoft’s Mixer back in August. (Imad Khan / The New York Times)

This writer made a deepfake of Mark Zuckerberg, and the process was neither time-consuming nor expensive. (Timothy B. Lee / Ars Technica)

Spotify is prototyping a new way to see what friends have been listening to, called “Tastebuds.” It’s the first truly social feature the company has launched since killing off its inbox in 2017. And it has a great name! (Josh Constine / TechCrunch)

A mobile game called Photo Roulette is gaining popularity with younger users. Players invite up to 49 friends to join, and the app then chooses a random photo from someone’s phone and displays it to the rest of the group (other players have to guess who it came from). What could go wrong?! (Julie Jargon / The Wall Street Journal)

YouTube inspired a toy company to launch a line of dolls (L.O.L. Dolls!) designed specifically to make for interesting unboxing videos. The result was a $5 billion brand. (Chavie Lieber / The New York Times)

And finally...

Can’t decide if this is evil or perfect.

Either way, I do want this feature for the weight benches at my gym.

Talk to us

Send us tips, comments, questions, and your 2020 social network predictions: casey@theverge.com and zoe@theverge.com.