
The tier list: how Facebook decides which countries need protection

Leaked documents reveal a huge, opaque system


At the end of 2019, the group of Facebook employees charged with preventing harms on the network gathered to discuss the year ahead. At the Civic Summit, as it was called, leaders announced where they would invest resources to provide enhanced protections around upcoming global elections — and also where they would not. In a move that has become standard at the company, Facebook had sorted the world’s countries into tiers.

Brazil, India, and the United States were placed in “tier zero,” the highest priority. Facebook set up “war rooms” to monitor the network continuously. It created dashboards to analyze network activity and alerted local election officials to any problems.

Germany, Indonesia, Iran, Israel, and Italy were placed in tier one. They would be given similar resources, minus some for enforcing Facebook’s rules and for alerts outside the period directly around the election.

Twenty-two countries were placed in tier two. They would have to go without the war rooms, which Facebook also calls “enhanced operations centers.”

The rest of the world was placed into tier three. Facebook would review election-related material if content moderators escalated it to the company. Otherwise, it would not intervene.

Documents show significant variation in content moderation resources afforded to different countries

The system is described in disclosures made to the Securities and Exchange Commission and provided to Congress in redacted form by Frances Haugen’s legal counsel. A consortium of news organizations, including Platformer and The Verge, has obtained the redacted versions received by Congress. Some documents served as the basis for earlier reporting in The Wall Street Journal.

The files contain a wealth of documents describing the company’s internal research, its efforts to promote users’ safety and well-being, and its struggles to remain relevant to a younger audience. They highlight the degree to which Facebook employees are aware of the gaps in their knowledge about issues in the public interest and their efforts to learn more. 

But if one theme stands out more than others, it’s the significant variation in content moderation resources afforded to different countries based on criteria that are not public or subject to external review. For Facebook’s home country of the United States, and other countries considered at high risk of political violence or social instability, Facebook offers an enhanced suite of services designed to protect the public discourse: translating the service and its community standards into those countries’ official languages; building AI classifiers to detect hate speech and misinformation in those languages; and staffing teams to analyze viral content and respond quickly to hoaxes and incitement to violence on a 24/7 basis.

Other countries, such as Ethiopia, may not even have the company’s community standards translated into all of their official languages. Machine learning classifiers to detect hate speech and other harms are not available. Fact-checking partners don’t exist. War rooms never open.

For a regular company, it’s hardly controversial to allocate resources differently based on market conditions. But given Facebook’s key role in civic discourse — it effectively replaces the internet in some countries — the disparities are cause for concern. 

For years now, activists and lawmakers around the world have criticized the company for the inequality in its approach to content moderation. But the Facebook Papers offer a detailed look into where Facebook provides a higher standard of care — and where it doesn’t. 

Among the disparities:

  • Facebook lacked misinformation classifiers in Myanmar, Pakistan, and Ethiopia, countries designated at highest risk last year.
  • It also lacked hate speech classifiers in Ethiopia, which is in the midst of a bloody civil conflict.
  • In December 2020, an effort to place language experts into countries had succeeded in only six of ten “tier one” countries and zero tier two countries.

Miranda Sissons, Facebook’s director of human rights policy, told me that allocating resources this way reflects the best practices suggested by the United Nations in its Guiding Principles on Business and Human Rights. Those principles call on businesses to consider the human rights impact of their operations and to mitigate any issues based on their scale and severity, and on whether the company can devise an effective remedy for them.

Sissons, a career human rights activist and diplomat, joined Facebook in 2019. That was the year the company began developing its approach to what it calls “at-risk countries” — places where social cohesion is declining and where Facebook’s network and powers of amplification risk fueling incitement to violence.

Facebook can conduct sophisticated intelligence operations when it chooses to

The threat is real: other documents in the Facebook Papers detail how new accounts created in India that year would quickly be exposed to a tide of hate speech and misinformation if they followed Facebook’s recommendations. (The New York Times detailed this research on Saturday.) And even at home in the United States, where Facebook invests the most in content moderation, documents reflect the degree to which employees were overwhelmed by the flood of misinformation on the platform leading up to the January 6th Capitol attack. (The Washington Post and others described these records over the weekend.)

Documents show that Facebook can conduct sophisticated intelligence operations when it chooses to. An undated case study into “adversarial harm networks in India” examined the Rashtriya Swayamsevak Sangh, or RSS — a nationalist, anti-Muslim paramilitary organization — and its use of groups and pages to spread inflammatory and misleading content. 

The investigation found that a single user in the RSS had generated more than 30 million views. But the investigation noted that, to a large extent, Facebook is flying blind: “Our lack of Hindi and Bengali classifiers means much of this content is never flagged or actioned.” 

One solution could be to penalize RSS accounts. But the group’s ties to India’s nationalist government made that a delicate proposition. “We have yet to put forth a nomination for designation of this group given political sensitivities,” the authors said. 

Facebook likely spends more on integrity efforts than any of its peers, though it is also the largest of the social networks. Sissons told me that ideally, the company’s community standards and AI content moderation capabilities would be translated into the languages of every country where Facebook is operating. But even the United Nations supports only six official languages; Facebook has native speakers moderating posts in more than 70.

Even in countries where Facebook’s tiers appear to limit its investments, Sissons said, the company’s systems regularly scan the world for political instability or other risks of escalating violence so that the company can adapt. Some projects, such as training new hate speech classifiers, are expensive and take many months. But other interventions can be implemented more quickly.

Still, documents reviewed by The Verge also show the way that cost pressures appear to affect the company’s approach to policing the platform. 

“These are not easy trade-offs to make.”

In a May 2019 note titled “Maximizing the Value of Human Review,” the company announced that it would create new hurdles to users reporting hate speech in hopes of reducing the burden on its content moderators. It also said it would automatically close reports without resolving them in cases where few people had seen the post or the issue reported was not severe. 

The author of the note said that 75 percent of the time, reviewers found hate speech reports did not violate Facebook’s community standards and that reviewers’ time would be better spent proactively looking for worse violations. 

But there were concerns about expenses as well. “We’re clearly running ahead of our [third-party content moderation] review budget due to front-loading enforcement work and will have to reduce capacity (via efficiency improvements and natural rep attrition) to meet the budget,” the author wrote. “This will require real reductions in viewer capacity through the end of the year, forcing trade-offs.” 

Employees have also found their resources strained in the high-risk countries that the tier system identifies.

“These are not easy trade-offs to make,” reads the introduction to a note titled “Managing hostile speech in at-risk countries sustainably.” (Facebook abbreviates these countries as “ARCs.”)

“Supporting ARCs also comes at a high cost for the team in terms of crisis response. In the past months, we’ve been asked to firefight for India election, violent clashes in Bangladesh, and protests in Pakistan.” 

The note says that after a country is designated a “priority,” it typically takes a year to build classifiers for hate speech and to improve enforcement. But not everything gets to be a priority, and the trade-offs are difficult indeed.

“We should prioritize building classifiers for countries with on-going violence … rather than temporary violence,” the note reads. “For the latter case, we should rely on rapid response tools instead.” 

A pervasive sense that, on some fundamental level, no one is entirely sure what’s going on

After reviewing hundreds of documents and interviewing current and former Facebook employees about them, it’s clear that a large contingent of workers within the company are trying diligently to rein in the platform’s worst abuses, using a variety of systems that are dizzying in their scope, scale, and sophistication. It’s also clear that they are facing external pressures over which they have no control — the rising right-wing authoritarianism of the United States and India did not begin on the platform, and the power of individual figures like Donald Trump and Narendra Modi to promote violence and instability should not be underestimated.

And yet, it’s also hard not to marvel once again at Facebook’s sheer size; the staggering complexity of understanding how it works, even for the people charged with operating it; the opaque nature of systems like its at-risk countries “work stream”; and the lack of accountability in cases where, as in Myanmar, the whole thing spun violently out of control.

Some of the most fascinating documents in the Facebook Papers are also the most mundane: cases where one employee or another wonders out loud what might happen if Facebook changed this input to that one or ratcheted down this harm at the expense of that growth metric. Other times, the documents find them struggling to explain why the algorithm shows more “civic content” to men than women, or why a bug let a violence-inciting group in Sri Lanka automatically add half a million people to its membership — without their consent — over a three-day period.

There is a pervasive sense that, on some fundamental level, no one is entirely sure what’s going on.

In the documents, comment threads pile up as everyone scratches their heads. Employees quit and leak the documents to the press. The communications team reviews the findings, writes up a somber blog post, and affirms that There Is More Work To Do.

Congress growls. Facebook changes its name. The world’s countries, neatly arranged into tiers, hold their breath.


This column was co-published with Platformer, a daily newsletter about Big Tech and democracy.

