
A Facebook civil rights audit could have unintended consequences

A new initiative to fight hate speech will put new pressure on Facebook’s fragile moderators

Illustration by Alex Castro / The Verge

As a longtime listener of her show, I was delighted to speak with Terry Gross on today’s edition of Fresh Air. The subject was my recent pieces on Facebook content moderators. Give it a listen!

By May of 2018, Facebook had received sustained criticism that the platform consistently enabled civil rights abuses. (Much of the criticism came after articles published by ProPublica demonstrating various ways that Facebook’s advertising platform could promote discrimination.) In response, the company announced that it had commissioned an independent civil rights audit — an effort to understand how Facebook promotes discrimination, and to develop recommendations for improvement.

In December, Facebook posted its first update about the audit, saying that the work had led to new efforts to fight voter suppression and encourage voter registration on the platform. And on Sunday, Facebook posted a second update. Joseph Cox summarizes it for us in Vice:

The report itself is split into four sections: content moderation and enforcement; advertising targeting practices; elections and census; and the civil rights accountability structure. With content moderation, the audit focused on harassment on Facebook; the under-enforcement of policies where hate speech is then left on the platform; and Facebook’s over-enforcement of hate speech policies where users have content removed that actually condemned or spoke out against hate speech. The audit was conducted with civil rights law firm Relman, Dane & Colfax and Megan Cacace, one of the firm’s partners. [...]

There are two major developments from this update. The first is that Facebook will work to protect the integrity of the upcoming US Census just as it would a national election. “We’re building a team dedicated to these census efforts and introducing a new policy in the fall that protects against misinformation related to the census,” the company said in its blog post. “We’ll enforce it using artificial intelligence. We’ll also partner with non-partisan groups to help promote proactive participation in the census.”

The second, more consequential development is that Facebook is extending its ban on speech promoting white nationalism. Alex Hern reports in the Guardian:

White nationalism and white separatism were previously allowed on Facebook as the company considered only white “supremacy” to be in breach of its hate speech policies. However, in March 2019 it updated its rules to ban the explicit praise, support or representation of the former two ideologies as well.

Facebook’s chief operating officer, Sheryl Sandberg, said in response to the audit: “We’re addressing this by identifying hate slogans and symbols connected to white nationalism and white separatism to better enforce our policy.

“We also recently updated our policies so Facebook isn’t used to organise events that intimidate or harass people based on their race, religion or other parts of their identity. We now ban posts from people who intend to bring weapons anywhere to intimidate or harass others, or who encourage people to do the same. Civil rights leaders first flagged this trend to us, and it’s exactly the type of content our policies are meant to protect against.”

This is all fairly straightforward. Civil rights groups audited Facebook and found lots of hate speech, and they want the company to eliminate more of it. But reading the report, I couldn’t help but notice one set of voices missing from the discussion: the moderators whose job it is to do all that hate speech removal.

The moderators were on my mind, thanks to a message I had received over the weekend from one of them. The moderator, who identified as a queer person of color, told me that moderating hate speech is the hardest part of the job. They wrote:

I also see the recurring pattern of graphic violence being brought up, and it can be awful at times — but most of our Tampa site consists of people of color. The most draining thing for the majority of us is the extreme amount of hate speech we see every day. There is so much of it that there is even a specific queue for it. It’s extremely depressing, and most of our on-site “therapists” are older white people who are very out of touch with current times. 

Graphic violence might be 5% of what we see, but at least 60% of my job consists of looking over hate speech and seeing how much people hate people of color and LGBT people. 

[The project] is a complete mess, and I have no idea how it’s still running. Facebook changes its policy every five seconds, and then my bosses get mad at me for not having 98% because I deleted something that is now an ignore. The other day I had to leave up a page with borderline child pornography because I was told we can’t assume a person saying they are 17 years old means they are a minor. 

I absolutely hate that place. They promote and reward based off of a metric (scores) that can be easily manipulated. ... It’s an absolute mess, and almost everyone there knows it. 

One of the ideas coming out of the civil rights audit is to create a dedicated hate speech queue. The idea, according to Sandberg, is that moderators will get better at moderating hate speech if a subset of them are tasked with moderating it continuously. This seems plausible — and so does the possibility that these moderators will be traumatized by the daily exposure to discriminatory posts.

Facebook is only in the earliest stages of surveying moderators about their mental health, as part of an effort to establish a baseline it can improve over time. There are currently no caps on the amount of graphic or racist content that a moderator can be subjected to in a day, and the company says there is currently no research on what levels of exposure are safe for the human mind.

And even before its survey of moderator well-being is finished, Facebook is simultaneously undertaking a new experiment on the mental health of thousands of contractors like the one who wrote me. It’s classic move-fast Facebook — placate one group of vocal critics, even if it puts a less vocal group at risk — and I worry about the consequences. It would be a shame if a civil rights audit of the platform led to a new mental health crisis among its contractors.


Trump officials weigh encryption crackdown

Eric Geller reports on an idea that, while in very early stages, could set the stage for a major showdown with Silicon Valley:

Senior Trump administration officials met on Wednesday to discuss whether to seek legislation prohibiting tech companies from using forms of encryption that law enforcement can’t break — a provocative step that would reopen a long-running feud between federal authorities and Silicon Valley.

The encryption challenge, which the government calls “going dark,” was the focus of a National Security Council meeting Wednesday morning that included the No. 2 officials from several key agencies, according to three people familiar with the matter.

Virginia’s ‘revenge porn’ laws now officially cover deepfakes

Adi Robertson reports that Virginia has moved with surprising alacrity to criminalize the act of making fake porn about people without their consent:

Virginia has officially expanded its nonconsensual pornography ban to include realistic fake videos and photos, including computer-generated “deepfakes.” The amendment was passed earlier this year and goes into effect today, making Virginia one of the first places with a law covering deepfakes.

Since 2014, Virginia has banned spreading nude images or video “with the intent to coerce, harass, or intimidate” another person. The amendment clarifies that this includes “a falsely created videographic or still image” — which could refer to “deepfakes” video but also Photoshopped images or otherwise faked footage. Violating the rule is a Class 1 misdemeanor, which carries up to 12 months in prison and up to $2,500 in fines.

Inside the Secret Border Patrol Facebook Group Where Agents Joke About Migrant Deaths and Post Sexist Memes

A.C. Thompson writes a disturbing profile of the 3-year-old Facebook group, which has about 9,500 members. Members recently “shared derogatory comments about Latina lawmakers who plan to visit a controversial Texas detention facility on Monday, calling them ‘scum buckets’ and ‘hoes,’” Thompson writes.

Perhaps the most disturbing posts target Ocasio-Cortez. One includes a photo illustration of her engaged in oral sex at an immigrant detention center. Text accompanying the image reads, “Lucky Illegal Immigrant Glory Hole Special Starring AOC.”

Another is a photo illustration of a smiling President Donald Trump forcing Ocasio-Cortez’s head toward his crotch. The agent who posted the image commented: “That’s right bitches. The masses have spoken and today democracy won.”

Big Data Supercharged Gerrymandering. It Could Help Stop It Too

Louise Matsakis investigates technological solutions to gerrymandering — which everyone expects to get worse following the Supreme Court’s ruling on the subject last week:

The good news is that the technology needed to crunch census data and draw district maps has been democratized. It’s now much easier—and cheaper—for journalists, researchers, and civil rights groups to track redistricting, using free or low-cost tools running on a laptop. If one political party tries to redraw the map, citizens can quickly check themselves whether the changes would equate to gerrymandering. “People will know in an instant that this is really screwed,” says Li. “You didn’t have that sort of instant analysis in the past.” He pointed to the proliferation of open source groups studying gerrymandering, like the Princeton Gerrymandering Project.

Twitter Conspiracy Theorist Charged With a Felony in Lynch Threat Against Muslim Candidate

Here’s a rare case where a death threat on Twitter leads to an actual felony charge. Kevin Poulsen reports:

Federal prosecutors in North Carolina have filed criminal charges over an anonymous Twitter post threatening to lynch a Muslim attorney vying for a seat in the Virginia Senate, marking a rare law enforcement action targeting hate speech on social media. 

Joseph Cecil Vandevere, of Buncombe County, North Carolina, was indicted June 20 on a single count of transmitting an interstate threat, a felony. FBI agents identified Vandevere as the author of a March 2018 tweet directed at Qasim Rashid, an attorney and author who this month became the first Muslim to win a Virginia primary race.

Trump Consultant Is Trolling Democrats With Biden Site That Isn’t Biden’s

Matthew Rosenberg finds a good old-fashioned fake political web page on the World Wide Web. Does this page’s existence — and popularity — suggest Facebook is becoming a less attractive place for hoaxes?

For much of the last three months, the most popular Joseph R. Biden Jr. website has been a slick little piece of disinformation that is designed to look like the former vice president’s official campaign page, yet is most definitely not pro-Biden. […]

All the site says about its creator is buried in the fine print at the bottom of the page. The site, it says, is a political parody built and paid for “BY AN American citizen FOR American citizens,” and not the work of any campaign or political action committee.

Operation Tripoli

Check Point Research discovered a multi-year effort to spread malware in Libya:

It seems that the tense political situation in Libya is useful to some, who use it to lure victims into clicking links and downloading files that are supposed to inform about the latest airstrike in the country, or the capturing of terrorists, but instead contain malware. […]

Based on information we shared, Facebook took down the pages and accounts that distributed the malicious artifacts belonging to this operation.


Facebook’s new terms of service explains how ads get targeted

Facebook’s terms of service got a light polishing, Russell Brandom reports:

Facebook has published new terms of service, giving more details on content removals, ad targeting, and users’ intellectual property rights. According to Facebook, the new terms don’t represent a change in how the platform actually operates, but they are aimed at giving users a clearer picture of the platform. The new terms take effect on July 31st.

TikTok’s Videos Are Goofy. Its Strategy to Dominate Social Media Is Serious.

Georgia Wells, Yang Jie, and Yoko Kubota have a great look at Bytedance as it makes inroads into the United States — in part by spending a reported $1 billion on advertising. There’s also this intriguing nugget about its future expansion plans:

Bytedance has expressed interest in buying Snap if the U.S. company gets closer to profitability, people knowledgeable about Bytedance’s plans said. Bytedance has also considered buying Twitter Inc. and Quora, several people said. Snap CEO Evan Spiegel has said he has no interest in selling, and a person familiar with Snap said Bytedance didn’t express its interest to Snap. Twitter and Quora declined to comment.

Memes Are the New Pop Stars: How TikTok Became the Future of the Music Industry

Alyssa Bereznak reports on how TikTok became an effective talent scout for the record industry:

Instead of personal connections, the network thrives on a constant flow of so-called “challenges,” or prompts that encourage disparate users to participate in a momentary trend, be it “eating to a beat” or a dance move called “the woah.” Similar to the “Superman” or the “In My Feelings” dances, challenges capitalize on a cultural moment. But on TikTok, less credence is given to the originator, and users are often rewarded for adding their own personal spin on an existing action.

In short, they’re like any old meme, except the user—not a single static image or video—is the star. And, as is the case with most social networks, most of the “stars” on TikTok are simply young people who have gained massive followings for being good-looking, meme-savvy, prolific, and occasionally charismatic. 

With costs rising, Facebook traffic arbitrage strategies lose effectiveness

Buying traffic from Facebook is less profitable than it used to be, Tim Peterson reports:

Ranker had planned to spend $40 million to promote its content as ads on Facebook in 2018. But the company only ended up spending half of that money, according to Ranker CEO Clark Benson. The reason was that at the beginning of June 2018, Ranker saw that the cost of its promoted, or boosted, posts increased by roughly 30% overnight.

Around that same time, Topix saw a similar issue with its Facebook ad buys. Facebook was not approving its ads as often as it had been previously, which made it difficult to run as many ads on the social network, according to Tolles. “There was a lot more pressure to look at things on an individual basis. So it was harder for us to find something that worked and press the spend,” he said.

Creator of DeepNude, App That Undresses Photos of Women, Takes It Offline
The creator of DeepNude, an app that used a machine learning algorithm to “undress” images of clothed women, announced Thursday that he’s killing the software, after viral backlash over the way it objectifies women. 

Gay dating app Jack’d settles complaint over exposing private photos

The parent company of Jack’d will pay $240,000 to settle a complaint from New York’s attorney general after leaving users’ private photos in a public AWS bucket. Adi Robertson reports:

The Register and Ars Technica first reported on the Jack’d security flaw in February of 2019, noting that security researcher Oliver Hough had informed the company a year earlier to no avail. The popular dating app had uploaded photos to a publicly accessible Amazon Web Services storage bucket, even when users believed the pictures were private. The exposed data included nude photos and pictures that revealed a user’s location — potentially putting them at risk of blackmail or even arrest in some countries. Jack’d fixed the problem the day Ars published its story.


Walmart in Mexico launches grocery orders via WhatsApp

Here’s a positive sign for WhatsApp’s effort to transform itself into a commerce app. From Daina Beth Solomon:

A Reuters reporter tried the service on Monday, sending a photo of a handwritten grocery list. A company representative responded immediately, punctuating responses with smiley-face and winky-face emojis.

The representative said Superama charges 49 pesos ($2.55) for delivery within 90 minutes, or 39 pesos ($2.03) for a later delivery time, and would accept payment in cash or by card upon delivery.

Bumble becomes one of the first major dating platforms to introduce in-app video and voice calls

Dating apps typically don’t offer these features for a reason — the reason being that men cannot be trusted to keep their private parts out of the camera frame. So we’ll see how this goes. Ashley Carman:

Bumble’s figured out a new way to bring video to its app: in-app voice or video calling. The feature applies to all of Bumble’s use cases, including Bumble Bizz for making professional connections; Bumble BFF for making friends; and Bumble for dating. The option to start a call will surface only once a match has been made. Women can call from that initial match while men have to wait until a woman has made the first move. The overall appeal is that people don’t have to swap phone numbers to chat, so if they unmatch, the other person loses their ability to make a call.


Libra and taxes

Matt Levine ponders the tax implications of Libra, which are still up in the air:

Sure sure sure if you pay taxes in Europe or the U.S. or wherever, the fluctuating value of Libra will make it an extraordinarily inconvenient way to buy things. But once you start paying taxes to Facebook to fund the costs of world government, Libra will be the natural way to do that, and buying stuff with dollars and euros will start to trigger annoying capital-gains issues on your Facebook tax return.

Am I joking? Sure yes of course, but consider that that tax lawyer is not wrong. Libra is absurd in the current state of the world, with its national governments and currencies and tax regimes. Perhaps that means that Facebook is kidding, or hasn’t thought things through, or is not really planning for Libra to be a meaningful medium of exchange. Or perhaps it means Facebook is deadly serious and has thought things through carefully and has concluded that governments and currencies and tax regimes are problems that can be solved.

And finally ...

Sometimes, seeing one good post on a social network can make you a lifetime customer:

Talk to me

Send me tips, comments, questions, and civil rights violations: