
Facebook’s content moderation efforts face increasing skepticism


Academics doubt that human speech can be regulated at global scale


Illustration by Alex Castro / The Verge

It’s been a big week for tales of content moderation. Last Friday, Radiolab posted a fascinating episode about Facebook’s evolving approach to deciding what stays on the platform. The episode used a series of historical debates inside the company to illuminate the many challenges faced by a company seeking to regulate speech globally.

On Thursday, Motherboard’s Jason Koebler and Joseph Cox posted their own history of content moderation on Facebook, but with an eye turned to the future. Drawing on interviews with Facebook executives, current and former moderators, and academic researchers, Koebler and Cox set out to define the scope of the problem and Facebook’s efforts to wrap its arms around it.

It’s a piece that credits Facebook for the thoughtful work it has done on the subject, while expressing deep skepticism about the project overall. A passage midway through the piece has stayed with me:

Size is the one thing Facebook isn’t willing to give up. And so Facebook’s content moderation team has been given a Sisyphean task: Fix the mess Facebook’s worldview and business model has created, without changing the worldview or business model itself.

“Making their stock-and-trade in soliciting unvetted, god-knows-what content from literally anyone on earth, with whatever agendas, ideological bents, political goals and trying to make that sustainable — it’s actually almost ridiculous when you think about it that way,” Roberts, the UCLA professor, told Motherboard. “What they’re trying to do is to resolve human nature fundamentally.”

Facebook likely wouldn’t say it’s attempting to “resolve” human nature. But it’s true that the company’s efforts all begin from the notion that it can and should connect everyone on earth — despite having little certainty about the consequences of doing so.

The piece draws new attention to the role Sheryl Sandberg has played in content moderation disputes, reporting that Facebook’s chief operating officer does weigh in on difficult calls from time to time. (CEO Mark Zuckerberg does as well.) It examines the difficulty in retaining content moderators, who are tasked with an often terrible job, and — due to an ever-evolving set of standards — must constantly be retrained.

One hate speech presentation obtained by Motherboard has a list of all the recent changes to the slide deck, including additions, removals, and clarifications of certain topics. In some months, Facebook pushed changes to the hate speech training document several times within a window of just a few days. In all, Facebook tweaked the material over 20 times in a five-month period. Some policy changes are slight enough to not require any sort of retraining, but other, more nuanced changes need moderators to be retrained on that point. Some individual presentations obtained by Motherboard stretch into the hundreds of slides, stepping through examples and bullet points on why particular pieces of content should be removed.

Mostly, like Radiolab before it, the piece conveys the enormity of the challenge. The reaction online was largely skeptical. (New York’s Max Read: “All these supposed Nietzsche fans joined together in the unifying project of tediously applying a ‘consistent’ set of rules to all discourse and human relations. Best of luck!!!”)

Perhaps future events will complicate Facebook’s project of connecting everyone. But if they don’t, I return to the idea that the company ought to proceed with Zuckerberg’s idea of a Supreme Court for content moderation. Like I said earlier this week: ultimately, the question of what belongs on Facebook can’t be decided solely by the people who work there.

Germany

Yesterday, I promised I’d let you know if the authors of that study on Facebook and refugee violence in Germany got back to me. My question was how they could account for chronology in their study: it seemed impossible, given the data presented, that they could say with certainty that posts on Facebook led to violence, rather than the other way around.

I heard back from one of the authors, Karsten Müller. He made two key points. One, the study takes pains not to say with any certainty that it proves causality. The authors’ exact line is: “The results in this section should be interpreted as purely suggestive and do not allow for causal inference.”

The second point rests on a detailed description of the study’s methodology, which relies on developing models of Facebook usage based on interactions on public pages and then correlating them with instances of anti-refugee violence. Müller told me that what allows the authors to make some claims about causation is the study’s use of internet outages to determine when German municipalities had less exposure to Facebook:

As it turns out, we find that the correlation between the interaction of local social media penetration and a measure of anti-refugee sentiments on Facebook on one hand, and hate crimes on the other, appears to vanish in weeks such outages occur. A graph that visualizes these results can be found in the newest version of the paper, available on SSRN (look for “binned scatter plot”). If one wants to claim that social media does not have any propagating effect on hate crimes when tensions are already high (which is what we are measuring), one would need to explain the mediating effect of these outages.
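To make that design concrete, here is a minimal sketch of the kind of interaction regression Müller describes, written in Python with pandas and statsmodels. The input file and every column name are hypothetical, and it illustrates the general approach rather than the authors’ actual code or specification.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical panel: one row per municipality-week.
df = pd.read_csv("municipality_weeks.csv")

# "Exposure" proxies local Facebook penetration interacted with that
# week's anti-refugee sentiment on Facebook's public pages.
df["exposure"] = df["fb_penetration"] * df["refugee_sentiment"]

# If the exposure-crime correlation vanishes in outage weeks, the
# coefficient on exposure:outage should roughly offset exposure's.
model = smf.ols(
    "hate_crimes ~ exposure + exposure:outage + outage"
    " + C(municipality) + C(week)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["municipality"]})
print(model.summary())
```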

So how might you explain that mediating effect? Tyler Cowen takes a crack at it on his blog:

Even if internet or Facebook outages do have a predictive effect on attacks in some manner, it likely shows that Facebook is a communications medium used to organize gatherings and attacks (as the telephone once might have been), not, as the authors repeatedly suggest, that Facebook is somehow generating and whipping up and controlling racist sentiment over time. Again, compare such a possibility to the broader literature. There is good evidence that anti-semitic violence across German regions is fairly persistent, with pogroms during the Black Death predicting synagogue attacks during the Nazi time. And we are supposed to believe that racist feelings dwindle into passivity simply because the thugs cannot access Facebook for a few days or maybe a week? 

Cowen’s post is still worth reading in full. As I mentioned yesterday, much of the confusion around the German study stems from the fact that most of the relevant data is unavailable to us. Anonymized data, shared securely with well-vetted academics, would likely bring us much closer to the truth.

Democracy

Google deletes accounts with ties to Iran on YouTube and other sites

The Iranian influence campaign unearthed by FireEye and Facebook had planks on YouTube and Blogger as well, Tony Romm reports:

Google announced Thursday that it deleted 58 accounts with ties to Iran on its video platform YouTube and its other sites, the latest sign that foreign agents from around the world increasingly seek to spread disinformation on a broad array of popular websites.

The new removals targeted 39 channels on YouTube, which had more than 13,000 views in the United States, as well as 13 accounts on the social networking site Google Plus and six accounts on Blogger, its blogging platform, the company said. Kent Walker, Google’s senior vice president of global affairs, said in a blog post that each of the accounts had ties to the Islamic Republic of Iran Broadcasting, or IRIB, which is tied to Iran’s ayatollah, and that they “disguised their connection to this effort.”

How FireEye Helped Facebook Spot a Disinformation Campaign

Kate Conger and Sheera Frenkel walk us through how FireEye found the Iranian operation:

“It started with a single social media account or a small set of accounts that were pushing this political-themed content that didn’t necessarily seem in line with the personas that the accounts had adopted,” said Mr. Foster. Many of the fake accounts, which sprawled across Facebook, Instagram, Twitter and Reddit, shared content from Liberty Front Press.

Over two months, Mr. Foster and a small group of analysts mapped the connections between the accounts and unearthed more of them.

Attempted Hacking of Voter Database Was a False Alarm, Democratic Party Says

On Wednesday I brought you news that the Democratic National Committee was potentially a hacking target. It turns out that the Michigan Democratic Party had hired hackers to simulate an attack.

Fake news war gets sophisticated before 2018 midterm elections

Sara Fischer examines some of the ways that misinformation is evolving ahead of the midterms:

More sophisticated bot tools: New tools are being created to manipulate information at the blog or comment level on everyday websites, says Mike Marriott, researcher at Digital Shadows, a digital security firm.

In a new report, Marriott explains that tools such as BotMasterLabs and ZennoStore claim to promote content across hundreds of thousands of platforms, including forums, blogs, and bulletin boards, but in reality, they control large numbers of bots that are programmed to post on specific types of forums on different topics.

Memphis police used fake Facebook account to monitor Black Lives Matter, trial reveals

It’s not just Russians and Iranians engaging in “coordinated inauthentic behavior” — it’s now also a strategy for police officers, Antonia Noori Farzan reports:

Bob Smith said he lived in Oxford, Miss. On Facebook, he “liked” pages for Black Lives Matter, Sen. Bernie Sanders (I-Vt.), Memphis Voices For Palestine, Mid-South Peace and Justice, and comedian Rickey Smiley, according to images obtained by The Appeal. “I’m not a cop,” he wrote in a private Facebook message to one activist, adding that he would be interested in attending protests in the Memphis area, but it was a bit of a drive. In lieu of a profile picture, he uploaded an illustration of a Guy Fawkes mask.

That’s because “Bob Smith” wasn’t a person of color, as he had claimed online. On Monday, Sgt. Timothy Reynolds, a white detective with the Memphis Police Department’s Office of Homeland Security, testified in federal court that he had created the account and friended hundreds of activists, according to the Commercial Appeal.

Facebook Bans Quiz App That Captured Data of Four Million Users

Deepa Seetharaman gets a statement from the guy whose personality quiz app got banned by Facebook on Wednesday:

One of the researchers behind the app, David Stillwell, called the ban “nonsensical and purely for PR reasons.” Mr. Stillwell said the findings from the myPersonality app were used to publish several social-science research papers in recent years and that he and his research partner were invited to Facebook’s offices in 2011 and 2015 to discuss their work.

“It is therefore odd that Facebook should suddenly now profess itself to have been unaware of the myPersonality research and to believe that the data may have been misused,” Mr. Stillwell said in a statement.

Russian Trolls Are Spreading Confusion About Vaccine Safety On Twitter

Here’s a story from Azeen Ghorayshi that takes Twitter’s constant talk of “platform health” and makes it literal:

Accounts run by the Russian government-backed Internet Research Agency tweeted about vaccines roughly 22 times more frequently than the average Twitter user, the study found. The tweets fell roughly equally into pro-vaccine and anti-vaccine categories.

“They don’t seem to have a particular agenda concerning vaccines — rather they seem to have a desire to boost both sides of the debate,” said David Broniatowski, assistant professor of engineering at George Washington University and lead author of the study. “That’s consistent with this idea of spreading discord.”

Tech Giants Are Becoming Defenders of Democracy. Now What?

Issie Lapowsky asks why it appears that tech companies are doing more to protect the nation from cyberattacks than the government:

The Department of Justice has issued scathing indictments of Russian hackers and trolls this year, but without international jurisdiction they’re largely symbolic. The White House axed its top cyber policy position following the departure of former cybersecurity czar Tom Bossert in April. The Global Engagement Center, a State Department initiative that was directed to counter Russian propaganda, has been starved for resources for much of the past year. And it’s anyone’s guess who, exactly, is responsible for making sure that information gets shared with the right people across the public and private sector.

“Every agency is off doing its own thing. No one is in charge,” says Brett Bruen, former White House director of global engagement under President Obama. “We continue to have a very siloed process within the government, let alone bringing the private sector to the table to try to figure this out together.”

Suspect who fatally stabbed black man in Pennsylvania ‘liked’ nearly 50 racist alt-right Facebook pages

The suspect in the killing of a black man in Pennsylvania was steeped in white nationalism on Facebook, reports Alex Amend:

On Facebook, Rocco “liked” nearly 50 pages that traffic in memes and slang favored by the alt-right and the broader white nationalist movement.

Rocco subscribed to pages like “Alt-Right Meme Magic,” “Smash Cultural Marxism,” and “Lazer-Beamed Memes with Fashy Themes,” as well as a scattering of motivational speakers and other pages dedicated to body building. Rocco subscribed to the page of Identity Dixie, an SPLC-designated neo-Confederate hate group.

Elsewhere

Apple removes Facebook Onavo app from App Store

Onavo makes a VPN app that users can download to browse the web more privately. But Facebook, which bought Onavo in 2013, has used it as an early-warning system to see which competitive apps might be gaining traction. (A former Facebooker told me the scariest apps grew “low and slow” — small in absolute numbers, but steadily, and while retaining their existing user base.) Now Apple, which has made patting itself on the back over privacy issues a key corporate messaging strategy, has pressured Facebook to take Onavo out of the App Store. It will remain on Android, but it’s still a blow to Facebook.

Facebook poaches new CMO Antonio Lucio from HP

Facebook has a new head of marketing to replace Gary Briggs.

Facebook business executive Dan Rose leaves

As Peter Kafka notes here, this is the second big-deal Facebook departure this summer, after head of policy and communications Elliot Schrage. Rose, who joined Facebook in 2006, oversaw business development and led the company’s acquisition of Instagram, among other things.

The News Literacy Project is teaching kids to stop fake news

Mark Sullivan and Tim Bajarin profile the News Literacy Project:

The News Literacy Project, an education program aimed at helping young people distinguish real news from fake news in the age of weaponized social media, attacks the fake news problem at the consumer level. The Washington, D.C.-based nonprofit says that since the 2016 elections, it’s been fielding a surge in demand from teachers across the world. Recently, it received a $1 million grant from Facebook to help expand its curricula. By helping kids hone their own bullshit detectors, the NLP, and projects like it, may offer our best hope against fake news.

Venmo Considers Making it Harder to See What Other People Are Buying

Venmo is finally, kind of, maybe getting rid of the public API that lets anyone scrape Venmo purchases to embarrass people online, reports Julie Verhage:

The debate inside PayPal was sparked by concerns over the privacy of its users. This summer, a researcher drew attention to Venmo’s privacy settings, which default to public, with her analysis of more than 200 million transactions on the platform. PayPal has said it gives users the option to share only with friends or with the recipient, and that users can adjust this for each transaction.

Launches

Wickr has a new plan for dodging internet blocks

The encrypted chat app is partnering with a company named Psiphon to obscure the origins of traffic, my colleague Russell Brandom reports. As more countries seek to ban chat apps, workarounds like this could become more common.

NewsGuard Fights Fake News With Humans, Not Algorithms

Steven Brill’s fake-news-fighting company has a Chrome extension that rates news sites on trustworthiness, with input from its own reporters. Issie Lapowsky:

To vet the sites, they use a checklist of nine criteria that typically denote trustworthiness. Sites that don’t clearly label advertising lose points, for example. Sites that have a coherent correction policy gain points. If you install NewsGuard and browse Google, Bing, Facebook, or Twitter, you’ll see either a red or green icon next to every news source, a binary indicator of whether it meets NewsGuard’s standards. Hover over the icon, and NewsGuard offers a full “nutrition label,” with point-by-point descriptions of how it scored the site, and links to the bios of whoever scored them.

The tool is designed to maximize transparency, says Steve Brill, NewsGuard’s cofounder, best known for founding the cable company Court TV. “We’re trying to be the opposite of an algorithm,” he says. Brill started NewsGuard with Gordon Crovitz, former publisher of The Wall Street Journal.
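The red-or-green mechanics described above boil down to a weighted checklist with a cutoff. Here is a toy sketch in Python; the criteria, weights, and threshold are invented for illustration and are not NewsGuard’s actual nine-point rubric.

```python
# Toy sketch of a NewsGuard-style checklist scorer. The criteria and
# weights are hypothetical stand-ins, not NewsGuard's real rubric.
CRITERIA = {
    "labels_advertising": 10,
    "has_correction_policy": 15,
    "discloses_ownership": 10,
    # ...plus the remaining weighted criteria in a real nine-point list
}
GREEN_THRESHOLD = 25  # invented cutoff separating "green" from "red"

def rate_site(checklist: dict[str, bool]) -> str:
    """Sum the weights of criteria a site meets; map the total to a rating."""
    score = sum(w for name, w in CRITERIA.items() if checklist.get(name))
    return "green" if score >= GREEN_THRESHOLD else "red"

print(rate_site({"labels_advertising": True, "has_correction_policy": True}))
# -> green (25 points under these invented weights)
```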

Takes

The NYTimes shouldn’t have relied so heavily on that Facebook and anti-refugee study.

Felix Salmon has more cold water to throw on the study of Facebook in Germany:

With hindsight, the Times should have avoided terms like “landmark” and “breathtaking,” and should probably have avoided mentioning specific results at all. The white paper is intriguing, and it was a great idea to use it as a jumping-off point for the newspaper’s shoe-leather reporting. The study was not, however, something to cite as a significant scientific advance. Facebook deliberately makes it extremely difficult for external researchers to quantify its effects on society, which means the best we can hope for is to piece together a jigsaw puzzle of suggestive evidence. (If the company would just make its data available, we’d stop being forced to estimate via imperfect Nutella-proxies.) But as things currently stand, no one piece of research is going to be the kind of smoking gun that the Times tries to turn this one into. 

Can Facebook, or Anybody, Solve the Internet’s Misinformation Problem?

Farhad Manjoo is pessimistic about platforms’ efforts to curb misinformation:

Consider the most pressing question: How confident should you be that the coming midterm elections will be safe from hacking and propaganda operations online? The most likely answer: Nobody knows for sure, but probably not very confident.

WhatsApp has a fake news problem—that can be fixed without breaking encryption

In a clever essay, Himanshu Gupta and Harsh Taneja argue that WhatsApp could curb the spread of hoaxes by identifying them at the metadata level. (Gupta formerly worked at WhatsApp rival WeChat.)

Therefore, even if WhatsApp can’t actually read the contents of a message, it can access the unique cryptographic hash of that message (which it uses to enable instant forwarding), the time the message was sent, and other metadata. It can also potentially determine who sent a particular file to whom. In short, it can track a message’s journey on its platform (and thereby, fake news) and identify the originator of that message.

If WhatsApp can identify a particular message’s metadata precisely, it can tag that message as “fake news” after appropriate content moderation. It can be argued that WhatsApp can also, with some tweaks to its algorithm, identify the original sender of a fake news image, video, or text and potentially also stop that content from further spreading on its network.
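To see how that could work in practice, here is a minimal sketch in Python of the metadata-level tracking the essay proposes. Everything in it is hypothetical: the names, the data structures, and the assumption that the server sees a stable content hash plus routing metadata but never the plaintext.

```python
# Hypothetical sketch: the server handles only a content hash plus
# routing metadata, never the decrypted message body.
import time
from collections import defaultdict

flagged_hashes: set[str] = set()  # hashes tagged as hoaxes after moderation
journeys: dict[str, list[tuple[str, str, float]]] = defaultdict(list)

def record_forward(content_hash: str, sender: str, recipient: str) -> bool:
    """Log one hop of a message's journey; return False if it is flagged."""
    journeys[content_hash].append((sender, recipient, time.time()))
    return content_hash not in flagged_hashes  # False: block or down-rank

def originator(content_hash: str) -> str | None:
    """Return the first sender seen for this hash (the likely origin)."""
    hops = journeys.get(content_hash)
    return hops[0][0] if hops else None

def tag_as_fake(content_hash: str) -> None:
    """After moderators confirm a hoax, tag its hash so later forwards
    of identical content can be stopped without breaking encryption."""
    flagged_hashes.add(content_hash)
```

One caveat worth noting: a scheme like this only catches verbatim forwards, since any edit or re-encode of the content produces a new hash.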

I Want To Log Off

Steve Rousseau says social apps have changed the nature of friendship, in ways that make him feel depressed:

It’s increasingly feeling like if I’m not participating in the unending online conversation, I’m not participating in my friendships anymore. What started out as a group chat to organize bike rides is now a meme dumping ground interspersed with “we should all meet up and get beers some time.” (We’ve yet to meet up and get beers some time).

It’s not that the nature of friendship has changed, it’s that the internet made us believe that all of this was necessary and good.

And finally ...

LinkedIn Message Generator

Have you ever gotten a terrible recruiter pitch from LinkedIn? Now you can make your own, thanks to this amazing / terrible tool from Andrew Duberstein. Here was mine:

Hi Casey,

Super-pumped to meet you! My employer DeepCube, a de-centralized Enron, has just raised 103 DogeCoin to design the future of maritime piracy.

Amazed by your knowledge of A/B testing and hypergrowth, I think you’d be a great fit for our In-House Tea Specialist. Let’s grab coffee to discuss—how’s Thursday?

Have a good one!

Drew

How about never, Drew.

Talk to me

Send me tips, comments, or questions: casey@theverge.com. Or be like T., the nice young man who approached me at Blue Bottle in San Francisco on Thursday to introduce himself and discuss the past week of this very newsletter. Meeting newsletter readers in real life has been one of the great joys of writing The Interface. Say hi sometime!