How to think about polarization on Facebook

A fuzzy term and a new investigation reveal a tension at the heart of the company’s efforts to clean up the platform

Illustration by Alex Castro / The Verge

I.

On Tuesday, the Wall Street Journal published a report about Facebook’s efforts to fight polarization since 2016, based on internal documents and interviews with current and former employees. Rich with detail, the report describes how Facebook researched ways to reduce the spread of divisive content on the platform, and in many cases set aside the recommendations of employees working on the problem. Here are Jeff Horwitz and Deepa Seetharaman:

“Our algorithms exploit the human brain’s attraction to divisiveness,” read a slide from a 2018 presentation. “If left unchecked,” it warned, Facebook would feed users “more and more divisive content in an effort to gain user attention & increase time on the platform.” [...]

Fixing the polarization problem would be difficult, requiring Facebook to rethink some of its core products. Most notably, the project forced Facebook to consider how it prioritized “user engagement”—a metric involving time spent, likes, shares and comments that for years had been the lodestar of its system.

The first thing to say is that “polarization” can mean a lot of things, and that can make the discussion about Facebook’s contribution to the problem difficult. You can use it in a narrow sense to talk about the way that a news feed full of partisan sentiment could divide the country. But you could also use it as an umbrella term to talk about initiatives related to what Facebook and other social networks have lately taken to calling “platform integrity” — removing hate speech, for example, or labeling misinformation.

The second thing to say about “polarization” is that while it has a lot of negative effects, it’s worth thinking about what your proposed alternative to it would be. Is it national unity? One-party rule? Or just everyone being more polite to one another? The question gets at the challenge of “fighting” polarization if you’re a tech company CEO: even if you see it as an enemy, it’s not clear what metric you would rally your company around to deal with it.

Anyway, Facebook reacted to the Journal report with significant frustration. Guy Rosen, who oversees these efforts, published a blog post on Wednesday laying out some of the steps the company has taken since 2016 to fight “polarization” — here used in that umbrella-term sense of the word. The steps include shifting the News Feed to include more posts from friends and family and fewer from publishers; starting a fact-checking program; more rapidly detecting hate speech and other malicious content using machine-learning systems and an expanded content moderation workforce; and removing groups that violate Facebook policies from algorithmic recommendations.

Rosen writes:

We’ve taken a number of important steps to reduce the amount of content that could drive polarization on our platform, sometimes at the expense of revenues. This job won’t ever be complete because at the end of the day, online discourse is an extension of society and ours is highly polarized. But it is our job to reduce polarization’s impact on how people experience our products. We are committed to doing just that.  

Among the reasons the company was frustrated with the story, according to an internal Workplace post I saw, is that Facebook had spent “several months” talking with the Journal reporters about their findings. The company gave them a variety of executives to speak with on and off the record, including Joel Kaplan, its vice president of global public policy, who often pops up in stories like this to complain that some action might disproportionately hurt conservatives.

In any case, there are two things I think are worth mentioning about this story and Facebook’s response to it. One is an internal tension in the way Facebook thinks about polarization. And the other is my worry that asking Facebook to solve for divisiveness could distract from the related but distinct issues around the viral promotion of conspiracies, misinformation, and hate speech.

First, that internal tension. On one hand, the initiatives Rosen describes to fight polarization are all real. Facebook has invested significantly in platform integrity over the past several years. And, as some Facebook employees told me yesterday, there are good reasons not to implement every suggestion a team brings you. Some might be less effective than other efforts that were implemented, for example, or they might have unintended negative consequences. Clearly some employees on the team feel that most of their ideas weren’t used, or were watered down, including employees I’ve spoken with myself over the years. But that’s true of a lot of teams at a lot of companies, and it doesn’t mean that all their efforts were for naught.

On the other hand, Facebook executives largely reject the idea that the platform is polarizing in the tearing-the-country-apart sense of the word. The C-suite closely read a working paper, which my colleague Ezra Klein wrote about earlier this year, that casts doubt on social networks’ contribution to the problem. The paper, by Levi Boxell, Matthew Gentzkow, and Jesse Shapiro, studies what is known as “affective polarization,” which Klein defines as “the difference between how warmly people view the political party they favor and the political party they oppose.” The authors found that affective polarization had increased faster in the United States than in any of the other countries they studied, but also that in several large, modernized nations with high internet usage, polarization was actually decreasing. Klein wrote:

One theory this lets us reject is that polarization is a byproduct of internet penetration or digital media usage. Internet usage has risen fastest in countries with falling polarization, and much of the run-up in US polarization predates digital media and is concentrated among older populations with more analogue news habits.

Klein, who published a book on the subject this year, believes that social networks contribute to polarization in other ways. But the fact that there are many large countries where Facebook usage is high and polarization is decreasing helps to explain why the issue is not top of mind for Facebook’s C-suite. As does Mark Zuckerberg’s own stated inclination against platforms making editorial judgments on speech. (Which he reiterated at a virtual shareholders’ meeting today.)

So here you have a case where Facebook can be “right” in a platform integrity sense — look at all these anti-polarization initiatives! — while the Journal is right in a larger one: Facebook has been designed as a place for open discussion, and human nature ensures that those discussions will often be heated and polarizing, and the company has chosen to take a relatively light touch in managing the debates. And it does so because executives think the world benefits from raucous, few-holds-barred discussions, and because they aren’t persuaded that those discussions are tearing countries apart.

Where Facebook can’t wriggle off the hook, I think, is in the Journal’s revelation of just how important its algorithmic choices have been in the spread of polarizing speech. Again, the problem here isn’t “polarization” in the abstract, but concrete harms related to anti-science, conspiracy, and hate groups, which grow using Facebook’s tools. The company often suggests that its embrace of free speech has created a neutral platform, when in fact its design choices often reward division with greater distribution.

This is the part of the Journal’s report that I found most compelling:

The high number of extremist groups was concerning, the presentation says. Worse was Facebook’s realization that its algorithms were responsible for their growth. The 2016 presentation states that “64% of all extremist group joins are due to our recommendation tools” and that most of the activity came from the platform’s “Groups You Should Join” and “Discover” algorithms: “Our recommendation systems grow the problem.”

Facebook says that extremist groups are no longer recommended. But just today, the disinformation researcher Nina Jankowicz joined an “alternative health” group on Facebook and immediately saw recommendations that she join other groups related to white supremacy, anti-vaccine activism, and QAnon.

Ultimately, despite its efforts so far, Facebook continues to unwittingly recruit followers for bad actors, who use it to spread hate speech and misinformation detrimental to public health. The good news is that the company has teams working on those problems, and it will surely develop new solutions over time. The question the Journal raises is how closely their bosses will listen when they do.

II.

On Tuesday, Twitter added a link to two of President Trump’s tweets, designating them as “potentially misleading.” It took this action because Trump, as part of a disinformation campaign alleging that voting by mail will trigger massive voter fraud, appeared to be interfering with the democratic process in violation of the company’s policies.

Trump was outraged about the links, and tweeted about being censored to his 80 million followers. He threatened to shut down social media companies. He said “big action” would follow. At the direction of a White House spokeswoman, right-wing trolls began to harass Yoel Roth, Twitter’s head of site integrity, who has previously tweeted criticism of Trump. Members of Congress including Marco Rubio and Josh Hawley tweeted that Twitter’s action could not stand, and that social platforms should lose Section 230 protections for moderating speech — willfully misunderstanding Section 230 in the way that they always do. Late in the day, there was word of a forthcoming executive order, with no other details.

I could spend a lot of time here speculating about the coming battle between social networks and the Republican establishment, with Silicon Valley’s struggling efforts to moderate their unwieldy platforms going head-to-head with Republicans’ bad-faith attempts to portray them as politically biased. But the past few years have taught us that while Congress is happy to kick and scream about the failures of tech platforms, it remains loath to actually regulate them.

It’s true that we have seen some apparent retaliation from Trump against social networks — the strange fair housing suit filed against Facebook last year comes to mind. And several antitrust cases are currently underway that could result in significant action. But for the most part, as Makena Kelly writes today in The Verge, the bluster is as far as it ever really goes:

The president has never followed through on his threats and used his considerable powers to place legal limits on how these companies operate. His fights with the tech companies last just long enough to generate headlines, but flame out before they can make a meaningful policy impact. And despite the wave of conservative anger currently raining down on Twitter, there’s no reason to think this one will be any different.

Those flameouts are most tangible in the courts. On the same day as Trump’s tweets, the US Court of Appeals in Washington ruled against the nonprofit group Freedom Watch and fringe right figure Laura Loomer in a case alleging that Facebook, Google, and Twitter conspired to suppress conservative content online, according to Bloomberg. Whether it be Loomer or Rep. Tulsi Gabbard (D-HI) fighting the bias battle, the courts have yet to rule in their favor.

In fact, as former Twitter spokesman Nu Wexler noted, Trump has even less leverage over Twitter than he does over other tech companies. “Twitter don’t sell political ads, they’re not big enough for an antitrust threat, and he’s clearly hooked on the platform,” Wexler tweeted. And whatever Trump may think, as the law professor Kate Klonick noted, “The First Amendment protects Twitter from Trump. The First Amendment doesn’t protect Trump from Twitter.”

Facts and logic aside, get ready: you’re about to hear a lot more cries from people complaining that they have been censored by Twitter. And it will be all over Twitter.

The Ratio

Today in news that could affect public perception of the big tech platforms.

🔃 Trending sideways: YouTube began fixing an error in its moderation system that caused comments containing certain Chinese-language phrases critical of China’s Communist Party to be automatically deleted. The company still won’t explain what caused the deletions in the first place, though some are speculating that Chinese trolls trained the YouTube algorithm to block the terms. (James Vincent / The Verge)

🔽 Trending down: Harry Sentoso, a warehouse worker in Irvine who was part of Amazon’s COVID-19 hiring spree, died after two weeks on the job. Sentoso was presumed to have the novel coronavirus after his wife tested positive. (Sam Dean / Los Angeles Times)

Virus tracker

Total cases in the US: More than 1,701,500 

Total deaths in the US: At least 100,000

Reported cases in California: 100,371

Total test results (positive and negative) in California: 1,696,396

Reported cases in New York: 369,801

Total test results (positive and negative) in New York: 1,774,128

Reported cases in New Jersey: 156,628

Total test results (positive and negative) in New Jersey: 635,892

Reported cases in Illinois: 114,448

Total test results (positive and negative) in Illinois: 786,794

Data from The New York Times. Test data from The COVID Tracking Project.

Governing

Whistleblowers say Facebook failed to warn investors about illegal activity happening on its platform. A complaint filed with the Securities and Exchange Commission late Tuesday includes dozens of pages of screenshots of opioids and other drugs for sale on Facebook and Instagram, reports Nitasha Tiku at The Washington Post:

The filing is part of a campaign by the National Whistleblower Center to hold Facebook accountable for unchecked criminal activity on its properties. By petitioning the SEC, the consortium is attempting to get around a bedrock law — Section 230 of the Communications Decency Act — that exempts Internet companies from liability for the user-generated content on their platform.

Instead, the complaint focuses on federal securities law, arguing that Facebook’s failure to tell shareholders about the extent of illegal activity on its platform is a violation of its fiduciary duty. If Facebook alienates advertisers and has to shoulder the true cost of scrubbing criminals from its social networks, it could affect investors in the company, the complaint argues.

Facebook ran a multi-year charm offensive to develop friendly relationships with powerful state prosecutors who could use their investigative powers to harm the company’s revenue growth. In the end, the strategy had mixed results: most of those attorneys general are now investigating the company for possible antitrust violations. I never cease to be amazed at how ineffective tech lobbying is, given the money that gets spent on it. (Naomi Nix / Bloomberg)

A federal appeals court rejected claims that Twitter, Facebook, Apple, and Google conspired to suppress conservative views online. The decision affirmed the dismissal of a lawsuit by the nonprofit group Freedom Watch and the right-wing YouTube personality Laura Loomer, who accused the companies of violating antitrust laws and the First Amendment in a coordinated political plot. (Erik Larson / Bloomberg)

The Arizona attorney general sued Google for allegedly tracking users’ locations without permission. The case appears to hinge on whether Android menus were too confusing for the average person to navigate. (Tony Romm / Washington Post)

India’s antitrust body is looking into allegations that Google abused its market position to unfairly promote its mobile payments app. The complaint alleges Google hurt competition by prominently displaying Google Pay inside the Android app store in India. (Aditya Kalra and Aditi Shah / Reuters)

Google sent 1,755 warnings to users whose accounts were targets of government-backed attackers last month. The company highlighted new activity from “hack-for-hire” firms, many based in India, that have been creating Gmail accounts spoofing the World Health Organization. (Google)

Switzerland is now piloting a COVID-19 contact tracing app that uses the Apple-Google framework. The app, SwissCovid, is the first to put the Apple-Google model to use. (Christine Fisher / Engadget)

Silicon Valley’s billionaire Democrats are spending tens of millions of dollars to help Joe Biden catch up to President Trump’s lead on digital campaigning. These billionaires’ arsenals are funding everything from nerdy political science experiments to divisive partisan news sites to rivalrous attempts to overhaul the party’s beleaguered data file. (Theodore Schleifer / Recode)

A war has broken out on Reddit regarding how content is moderated. The feud started when a list of “PowerMods” began circulating, with the title “92 of top 500 subreddits are controlled by just 4 people.” (David Pierce / Protocol)

Twitter’s anti-porn filters blocked the name of Boris Johnson’s chief adviser, Dominic Cummings, from trending on the platform. Cummings has dominated British news for almost a week after coming under fire for traveling across the country during the coronavirus lockdown. It’s nice to read a truly funny story about content moderation for a change. (Alex Hern / The Guardian)

Industry

TikTok’s parent ByteDance generated more than $3 billion of net profit last year. The company’s revenue more than doubled from the year before, to $17 billion, propelled by high growth in user traffic. Here are Katie Roof and Zheping Huang at Bloomberg:

The company owes much of its success to TikTok, now the online repository of choice for lip-synching and dance videos by American teens. The ambitious company is also pushing aggressively into a plethora of new arenas from gaming and search to music. ByteDance could fetch a valuation of between $150 billion and $180 billion in an initial public offering, a premium relative to sales of as much as 20% to social media giant Tencent thanks to a larger global footprint and burgeoning games business, estimated Ke Yan, Singapore-based analyst with DZT Research.

Facebook’s experimental app division has a new product out today called Collab. The app lets users create short music videos using other people’s posts, which sounds a lot like TikTok. (Nick Statt / The Verge)

Facebook’s annual shareholder meeting was held virtually on Wednesday. One item on the agenda was a call for Mark Zuckerberg to relinquish his position as chair of Facebook’s board of directors, and be replaced by an independent figure. Somehow it failed! (Rob Price / Business Insider)

Instagram will start sharing revenue with creators for the first time, through ads in IGTV and badges that viewers can purchase on Instagram Live. The company has hinted for more than a year that ads would come to IGTV, often saying the long-form video offering would be the most likely place it’d first pay creators. Any time creators develop a direct relationship with their audience and profit from it, I get super happy. (Ashley Carman / The Verge)

Google is rolling out a series of updates aimed at helping local businesses adapt to the COVID-19 pandemic. The company is expanding a product that allows businesses to sell gift cards during government-mandated shutdowns. It’s also allowing restaurants to point customers who want to order through third-party apps to their preferred delivery partners. (Sarah Perez / TechCrunch)

About half of remote workers in the US report feeling less connected to their company, feeling more stressed in ways that negatively impact their work, and working more hours from home. The downsides could become more pronounced as more companies extend remote working deadlines beyond the coronavirus pandemic. (Kim Hart / Axios)

Things to do

Stuff to occupy you online during the quarantine.

Watch HBO Max. It’s here, and it’s totally diluting the HBO brand!

Turn your Fuji camera into a high-end webcam with this new software. It works over USB.

Subscribe to a new newsletter from Google walkout organizer Claire Stapleton. Tech Support promises to offer “existential advice for today’s tech worker.”

Replace your Zoom calls with a Sims-style virtual hangout. It’s a new twist on video chat from a company called Teooh.

Call 1-775-HOT-VINE to hear audio clips of famous Vines. I just did and it was extremely charming.

Those good tweets ...

Talk to us

Send us tips, comments, questions, and polarizing Facebook posts: casey@theverge.com and zoe@theverge.com.