Facebook’s revised political advertising policy doubles down on division

Targeted advertisements will remain on the platform — and the polarization of the electorate is likely to accelerate

Photo by Amelia Holowaty Krales

In October, Facebook made the controversial decision to exempt most political ads from fact-checking. The announcement met with a swift backlash, particularly among leading Democratic candidates for president. As criticism mounted, Facebook began to hint that it would further refine its policy to address lawmakers’ concerns. One change that seemed likely was to limit the ability of candidates to use the company’s sophisticated targeting tools, particularly after hundreds of employees wrote an open letter to Mark Zuckerberg asking for it.

On Thursday, Facebook unveiled the refinements to its policy that it had been promising. But restrictions on targeting were nowhere to be found. Instead, the company doubled down on its current policy, and said the only major change in 2020 would be to allow users to see “fewer” ads. (Fewer than what? It didn’t say.) Here’s Rob Leathern, the company’s director of product management for ads, in the blog post:

There has been much debate in recent months about political advertising online and the different approaches that companies have chosen to take. While Twitter has chosen to block political ads and Google has chosen to limit the targeting of political ads, we are choosing to expand transparency and give more controls to people when it comes to political ads. [...]

We recognize this is an issue that has provoked much public discussion — including much criticism of Facebook’s position. We are not deaf to that and will continue to work with regulators and policy makers in our ongoing efforts to help protect elections.

The move is rooted in ideas of personal responsibility — if you want to see fewer political ads and remove yourself from campaigns, that’s on you. In practice, though, it seems unlikely that many Facebook users would take advantage of the semi-opt-out, which is due to be released sometime before April. When’s the last time you visited your ad preferences dashboard?

Among the commentators I follow, condemnation of Facebook’s move was more or less universal. Elizabeth Warren hated it (and took a dig at the Teen Vogue imbroglio while she was at it). Joe Biden hated it. Ellen Weintraub of the Federal Election Commission hated it. Barbra Streisand hated it. And the list goes on.

Republicans, who the conventional wisdom holds will benefit most from the move, were largely silent on the decision. (Ben Shapiro was a minor exception; and here’s a Washington Post columnist who likes the policy.) Still, it’s safe to assume that President Donald Trump, whose campaign made great use of targeting capabilities during the 2016 election, would have raged had Facebook taken those tools away. And given that Facebook is the subject of at least four ongoing federal investigations, it wouldn’t be surprising if the company developed this policy with appeasement in mind.

At the same time, Republicans aren’t the sole beneficiary of Facebook’s announcement. As Leathern noted, the Democratic National Committee opposed the elimination of targeting tools. There is also some evidence that Facebook tools have prompted more candidates overall to buy ads, increasing the amount of paid political discussion generally.

I’ve come around to the idea that microtargeting ought to be banned, because it accelerates the polarization and tribalism that are transforming the country. Let politicians craft divisive messages to ever-smaller splinters of the populace and they probably will. The media will write about the most egregious examples of misinformation and hypocrisy that this practice enables, but it seems likely that much of it will go unchallenged. Meanwhile, sorting fact from fiction will become even harder for the average voter. The negatives here seem to far outweigh any benefits.

Andrew “Boz” Bosworth, a top Facebook executive who ran the ad platform during the 2016 election, called polarization “the real disaster” in an internal post made public this week by the New York Times. Bosworth wrote:

What happens when you see 26% more content from people you don’t agree with? Does it help you empathize with them as everyone has been suggesting? Nope. It makes you dislike them even more. This is also easy to prove with a thought experiment: whatever your political leaning, think of a publication from the other side that you despise. When you read an article from that outlet, perhaps shared by an uncle or nephew, does it make you rethink your values? Or does it make you retreat further into the conviction of your own correctness? If you answered the former, congratulations you are a better person than I am. Every time I read something from Breitbart I get 10% more liberal.

A world in which politicians are able to advertise only to large groups of people — as they do on broadcast television, for example — is one in which they have incentives to promote more unifying messages. But if they can slice and dice the electorate however they like, those incentives are much weaker.

Meanwhile, misleading political ads will continue to go viral, prompting a fresh news cycle whenever a candidate’s lie crosses a few hundred thousand impressions. In each case, calls for Facebook to revisit its policies will be renewed, and the beleaguered PR team will dig up old quotes from Leathern’s post and email them to reporters by way of explanation.

And make no mistake: Facebook executives already know all this, and have decided that it beats the alternative. The company is committing to 11 full months of getting kicked in the teeth. It may well be the company’s smartest move politically. But it would seem to augur very poorly for our politics.

The Ratio

Today in news that could affect public perception of the big tech platforms.

🔼 Trending up: Microsoft released a new tool that scans online chats for people seeking to sexually exploit children. It’s part of a broader push by the tech industry to crack down on the dangers facing kids online, amid pressure from lawmakers.

🔽 Trending down: Anti-vaxxers continue to circumvent Facebook’s ban against ads that contain vaccine misinformation. “Facebook does not have a policy that bans advertising on the basis that it expresses opposition to vaccines,” a Facebook spokesperson said. OK!


House lawmakers introduced a new bill that would give parents the right to delete data that companies have collected about their children and extend the Children’s Online Privacy Protection Act to older minors. The Verge’s Makena Kelly explains the significance:

The bill would make big updates to the law that’s already brought enormous changes to YouTube and TikTok and infuriated creators. In its settlement with YouTube, the FTC fined the company over $170 million and prohibited the company from running targeted ads on videos the agency could deem child-friendly. Many critics argued that this settlement didn’t go far enough, and if the PROTECT Kids Act was approved, YouTube and other online platforms would be under a lot more pressure than they already are to ensure children’s data remains safe online.

Under current law, COPPA only prohibits platforms from collecting the data of children under the age of 13. Under the PROTECT Kids Act, that age would be increased to 16. COPPA also doesn’t include precise geolocation and biometric information as part of its definition of “personal information.” This House bill would ban platforms from collecting those sensitive pieces of information from children as well. And if a parent wanted to remove their children’s data from a website, the company would have to provide some kind of delete feature for them to use.

Here are 10 things tech platforms can do to create more election security before November. The list, which includes contributions from Facebook co-founder Chris Hughes, offers a refreshingly concrete take on the ongoing debate over big tech and election manipulation. (John Borthwick / Medium)

Iranian teenagers are defacing US websites in protest of the Trump administration killing Soleimani. Some of the hackers say they do not work for the Iranian government. (Kevin Collier / The Verge)

A pro-Iran Instagram campaign targeted the Trump family after the funeral of Iranian general Qassem Soleimani. The campaign consisted of tagging the president’s family, especially Ivanka and Melania, in images ranging from the Iranian flag to a beheaded Donald Trump. (Jane Lytvynenko and Jeremy Singer-Vine / BuzzFeed)

Android users in the EU will soon be able to choose their default search engine from a list of four options, including Google, when setting up their new phones or tablets. The changes follow a $5 billion fine from EU regulators that found Google had used its mobile operating system to hurt rivals. (Lauren Feiner / CNBC)

Many politicians have been hesitant to create profiles on TikTok, the video looping app plagued by national security concerns. The vacuum has allowed impersonators to roam free. The problem is compounded by the fact that TikTok lacks a robust verification system, which makes identifying and taking down such accounts difficult. (Maria Jose Valero and Yueqi Yang / Bloomberg)

Reddit updated its impersonation policy ahead of the 2020 election. The new policy covers fake articles misleadingly attributed to real journalists, forged election communications purporting to come from real government agencies, and scammy domains posing as those of a particular news outlet or politician.

YouTube’s algorithm isn’t the only thing responsible for making the platform a far-right propaganda machine, researcher Becca Lewis argues. The company’s celebrity culture and community dynamics play a major role in the amplification of far-right content. (Becca Lewis / Medium)

A judge in Brazil ruled that a film made by a YouTube comedy group that depicts Jesus as gay must be temporarily removed from Netflix. Two million people signed a petition calling for the movie to be axed, and the production company was attacked with Molotov cocktails last month. And you thought Richard Jewell got bad reviews. (BBC)


Mark Zuckerberg is giving up on annual personal challenges. Instead, he wrote a more thematic list of goals for the next decade, which include a new private social platform, a decentralized payments platform, and new forms of community governance. Here’s how he framed the pivot:

This decade I’m going to take a longer term focus. Rather than having year-to-year challenges, I’ve tried to think about what I hope the world and my life will look in 2030 so I can make sure I’m focusing on those things. By then, if things go well, my daughter Max will be in high school, we’ll have the technology to feel truly present with another person no matter where they are, and scientific research will have helped cure and prevent enough diseases to extend our average life expectancy by another 2.5 years. 

I’m really glad to see this — as I argued here, the annual challenges had outlived their usefulness.

Meanwhile, here’s your content moderation story of the day, from David Gilbert at Vice. It centers on Facebook moderators in Europe.

One moderator who worked at CPL for 14 months in 2017 and 2018 told VICE News that he decided to leave the company when a manager sanctioned him while he was having a panic attack at his computer. He’d just found out that his elderly mother, who lived in a different country, had had a stroke and gone missing.

“On the day I had the most stress in the world, when I think I might lose my mother, my team leader, a 23-year-old without any previous experience, decided to put more pressure on me by saying that I might lose my job,” said the moderator, who did not want to be identified.

YouTube creator David Dobrik has gotten well over one million downloads on his new digital disposable camera app. YouTubers launching apps isn’t anything new, but the disposable camera idea is tied directly to David B’s brand, and it’s one that fans want to try for themselves. (Julia Alexander / The Verge)

Thanks to YouTuber MrBeast’s viral tree planting campaign, more than 21 million trees will be planted across the United States, Australia, Brazil, Canada, China, France, Haiti, Indonesia, Ireland, Madagascar, Mozambique, Nepal, and the United Kingdom. (Justine Calma / The Verge)

Amazon’s Twitch is facing mounting competition from Facebook. Facebook Gaming was the fastest-growing streaming platform (in terms of streaming hours watched) in December. (Olga Kharif / Bloomberg)

The Chinese version of TikTok, called Douyin, just hit 400 million daily active users. The news was revealed by parent company ByteDance in its annual report this week. (Manish Singh / TechCrunch)

And finally...

Text this number for an infinite feed of AI-generated feet

It’s a big day for foot fetishists, The Next Web reports:

The site relies on a generative adversarial network (GAN) to produce eerily realistic images of feet. Of course, since these are all the figment of a computer’s imagination, you’re bound to see some gruesome deformities. 

Have a wonderful evening and absolutely do not send us any AI-generated feet.

Talk to us

Send us tips, comments, questions, and microtargeted advertisements: