If you’ve ever wondered about the value of having multiple social networks competing to develop the best products and policies, Wednesday offered us a clear example.
Facebook is now three weeks into a controversy over whether (and how) it ought to regulate political ads, and the lies those ads will inevitably sometimes contain. Lots of folks (including some Facebook employees) have proposed ideas, including banning political ads from the platform altogether.
Today, Jack Dorsey took that suggestion — for Twitter. In a thoughtful thread, Dorsey laid out his case for banning both issue ads and campaign ads. Notably, he homed in on two things that make social ads unique: their speed, and the way they can target small niche communities at scale.
Internet political ads present entirely new challenges to civic discourse: machine learning-based optimization of messaging and micro-targeting, unchecked misleading information, and deep fakes. All at increasing velocity, sophistication, and overwhelming scale.— jack (@jack) October 30, 2019
Twitter was not the first platform to ban political ads. It was preceded by LinkedIn, Pinterest, and TikTok, among others. Each of those sites made the calculation that whatever benefits are to be gained from politicians paying to reach voters, they are outweighed by the drawbacks.
Those services are all significant in their own way. But none is a true hotbed for political commentary. Twitter, on the other hand, is the beating heart of political discourse in the United States, and was swarmed by more than 50,000 Russian accounts in 2016 as part of that country’s interference with the US presidential election.
Some have noted that Twitter had little to lose in eliminating political ads, since it makes little revenue from them — less than $3 million. Others pointed out that political ads on Twitter have never seemed particularly effective at influencing voters, raising questions about whether banning them will have any practical effect on the election.
Then there was the timing of the announcement, which came just as Mark Zuckerberg — who gently mocked Twitter’s relatively small investment in platform integrity efforts in the audio we published this month — was about to begin Facebook’s quarterly earnings call.
Still, the reaction on Twitter was hugely positive. (This may have been the first day it was safe for Dorsey to check his mentions in several years.) Several prominent Democrats praised the move, including Joe Biden, Sen. Mark Warner, and Rep. Alexandria Ocasio-Cortez.
Yael Eisenstat, who once led Facebook’s elections integrity operations team for political advertising, tweeted:
FINALLY, a CEO willing to admit this is not about free speech, it is about profiting off of amplifying lies and the dangerous targeting tools that allow this anti-democratic b.s. to infect our society. Read the entire thread. @Twitter and @jack putting democracy ahead of profit. https://t.co/iwLeve0Gjo— Yael Eisenstat (@YaelEisenstat) October 30, 2019
One person who is not happy about Twitter’s new policy is Brad Parscale, the mastermind behind President Trump’s successful digital advertising strategy in 2016. Parscale said the move would “silence conservatives.”
“Twitter just walked away from hundreds of millions of dollars of potential revenue, a very dumb decision for their stockholders,” Parscale said in a statement. “Will Twitter also be stopping ads from biased liberal media outlets who will now run unchecked as they buy obvious political content meant to attack Republicans? This is yet another attempt to silence conservatives, since Twitter knows President Trump has the most sophisticated online program ever known.”
(Again, based on Twitter’s own figures, it is walking away from at most $3 million.)
A more thoughtful critique came from Jessica Alter, who leads a community for progressive and centrist campaigns. She argued that banning political ads would disadvantage lesser-known and nontraditional candidates by making it harder for them to break through.
3/ This favors people who 1. can pay for these less cost effective forms of media and 2. those who have spent time building a twitter following. As an example of someone like this see Donald J. Trump— Jessica Alter (@jalter) October 30, 2019
And there’s evidence that social media ads help unknown candidates stand out — see this paper from political scientists who found that Facebook ads prompted many down-ballot candidates to run their first advertising campaigns. (They also found that Facebook ads tended to be less negative than TV ads.)
Alter and others argued that the money once spent on political ads on Twitter would simply go dark, with candidates secretly paying influencers to promote them via (still allowed) organic tweets.
Which seems likely enough. To me, the biggest issue with this and any other Twitter policy is enforcement. Twitter has a long history of announcing changes it then struggles to implement, and banning any whiff of politics from the advertising platform is going to give the company fits.
We know this because it gives Facebook fits. The company requires political advertisers to register, and any time it asks borderline advertisers — a recycling program, say, or a public health campaign for PrEP — to verify their name and location, those advertisers howl that they have been unjustly banned. Imagine how loudly they’ll howl when they actually are banned, rather than simply asked to fill out some paperwork.
I expect Twitter to have great trouble distinguishing what is an “issue ad” from what is not. Expect to see many false positives, and many false negatives. And depending on who is affected, and how often, you might even expect to see Congressional hearings over it.
Zuckerberg, during the earnings call, dug his heels in and said the company would continue to sell political ads and not (for the most part) fact-check them. “I believe that the better approach is to work to increase transparency,” he said. “Ads on Facebook are already more transparent than anywhere else.”
On some level, the disagreement between Zuckerberg and Dorsey is just philosophical. Some people will want to permit more speech, whatever the consequences. Others will think they can build a safer community with less. One reason to encourage competition among tech platforms is to provide us with choices.
At the same time, this is also an ongoing political fight, and Zuckerberg may ultimately have to reconsider his approach. Not so much because of pressure from his employees — only around 250 of them signed that letter, out of a global base of 35,000 — but because of pressure from politicians and the public.
The relevant test case here is Adriel Hampton, a San Francisco activist and marketing firm owner who registered to run for governor of California this week. As Donie O’Sullivan reports at CNN, Hampton had but one aim: “Hampton told CNN Business that he will use his new status as a candidate to run false ads on Facebook about President Trump, Facebook CEO Mark Zuckerberg, and other Facebook executives. ... His goal is to force Facebook to stop allowing politicians to run false ads.”
Under the policies it articulated this month, Facebook should let those ads stay up in the name of free speech and political neutrality. But then Facebook surprised me late Tuesday by saying it would stop allowing politicians to run false ads — but only for this one politician.
“This person has made clear he registered as a candidate to get around our policies, so his content, including ads, will continue to be eligible for third-party fact-checking,” a Facebook spokesman said in an email to Recode.
So, if you’re keeping score, you’re now allowed to lie in political ads on Facebook, unless you admit that you’re lying. I think. I asked Facebook if someone would walk me through their logic here today, but I didn’t hear back.
It remains to be seen whether Twitter can live up to the promise it made to the public today. But in some important quarters, it seems to have notched a moral victory over its longtime rival. Before yesterday, no matter what you thought about Facebook’s policy on political ads, you at least had to admit that the company’s position was coherent. As of Tuesday evening, that was no longer the case.
Today in news that could affect public perception of the big tech platforms.
Trending down: TikTok users of color say they are underrepresented on the app’s For You Page. They say the app’s most popular faces are consistently white.
⭐ Mark Zuckerberg defended Facebook’s acquisition of Instagram amid the ongoing US antitrust probe. The CEO told investors that the photo-sharing app wasn’t a true competitor in 2012, and that it only grew into what it is today because of Facebook’s resources. David McLaughlin at Bloomberg has more:
The takeover of Instagram, which now has 1 billion monthly active users, is seen by some as a deal that shouldn’t have been allowed by the FTC because Instagram posed a real threat to Facebook’s dominance in social media. Facebook will need to show it wasn’t. That’s the argument Zuckerberg outlined on the earnings call. Instagram only had 30 million users at the time and a lot of other competitors, he said.
The FTC knew this at the time, Zuckerberg said.
“I set a goal that we hoped that one day Instagram might reach 100 million people, and I know that that seems quaint today compared to how well it’s done, but remember that a lot of the other services that were Instagram peers and were growing quickly at the time” don’t exist anymore, he said. “The FTC had all this context when they made this decision in 2012.”
Ohio Attorney General Dave Yost said on stage at a Bloomberg event that he isn’t sure breaking up Facebook is the right way to fix Big Tech. “It’s too early to talk about the remedy when you haven’t identified the problem,” he added.
Also: A guide to the antitrust battle’s biggest players, including what they’ve done in the past and what they hope to get out of the fight to break up Big Tech.
Instagram head Adam Mosseri said he’s worried about Facebook’s ability to navigate the 2020 election. Speaking on The Bill Simmons Podcast, the executive acknowledged the company has a long way to go to defend itself against people who want to misuse the platform. (Salvador Rodriguez / CNBC)
Facebook agreed to pay a $644,000 fine to end a UK privacy probe in the wake of the Cambridge Analytica scandal. The company had originally sought to appeal the fine, but decided to settle the case without any admission of guilt. (Stephanie Bodoni / Bloomberg)
Evelyn Douek, a doctoral student at Harvard Law School who writes frequently on content moderation issues, argues Facebook should regulate political ads on the platform. She’s also “cautiously optimistic” about the company’s Oversight Board. (Mathew Ingram and Evelyn Douek / Columbia Journalism Review)
Russia has been testing new disinformation tactics in massive Facebook campaigns in parts of Africa, as part of an evolution of its manipulation techniques ahead of the 2020 US election. Facebook removed three Russian-backed influence networks aimed at Mozambique, Cameroon, Sudan and Libya. (Davey Alba and Sheera Frenkel / The New York Times)
Lithuanians are using software developed in partnership with Google to fight back against fake news — a growing problem in a country besieged by Russian propaganda. The tool tracks disinformation campaigns and tries to pin down their point of origin. (The Economist)
⭐ A Facebook content moderation vendor is exiting the business, following two Verge investigations into working conditions at the company. The firm hired thousands of moderators around the world to remove hate speech and terrorism from platforms like Facebook, Google, and Twitter, says (me!) Casey Newton:
In February, The Verge published an investigation into working conditions at the company’s site in Phoenix. Moderators at the site described being diagnosed with post-traumatic stress disorder after being subjected to a daily onslaught of graphic and disturbing images. Others said they had come to embrace fringe viewpoints after seeing videos about conspiracy theories on a regular basis. Multiple employees reported fearing for their safety after being threatened by coworkers.
A follow-up report in June focused on a site in Tampa, Florida, where moderators broke their non-disclosure agreements to describe a pattern of mistreatment by managers. They described working in offices that were often filthy, and where cases of sexual harassment had resulted in multiple complaints being filed with the Equal Employment Opportunity Commission.
The Phoenix and Tampa sites will both close after March 1st, Facebook told The Verge in a statement. “Cognizant and Facebook are committed to a smooth transition during this period of change,” a Facebook spokesman said.
Facebook had a better-than-expected quarter, with daily active users growing 9 percent to 1.62 billion and revenue reaching $17.7 billion. Mark Zuckerberg took the earnings call as an opportunity to defend the company’s policy on political ads. (Nick Statt / The Verge)
As TikTok’s growth slows, the company faces a challenger in the form of a Chinese short-form video-sharing app called Likee. Launched two years ago, Likee now has 81 million monthly users, making it the second most popular video-sharing app after TikTok. (Yunan Zhang / The Information)
YouTuber Lindsay Ellis is fighting a copyright claim from Universal Music Group (UMG) that she said put one of her brand sponsorships in jeopardy. Ellis is arguing that it’s an “extremely clear-cut example of fair use” that YouTube is choosing to ignore. (Amanda Perelli / Business Insider)
YouTube creators may have cracked the company’s monetization algorithm by reverse engineering the P-score used to determine which videos get access to high-value advertising opportunities. Creators say they now have proof that the platform prioritizes family-friendly videos from mainstream outlets over work from independent creators. (Chris Stokel-Walker / FFWD)
The average time kids spend watching online videos, mostly on YouTube, has doubled in four years. New research from the nonprofit Common Sense Media says it’s gone up to about an hour a day. (Rachel Siegel / The Washington Post)
Thanks to Ben Collins for pointing us to the very funny response of Russian state media to Twitter’s ban on political ads.