Programming note: The Interface will be off Monday for Labor Day.
I. The announcements
After months of deliberations, Facebook gave its answer to the critics who have called for it to put new restrictions on political advertising. The company said it would not accept new political ads in the seven days leading up to the Nov. 3 US presidential election, but would allow those that had already been approved to continue running. The move was framed as a compromise: campaigns can continue to use Facebook for get-out-the-vote efforts through Election Day, but they’ll lose the ability to test new messages. As a result, it might be harder for candidates to spread misinformation in the final days of the campaign.
There’s a lot to say about the limits and implications of this approach. But there’s also much more to Facebook’s announcement, which included a broad set of measures intended to limit the ability of, uh, someone to spread lies about election safety, voting procedures, and the legitimacy of the outcome.
I covered the announcements at The Verge, and it’s worth reading all of them. The other big highlights include limiting forwarding in Messenger to five people per message; promoting accurate voter information at the top of Facebook and Instagram through the election; providing live, official election results through a partnership with Reuters; and adding labels to posts that attempt to declare victory before the results are official, or try to cast doubt on the outcome.
Another notable dimension of the announcements is the way they were announced. They came not from the corporate blog but from CEO Mark Zuckerberg himself, in a Facebook post. And he struck an unusually direct note of concern:
“The US elections are just two months away, and with COVID-19 affecting communities across the country, I’m concerned about the challenges people could face when voting,” he wrote. “I’m also worried that with our nation so divided and election results potentially taking days or even weeks to be finalized, there could be an increased risk of civil unrest across the country.”
“This election is not going to be business as usual. We all have a responsibility to protect our democracy.”
II. The reaction
How far will Facebook’s announcements this week go to, as Zuckerberg says, protect our democracy?
I think the moves will go a long way toward promoting voter registration and turnout. The Reuters partnership will ensure that a huge number of Americans see accurate, real-time information about the vote count. And the various policies announced to remove or label problematic posts could inject a welcome dose of reality into the more unhinged conspiracy theories about the election that are now swirling in the fever swamps.
At the same time, as Steve Kovach notes at CNBC, the policies announced Thursday have some obvious limitations. Misinformation in political advertising can continue right up until Election Day, so long as it has been running for at least a week. By the time the new restrictions kick in, mail-in voting will have been underway for weeks. And no label will be able to stop Trump from declaring that he has won, loudly and repeatedly.
Meanwhile, on Twitter, Zeynep Tufekci raises the larger point always lurking in the background of these discussions. “There are the details,” she wrote, “and there is this: Mark Zuckerberg, alone, gets to set key rules — with significant consequences — for one of the most important elections in recent history. That should not be lost in the dust of who these changes will hurt or benefit.”
I think all of that is fair, and yet I’ve struggled to land on an overall point of view on Facebook’s approach to regulating political speech. The question I keep coming back to is: what exactly is Facebook trying to solve for?
III. The solve
By now, almost everyone accepts that social platforms have a role to play in protecting our democracy — as do average citizens, journalists, and the government itself. In 2016, all four of those groups failed in various ways, and we’ve spent much of the intervening period litigating who was most at fault, and what ought to be done about it.
One way to view Facebook’s announcements on Thursday is as an acknowledgement that when it comes to protecting our democracy, in 2020 the US government cannot be counted upon. Just this week, the president effectively told voters in North Carolina to vote twice — sending in a mail-in ballot, then showing up at the polls to vote again. He has sought to sabotage the post office to make voting by mail more difficult. He won’t commit to leaving office should he lose the election — and “jokes” about never leaving office, period.
None of these are issues a tech platform can solve. But because of their perceived power, the platforms are under strong pressure to take decisive action in response. And they are taking it seriously, Axios reported today, staging a series of war-game exercises to prepare for various election disaster scenarios:
Facebook, Google, Twitter and Reddit are holding regular meetings with one another, with federal law enforcement — and with intelligence agencies — to discuss potential threats to election integrity.
Between March 1 and Aug. 1, Twitter practiced its response to scenarios including foreign interference, leaks of hacked materials and uncertainty following Election Day.
Meanwhile, the president continues to use the platforms in transparently anti-democratic ways. On Thursday, while still under criticism for his remarks about North Carolina, he repeated his instructions to all voters that they should both mail in a ballot and show up to vote in person. The post appeared both on Twitter and on Facebook, and both companies left it up. Twitter placed it under a warning label after determining the post could lead people to vote twice, and also prevented people from retweeting it or replying. Facebook added a label underneath saying that mail-in voting has been historically trustworthy.
The basic idea here is to allow for a maximum of political speech, and to answer the most problematic speech with more speech, in the form of labels. The platforms have offered no positive conception of what political speech should be or do there. Instead, they police it as beat cops, running off the worst posts while writing speeding tickets for lesser offenses.
The idea rests upon a foundational belief that both parties are good-faith actors when it comes to political speech, all available evidence to the contrary. And it’s this, more than anything else, that has resulted in Facebook’s strange contortions on the subject. As the press critic and New York University professor Jay Rosen put it:
“The media ecosystem around one of our two major parties runs on made up claims and conspiracy theories. Facebook has institutionally committed itself to denial of this fact. It also says it has rules against spreading misinformation. The two commitments are in conflict.”
It’s in such a world that Facebook can make a host of changes to its policies in response to the actions, both actual and predicted, of President Trump, without ever saying the words “President Trump” at all. Company executives clearly feel a moral obligation to act against a grave threat to American democracy — but they cannot bring themselves to name the threat. This posture of impartiality, which Rosen calls “the view from nowhere,” has long been the default stance of the American media.
But it has been in decline for some time now, and for good reason. When you commit yourself to the view from nowhere, you will find, over and over again, that you are being played.
It’s in this sense that the steps Facebook is taking today can be viewed as positive, and also in some larger sense as being beside the point. If you are working at a big social platform and find yourself concerned about the degree to which it is enabling fascism, it’s not enough to simply adjust the boundaries of discourse.
You have to do something about the fascism.
IV. A parable
A headline from Wednesday evening in The Daily Beast: “Facebook’s Internal Black Lives Matter Debate Got So Bad Zuckerberg Had to Step In.”
The story, by Maxwell Tani and Spencer Ackerman, recounts a controversy that broke out inside the company when one of its 50,000 employees posted a short essay to its internal Workplace forum titled “In Support of Law Enforcement and Black Lives.” The essay, which was posted on Monday, sought to defend police officers in the wake of Wisconsin cops shooting Jacob Blake seven times in the back and leaving him paralyzed. Tani and Ackerman write:
The post called into question the notion of racially disparate outcomes in the criminal-justice system, argued that racism is not a serious motivation in police shootings, railed against “critical race theory,” and claimed narratives about police violence often “conveniently leave out” other factors, including whether the victim was under the influence of drugs or complied with officers’ directives. [...]
“My heart goes out to the Blake family,” the staffer wrote on Friday. “It also goes out to the well-intentioned law enforcement officers who have been victimized by society’s conformity to a lie.” The staffer continued: “What if racial, economic, crime, and incarceration gaps cannot close without addressing personal responsibility and adherence to the law?”
On enterprise Facebook, just as it might have on consumer Facebook, the controversial post generated much outrage and engagement. It bubbled to the top of the feeds, and inspired many anguished comments. Its polite, just-asking-questions tone, coupled with clear endorsement of a system that has terrorized Black Americans for centuries, put the company’s commitment to free speech in the workplace to the test. If left unchecked, the post threatened to undermine faith in company leadership.
On consumer Facebook, the post would have stayed up even if it had been reported. But on enterprise Facebook, the post occasioned some reflection. Zuckerberg wrote a note affirming that “systemic racism is real,” and chided “some” employees for not considering the full weight of their words on their Black colleagues. (I obtained a copy.) In response, he said, Facebook would soon move “charged topics” to “dedicated spaces” within Workplace, and added that these forums would have “clear rules and strong moderation.”
“You won’t be able to discuss highly charged content broadly in open groups,” he said. “As you know, we deeply value expression and open discussion, but I don’t believe people working here should have to be confronted with divisive conversations while they’re trying to work.”
This is a view from somewhere. It is a positive conception of how a discussion ought to take place. Not just what words or symbols are allowed or disallowed, but how it should be constructed. I have no doubt it will make Facebook a better place to work. And I wonder whether the version of Facebook the rest of us use would not benefit from similarly decisive intervention.
Today in news that could affect public perception of the big tech platforms.
🔼 Trending up: Google released a dataset of search trends for researchers to study the link between symptom-related searches and the spread of COVID-19. The goal is to help researchers understand where new outbreaks might occur. (Google)
🔼 Trending up: Pinterest announced it will no longer show ads to users when they search for elections-related terms on the platform. The company also said employees will get paid time off to vote. (Megan Graham / CNBC)
⭐ The Justice Department plans to bring an antitrust case against Google as soon as this month. Attorney General William Barr overruled lawyers who said they needed more time to build a strong case against the tech giant, a move that underscored fears that the investigation has been tainted by politics. Katie Benner and Cecilia Kang at The New York Times have the story:
A coalition of 50 states and territories support antitrust action against Google, a reflection of the broad bipartisan support that a Justice Department case might have. But state attorneys general conducting their own investigations into the company are split on how to move forward, with Democrats perceived by Republicans as slow-walking the work so that cases can be brought under a potential Biden administration, and Democrats accusing Republicans of rushing it out under Mr. Trump. That disagreement could limit the number of states that join a Justice Department lawsuit and imperil the bipartisan nature of the investigation.
Some lawyers in the department worry that Mr. Barr’s determination to bring a complaint this month could weaken their case and ultimately strengthen Google’s hand, according to interviews with 15 lawyers who worked on the case or were briefed on the department’s strategy. They asked not to be named for fear of retribution.
Facebook removed a video of the president’s remarks about North Carolina, citing its policies against promoting voter fraud. The company said people can share it if they do so to correct the record. (Ashley Gold / Axios)
As part of Facebook’s study on how social media impacts democracy, the company is paying some users to log off of its products ahead of the 2020 US presidential election. The payments range between $10 and $20 per week: some users will be asked to deactivate for one week, while others could be asked to leave the platform for up to six weeks total. (Makena Kelly / The Verge)
The Department of Homeland Security stopped the publication of a memo that described Russian attempts to denigrate Joe Biden’s mental health. The unusual move has prompted new scrutiny of political influence at the department. (Zolan Kanno-Youngs / The New York Times)
Mark Zuckerberg said the company removed a militia event where people discussed gathering in Kenosha, Wisconsin, to shoot and kill protesters. But in fact, the militia took down the event themselves the day after two people were killed. (Ryan Mac and Craig Silverman / BuzzFeed)
Activists are calling on Facebook to ban armed event listings in the wake of the Kenosha shooting. They also called for a broad enhancement of Facebook’s moderation against extremism, including more automated tools for proactive enforcement and better systems for detecting event pages that promote violence. (Russell Brandom / The Verge)
Facebook banned a member of India’s ruling party for violating its policies against hate speech. The move reversed an earlier decision, led by Facebook policy executive Ankhi Das, not to punish the politician. Das had argued that punishing him could hurt the company’s business interests in the country. (Newley Purnell and Rajesh Roy / The Wall Street Journal)
A Facebook video of an assault led to the arrest of seven men after it was found by the victim’s mother. The video showed the men assaulting the 16-year-old while she was unconscious. (Michael Levenson / The New York Times)
The Lafayette city government is suing the man behind a series of satirical antifa Facebook events that police responded to this summer. The lawsuit says the hoaxes cost taxpayers a considerable amount. The man said he’s using satire as a form of activism and protest. (Megan Wyatt / The Acadiana Advocate)
China asserted its power over the TikTok sale, saying it has the right to approve or block the sale of technology abroad. The government’s decision to add several artificial intelligence features to a list of export-restricted technologies has thrown a wrench in the TikTok deal. (Bloomberg)
SoftBank is starting to put together a bid for TikTok in India. The firm is said to be assembling a group of investors, and is actively looking for local partners. (Pavel Alpeyev, Giles Turner and Sarah McBride / Bloomberg)
Amazon Flex drivers say they are not surprised the company has been spying on them in private Facebook groups. “...We are watched to prevent any mass resistance, which could bother Amazon,” said the admin of one group. Amazon has now ended the social media monitoring program. (Lauren Kaori Gurley / Vice)
⭐ Apple will delay the enforcement of a controversial change to its mobile operating system that would upend how ads are targeted on iPhones and iPads. The change in iOS 14, the next version of Apple’s mobile software, will require developers to ask users to share their device’s unique identifier for advertising purposes through a prompt. Here’s Alex Heath at The Information:
Apple has positioned the new prompt as a pro-privacy move that puts users in control of their data. But the proposed change has caused panic among marketers and developers that rely on targeted ads to reach consumers. Mobile developers and advertisers who spoke to The Information said they’ve had little time to prepare for the change, announced in June of this year, and that Apple hasn’t provided a clear workaround that lets them target their ads without the IDFA.
After this story was published Thursday, Apple confirmed that it would delay the enforcement of its IDFA prompt until 2021. Developers will still be able to ask users for permission to share their IDFA when iOS 14 is released this fall, though asking users through the prompt won’t be mandatory.
Amid the uncertainty around TikTok, Snapchat had its largest month of first-time downloads since May 2019 in August. The app saw approximately 28.5 million new installs last month. (Sarah Perez / TechCrunch)
Instagram launched a separate tab for Reels in India, two months after launching the feature. Will this help the feature take off globally? (Anumeha Chaturvedi / The Economic Times)
Facebook released details about an experiment on “perceptual superpowers” — AR systems that figure out what you’re trying to hear, then amplify it and dampen background noise. The project shows how sound could play a major role in augmented reality. (Adi Robertson / The Verge)
Facebook’s streaming platform, Facebook Watch, has reached 1.25 billion monthly users. If you count watching one minute of video as a user. Which, come on. (Todd Spangler / Variety)
Facebook Watch introduced a new feature called “Your Topics” that will allow you to further personalize your feed. Sorry, I only watch Watch one minute per month. (Sarah Perez / TechCrunch)
Talk to us
Send us tips, comments, questions, and the most controversial post inside Facebook right now: email@example.com and firstname.lastname@example.org.