The order directs Commerce Secretary Wilbur Ross to request new regulations from the Federal Communications Commission to determine whether a social media company is acting “in good faith” to moderate content.
In theory, that could open the door to users suing social media platforms if they feel their posts are restricted inappropriately. But it could also make the companies more likely to take down false or misleading content rather than just add a disclaimer — the opposite of what Trump wants.
“That’s the irony of all this,” said Nathaniel Persily, a Stanford University law professor who studies technology and democracy. “The platforms will be much more aggressive in their automated filtering to go after content that could raise their legal liability.”
An executive order like this was first proposed last August, after the White House invited ordinary Americans to share stories about times when they felt they had been unfairly censored by social networks. According to Issie Lapowsky and Emily Birnbaum at Protocol, after Twitter labeled his tweets, Trump ordered his staff to “do something,” and so, as an unnamed official put it, “they picked this [order] off the shelf and essentially rammed it through.”
As to the merits of the president’s complaint: independent audits have found that social media posts by liberals and conservatives get similar levels of engagement, but conservatives have consistently made claims of discrimination anyway based on anecdotes. Even as Fox News consistently gets more engagement on Facebook than almost any other publisher, conservatives have come to define “bias” ever downward — so that it now covers any outcome they don’t like, whether it’s poor placement in search results, the removal of bot followers, or fact-checking.
That has led an increasing number of conservatives to sue social networks alleging infringements on their rights. What these cases have in common is that courts keep throwing them out, as Adi Robertson reported this week in The Verge, in a piece that surveys a large handful of recent attempts.
Interestingly, to the extent that there’s a nexus between Twitter and the First Amendment, courts have located it in the president’s own blocking of users — a practice they have ruled unconstitutional, Robertson writes:
People have been suing internet platforms for banning them since long before Trump took office; back in 2009, for instance, a PlayStation Network user sued on the grounds that Sony had created a “company town.” (The user lost.) Courts have overwhelmingly concluded that social media networks can ban, limit, or otherwise suppress users’ posts.
Conversely, government figures like Trump actually face strict rules about blocking users. Last year, a court required Trump to unblock Twitter accounts that had criticized him, determining that his Twitter account specifically — not the site as a whole — constituted a public space protected by the First Amendment. Other public officials have lost similar lawsuits from constituents.
That leads to the question of what practical effect today’s executive order will have, and Democratic members of Congress, legal scholars, academics, and most journalists I follow have been united in predicting it will not survive legal challenges. Here’s Russell Brandom with a good, concise explanation of the reasons in The Verge:
The biggest one is the First Amendment, which prevents the US government from limiting private speech. Telling Twitter how and when it can moderate is going to look an awful lot like limiting the company’s private speech — particularly when the inciting incident was about adding content rather than blocking it. In practical terms, it means that there is certain to be a court challenge alleging that the order is unconstitutional, which will hamstring any attempted action by the FCC.
That’s not the only legal problem, although I’m not sure we have room to run through all of them here. It’s not clear that the FCC has the authority to do any of this on the basis of an executive order. It’s really not clear that you can change 230 (which is part of a law, let’s remember) without congressional approval. And even if you could, all the usual concerns about changing 230 still apply. This wouldn’t just hit Twitter. The FCC would suddenly be in charge of YouTube, Craigslist, and every comments section on the internet.
Yesterday I noted here that while Trump’s bluster against social networks often results in a flurry of coverage, it hasn’t ever really gone much further. Well, this is going further. If the courts strike it down, as everyone expects, then in retrospect it will just look like more bluster. But if Trump finds a legal footing, then a lot of sites on the internet are going to be in trouble — and not just social networks, by the way.
And even if he doesn’t, we’ll still have seen a deeply disturbing encroachment of the federal government on actual free speech — part of a new surge in American authoritarianism that threatens our internet, our elections, and so much more. Today the president is focused on a handful of social networks that have challenged his power. But it still seems both obvious and necessary to say that if he wins, he won’t stop there.
Given that the executive order threatens every social platform equally, you might expect some level of solidarity in the corporate response. And trade groups to which the big platforms belong did put out statements condemning the order as unworkable nonsense.
But Mark Zuckerberg raised some eyebrows Wednesday night when he appeared on Fox News and appeared to draw a distinction between Facebook’s approach to moderating speech and Twitter’s:
“We have a different policy than, I think, Twitter on this,” Zuckerberg told “The Daily Briefing” in an interview scheduled to air in full on Thursday.
“I just believe strongly that Facebook shouldn’t be the arbiter of truth of everything that people say online,” he added. “Private companies probably shouldn’t be, especially these platform companies, shouldn’t be in the position of doing that.”
I don’t think that Facebook or internet platforms, in general, should be arbiters of truth. I think that’s kind of a dangerous line to get down to, in terms of deciding what is true and what isn’t. And I think political speech is one of the most sensitive parts of a democracy. And people should be able to see what politicians say. And there’s tons of scrutiny already—political speech is the most scrutinized speech already by a lot of the media. And I think that that will continue. [...]
You know, just because we don’t want to be determining what is true and false, you know, doesn’t mean that politicians or anyone else can just say whatever they want. And our policies are grounded in trying to give people as much voice as possible while saying, if you’re going to harm people in specific ways ... we will take them down no matter who says that.
Zuckerberg then mentions a case in which Facebook removed a post by the president of Brazil. “There are lines, and we will enforce them,” he said. “But I think in general you want to give as wide of a voice possible. And I think you want to have a special deference to political speech.”
The weird thing about all this is that, as best I can tell, Facebook and Twitter’s policies around election misinformation are essentially the same. Facebook introduced policies prohibiting voter suppression and intimidation in 2018, and expanded its guidelines in October. The policies prohibit:
Misrepresentation of the dates, locations, times and methods for voting or voter registration (e.g. “Vote by text!”); misrepresentation of who can vote, qualifications for voting, whether a vote will be counted and what information and/or materials must be provided in order to vote (e.g. “If you voted in the primary, your vote in the general election won’t count.”); and threats of violence relating to voting, voter registration or the outcome of an election.
Twitter adopted similar rules this month.
You can argue that Trump’s baseless warnings about voter fraud related to voting by mail haven’t yet “misrepresented methods for voting or voter registration.” But you can’t say Facebook isn’t an arbiter of truth on election information. If you go on Facebook tonight and post that “Republicans vote a week later than Democrats,” Facebook will remove that post without even sending it to a fact-checker first.
Of course, Jack Dorsey complicated all of this by posting a confusing Twitter thread in which he said of the company’s decision to label two of Trump’s tweets: “This does not make us an ‘arbiter of truth.’ Our intention is to connect the dots of conflicting statements and show the information in dispute so people can judge for themselves.” Maybe Twitter isn’t playing truth-decider in this case — it just added a link to some tweets, after all — but it does so in plenty of other cases, and in fact updated its policies just this month so it could do so more often.
So what was Twitter’s policy rationale for labeling Trump’s tweets? Will Oremus has a tick-tock in OneZero that explains how the tweets traveled through the company. A third-party fact checker flagged them as potentially violating rules against election misinformation that date back to 2018. Twitter decided they did not, but a secondary review found that they could be eligible for one of the company’s new labels, which it introduced this month as part of an effort to fight misinformation about COVID-19.
Twitter said it made the decision to add links to Trump’s tweets about vote-by-mail fraud in line with this new policy, even though the connection between vote-by-mail and COVID-19 may not be clear to many people. (The idea is that more people will want to vote by mail to avoid getting sick at the polls — as happened to 52 people during the recent election in Wisconsin.)
Given everything Trump tweeted before Twitter decided to label two of his tweets — threatening nuclear war comes to mind — it does seem strange the company chose mail-in-ballots, of all places, to challenge the president. It would have been a more clear-cut violation if the president had tweeted “Democrats aren’t allowed to vote in November,” for example, or “I’m canceling the election.” But the spirit of Trump’s tweets is to suppress voter turnout and misrepresent the legitimacy of legally cast, mail-in ballots — and that seems like as good a content-moderation hill for a company to die on as any other.
It is messy, though, and while Twitter has always been messy, Facebook works hard to keep things consistent. And this is the basic reason Trump’s tweets got a label on Twitter while the same words, cross-posted to Facebook, went unlabeled. Trump might have walked right up to the line with his threats about voter fraud, but Facebook didn’t see a clear violation. The real policy, as ever, is what you enforce. Twitter took a novel approach to putting limits on the president; that’s fine for Twitter, but Facebook is staying out of it.
Today in news that could affect public perception of the big tech platforms.
🔼 Trending up: Google is giving 5,300 local newsrooms around the world funding to survive the pandemic. The grants range from $5,000 to $30,000. (Google)
🔼 Trending up: TikTok partnered with 800 creators who’ve been affected by the pandemic to create learning content on the platform. The creators get grants from TikTok’s $50 million Creative Learning Fund. (TikTok)
🔼 Trending up: Google partnered with The National Alliance on Mental Illness to help people struggling with anxiety during the coronavirus crisis. Now, people who search for information about anxiety on Google will see a clinically-validated questionnaire along with symptoms and common treatments. (Google)
🔽 Trending down: Amazon.com was down for many people in the US for a short while Thursday afternoon. The news doesn’t look good for a company that prides itself on reliability. (Jay Peters / The Verge)
Total cases in the US: More than 1,721,200
Total deaths in the US: At least 101,200
Reported cases in California: 102,565
Total test results (positive and negative) in California: 1,736,894
Reported cases in New York: 371,559
Total test results (positive and negative) in New York: 1,811,544
Reported cases in New Jersey: 157,815
Total test results (positive and negative) in New Jersey: 660,325
Reported cases in Illinois: 114,612
Total test results (positive and negative) in Illinois: 803,973
⭐ Twitter is continuing to fact-check Donald Trump’s tweets as the war between the president and social media platforms escalates. And other people’s tweets, too. Here are Kate Conger and Mike Isaac at The New York Times:
Late Wednesday, it added fact-checking labels to messages from Zhao Lijian, a spokesman for China’s foreign ministry who had claimed that the coronavirus outbreak may have begun in the United States and been brought to China by the U.S. military.
Twitter also added notices on hundreds of tweets that falsely claimed a photo of a man in a red baseball cap was Derek Chauvin, an officer involved in the death of George Floyd, an African-American man who died this week after being handcuffed and pinned to the ground by police. The Twitter label alerted viewers that the image was “manipulated media.”
A German official suggested Twitter reincorporate in Germany if things get too bad with Trump. “Here you are free to criticize the government as well as to fight fake news,” tweeted Thomas Jarzombek, who works on economic development. (Douglas Busvine / Reuters)
Facebook is rolling out a new policy to limit inauthentic behavior. The company is going to require that the people behind “high reach” profiles verify their identity. Viral posts from unverified accounts will have limited reach. This is a really interesting move, and I’m planning to learn more and share soon. (Taylor Lyles / The Verge)
Researchers are trying to “flatten the curve” of the infodemic by rooting out coronavirus misinformation. They say the battle can’t be won completely — it’s just not possible to stop people from spreading ill-founded rumors. (Philip Ball and Amy Maxmen / Nature)
Democrats in Congress are joining the GOP fight against TikTok. They’re calling on the Federal Trade Commission to investigate the app for allegedly violating the Children’s Online Privacy Protection Act. (Alexandra Levine / Politico)
The ACLU is suing the facial recognition firm Clearview AI for alleged privacy violations. The complaint says Clearview illegally collected and stored data on Illinois citizens in violation of the Biometric Information Privacy Act. (Nick Statt / The Verge)
Kickstarter employees were the first white-collar technology workforce to unionize in US history. This article describes how they pulled it off. (Bryce Covert / Wired)
The tenant screening industry is growing, fueled by the rapid expansion of rentership in the US. The companies produce cheap and fast reports for an estimated nine out of 10 landlords across the country. But the reports are extremely flawed, attributing crimes to prospective tenants that they never committed. (Lauren Kirchner and Matthew Goldstein / The Markup and The New York Times)
⭐ Amazon plans to offer permanent jobs to about 70 percent of the people it hired to temporarily meet consumer demand during the coronavirus pandemic. The company will begin telling 125,000 warehouse employees in June that they can keep their roles longer-term. Jeffrey Dastin at Reuters has the story:
The decision is a sign that Amazon’s sales have increased sufficiently to justify an expanded workforce for order fulfillment, even as government lockdowns ease and rivals open their retail stores for pickup.
Amazon started the hiring spree in March with a blog post appealing to workers laid off by restaurants and other shuttered businesses, promising employment “until things return to normal and their past employer is able to bring them back.”
ByteDance is shifting TikTok’s power out of China amid ongoing regulatory scrutiny. The company has expanded its engineering and research operations in Mountain View and hired a New York-based investor relations director to stay in touch with major investors. (Yingzhi Yang, Echo Wang and Alexandra Alper / Reuters)
Kuaishou, the second-largest social video app in China, is launching an app in the US to challenge TikTok. The app, called Zynn, allows users to upload, edit and share short videos. In a twist, it’s also paying users to watch content and recruit other users. (Yunan Zhang / The Information)
Charli D’Amelio is TikTok’s biggest star. This profile tries to unpack why. (Travis M. Andrews / The Washington Post)
Snap is planning to let other companies build pared-down versions of their mobile apps within Snapchat. The move mimics the approach of popular Chinese social app WeChat. (Alex Heath / The Information)
A YouTuber with hundreds of thousands of followers who shared her family’s experience of adopting a toddler from China announced that she and her husband had permanently placed their child with another family after unspecified behavioral issues. The YouTuber spent years creating — and monetizing — content with her now-former son. (Stephanie McNeal / BuzzFeed)
YouTube is full of scams advertising access to OnlyFans content for free if you follow a few steps to “unlock” premium accounts. The videos instruct users to download and interact with apps for a certain time in exchange for access to OnlyFans that never materializes. (Samantha Cole / Vice)
There’s a scientific explanation for why Zoom is so exhausting. Looming heads, staring eyes, a silent audience, and that millisecond delay disrupt normal human communication. (Betsy Morris / The Wall Street Journal)
Things to do
Stuff to occupy you online during the quarantine.
Start a watch party with friends on Hulu. Even if the appeal of this joint viewing thing continues to escape me.
Read an oral history of YouTube’s early days. “The office itself was disgusting,” a former content moderator reveals!
Schedule a tweet using Twitter’s web app — and know what it is to feel truly alive for the first time.