Every tech platform has two policies about what they will allow: the policy that’s written, and the policy that’s enforced. Ideally there would be no gap between these, but in practice it almost can’t be helped.
To take one dumb example, the embryonic audio social network Clubhouse posted its first set of community guidelines the other week. One of these rules says “we prohibit the spread of or attempts to spread false information or news.” In theory, this means that only the truth is allowed on Clubhouse. Not a bad policy, really, but when you’re building a network that someday anyone will be able to use to essentially broadcast their phone calls, I imagine that Clubhouse will find it extremely hard to enforce. Of course people will lie on Clubhouse. They will lie all the time. The question is who will be punished for doing so.
I thought about all this on Tuesday night after reading that Twitter had banned 7,000 accounts and put restrictions on 150,000 more related to the sprawling conspiracy theory slash alternate reality game slash religion QAnon. Ben Collins and Brandy Zadrozny had the scoop at NBC News:
Twitter will stop recommending accounts and content related to QAnon, including material in email and follow recommendations, and it will take steps to limit circulation of content in features like trends and search. The action will affect about 150,000 accounts, said a spokesperson, who asked to remain unnamed because of concerns about the targeted harassment of social media employees.
The sweeping enforcement action will ban QAnon-related terms from appearing in trending topics and the platform’s search feature, ban known QAnon-related URLs and prohibit “swarming” of people who are baselessly targeted by coordinated harassment campaigns pushed by QAnon followers.
The ban on “swarming” is particularly significant. Often conspiracy theorists will falsely declare various celebrities and politicians as being at the heart of some lunatic Satan-worshiping scheme, and their followers will descend on the timelines of the accused — and their friends and family — to wreak havoc.
And yet, how will swarming be detected? Twitter updated its policy by tweeting it, which means the update itself was brief to the point of being opaque. You will be suspended if you are found to be “coordinating abuse around individual victims.” OK. But what if you’re just one of thousands of accounts in the victim’s mentions tweeting “stop eating babies”? I don’t see anything in the policy about that — perhaps because, from an enforcement perspective, it could be difficult to determine what makes abuse “coordinated.” As Evelyn Douek notes here, Twitter itself has not yet defined what it means by coordination.
In any case, it seems that Twitter won’t be the only network to take a harder stance against QAnon. Facebook plans to escalate its own enforcement against its adherents, Kate Conger reports at the New York Times. She also offers a good concise description of what QAnon is, for those still unaware:
Facebook is preparing to take similar steps to limit the reach of QAnon content on its platform, said two Facebook employees with knowledge of the plans, who spoke on the condition of anonymity. The company has been coordinating with Twitter and other social media companies and plans to make an announcement next month, the employees said. Facebook declined to comment.
The QAnon theories stem from an anonymous person or group of people who use the name “Q” and claim to have access to government secrets that reveal a plot against President Trump and his supporters. That supposedly classified information was initially posted on message boards before spreading to mainstream internet platforms and has led to significant online harassment as well as physical violence.
All of this is welcome news, since QAnon conspiracists have increasingly been associated with real-world violence. (Pizzagate, a forerunner of QAnon, famously led to a gunman shooting up a pizza restaurant.)
At the same time, QAnon is also a political movement. By some counts, a dozen Republicans now running for national office profess allegiance to QAnon. Some are expected to win seats in Congress. So: will they be affected by the ban?
“A Twitter spox tells me that ‘currently candidates and elected officials will not be automatically included in many of these actions broadly,’” CNN’s Oliver Darcy reported. That suggests that some of these QAnon adherents who win Congressional seats could continue to use Twitter to promote the idea that there is a “deep state” devoted to child murder, potentially inciting their followers to violence. It’s an admittedly difficult position for Twitter (or Facebook) to be in — the bar for de-platforming an elected official should be high. And yet it’s hard to imagine that members of the QAnon caucus won’t clear it.
It’s easy to announce some strict new policy against some violent group and get some praise. But we’ve seen enough of these adjustments recently — I’m thinking here of Facebook’s Boogaloo ban — to know that the devil is very much in the details. I believe Twitter intends to eliminate as much QAnon content from the platform as it can. But I’m less sure, looking at its flimsy tweets on the subject, how it will.
On Tuesday I wrote about the New York Times’ Kevin Roose and his tweets about the most popular links on Facebook, which have aggravated company executives by painting what they say is a misleading picture of the News Feed. I got an unusually polarized response, and wanted to share it here. The pro-Roose side, which you can get a flavor of in the responses to CNN reporter Oliver Darcy’s tweet about my story, is about what you would expect: Facebook bad, go Kevin.
Meanwhile, people in Facebook’s orbit criticized me for going too easy on Roose’s tweets. The data he’s using comes from CrowdTangle, a Facebook-owned tool for analyzing the spread of stories on social networks. But what it measures is activity on Facebook publisher pages, not links shared throughout the ecosystem overall. And the people who interact heavily with Facebook publisher pages are not typical Facebook users — they’re more partisan than average, and that distorts the picture painted in the lists, people told me. But because that picture fits reporters’ general view of Facebook, which is that it is a powerful engine for promoting the conservative movement and its leading figures, we’ve all accepted the idea that CrowdTangle data is a good gauge of what’s popular on Facebook.
I don’t know. Open CrowdTangle. “The easiest way to keep track of what’s happening on social media,” a banner blares. Log in, and a prominent tab called “Leaderboard” links you to a list of publishing categories that you can sort by interactions or views. There is not a disclaimer when you log in that says, “just so you know, CrowdTangle data paints a misleading picture of what’s really happening on Facebook.” Instead it offers a bunch of data in attractive charts about content that very much appears to be popular. Hence the leaderboard!
Anyway, if Facebook wants to do something about the way reporters like Roose represent data on a tool that Facebook itself owns, it seems to me that they could start with the tool rather than the tweets.
Last month I wrote about how the independent developer Basecamp successfully bent Apple’s App Store policies after complaining about the company’s anticompetitive practices surrounding its own email app and certain revenue it hoped to extract from Basecamp’s email app Hey. Now an antitrust hearing has been scheduled for Monday in the House of Representatives, where Apple CEO Tim Cook is expected to testify and take questions about App Store competition.
Ahead of that, today Apple held a briefing to announce that it had commissioned a study of the commissions taken by various other e-commerce players. Economists at something called the Analysis Group were called in to look up the various cuts taken by Google, Airbnb, and other platforms, and they did, and on Wednesday they published a PDF. They were neither asked nor allowed to comment on the significance of the findings or the fairness of the App Store.
Anyway, OneZero’s Will Oremus had my thoughts exactly when he said: “I wrote in depth about the antitrust case against Apple in February. The core argument was never that the 30% rate was out of line with the industry. It’s that developers had no choice but to pay it, even while competing with Apple’s own apps that do not.”
Today in news that could affect public perception of the big tech platforms.
🔼 Trending up: Facebook is creating new teams at Facebook and Instagram to study racial bias in its products. The “equity and inclusion team” at Instagram will examine how Black and Hispanic users in the US are affected by the company’s algorithms. (Deepa Seetharaman and Jeff Horwitz / The Wall Street Journal)
⭐ Slack filed a complaint against Microsoft with the European Commission, accusing the tech giant of using its market power to try to crush competition. Unlike the other big tech companies, Microsoft has escaped antitrust scrutiny so far. New York Times reporter Steve Lohr has the details of the complaint:
Slack claims that Microsoft has illegally tied its collaboration software, Microsoft Teams, to its dominant suite of productivity programs, Microsoft Office, which includes Outlook, Word, Excel and PowerPoint. That bundling tactic, Slack contends, is part of a pattern of anticompetitive behavior by Microsoft.
“Slack threatens Microsoft’s hold on business email, the cornerstone of Office, which means Slack threatens Microsoft’s lock on enterprise software,” Jonathan Prince, vice president of communications and policy at Slack, said in a statement.
Republicans are calling on Twitter CEO Jack Dorsey to testify at the upcoming antitrust hearing alongside the CEOs of Amazon, Apple, Facebook and Google. In a letter to Democratic leadership, Jim Jordan (R-OH) wrote: “We believe there is bipartisan interest to hear from Twitter about its power in the marketplace, its role in moderating content on its platform, and the causes for its recent highly publicized security breaches.” (Lauren Feiner / CNBC)
Italy’s antitrust authority raided the offices of Apple and Amazon after a local retailer complained they’d been banned from re-selling products on the online marketplace. The companies are suspected of unfairly curbing the sale of Apple’s Beats products by anyone other than members of its official reseller program. (Nate Lanxon and Natalia Drozdiak / Bloomberg)
The Senate Commerce communications, technology, innovation and the internet subcommittee has a hearing scheduled for July 28th to discuss the PACT Act from Sens. John Thune (R-S.D.) and Brian Schatz (D-Hawaii). The bill is an attempt to curb Section 230 protections for big tech companies. (Ashley Gold / Axios)
Bangladesh’s regulator ordered telecom operators to stop providing free access to social media companies. Over the last 10 years, companies like Facebook and Twitter have struck partnerships, known as zero-rating deals, with telecom operators to make their services free to users to accelerate growth. (Manish Singh / TechCrunch)
Ellen Pao, who briefly led Reddit and was the subject of a series of targeted harassment campaigns, has some suggestions for how Facebook can deal with hate on its platform. First up? Stop outsourcing content moderation and make moderators employees. (Julia Carrie Wong / The Guardian)
The three brothers behind MeidasTouch — a media company turned super PAC — are successfully spinning up viral content for social networks by using Trump’s own words against him. Recent videos targeting Donald Trump Jr. and Ivanka Trump have more than 14 million views combined on Twitter. (Gregory Krieg and Ryan Nobles / CNN)
Inside China’s global propaganda apparatus. The Chinese Communist Party uses both broadcast and social media to try to shape the narrative, including leveraging influencers with millions of followers to spread messages on social platforms. So far, it has been less effective than Russia. (Stanford Internet Observatory)
⭐ A group of US investors in ByteDance is considering buying control of TikTok as a way of dealing with a possible US ban of the app. Juro Osawa and Tom Dotan at The Information have the story:
Under the idea being discussed, a group of existing ByteDance shareholders, possibly including Sequoia Capital, General Atlantic, SoftBank and New Enterprise Associates, would collectively buy the majority stake, said people familiar with the idea. ByteDance may continue to own a minority stake, although probably without board representation. The investors’ hope would be to take TikTok public at some point in the future so they could eventually sell their stake, the people said.
Several issues remain unresolved, and it isn’t clear if all investors are behind the idea or whether it is being driven by a handful of them. One question is whether the government would allow ByteDance to continue owning even a minority stake. Another is whether the investors would buy all of TikTok, just the U.S. operations, or the U.S. and India operations together. TikTok has already been banned in India, its biggest market by users, due to tensions between the Chinese and Indian governments.
Snap had a strong second quarter despite the ongoing pandemic. The company’s user base is up by 9 million people daily over last quarter, and its revenue is up 17 percent year over year, at $454 million. (Ashley Carman / The Verge)
Snap launched an internal investigation into allegations of racism and sexism at the company. The move comes after former employees spoke out in June about a lack of diversity at the company, and what they called biased editorial practices.
Facebook is adding an extra layer of security to Messenger. Now, the app can be locked on or shortly after closing, and require Face ID or Touch ID to open back up. (Jacob Kastrenakes / The Verge)
WhatsApp is piloting a project to offer credit, insurance and pension products to low-income people in India. The move is part of Facebook’s push to get into the digital payments space in India. (Manish Singh / TechCrunch)
Joe Rogan and Wall Street Journal writer Abigail Shrier equated being transgender to having anorexia, joining a cult, and “demonic possession” in a recent episode of Rogan’s show uploaded to YouTube. Rogan is one of the most influential podcast hosts in the world, and his podcast was just acquired by Spotify. Does Spotify even have a policy team? Does it have community guidelines? Expect a lot more reporting like this in coming months. (Alex Paterson / Media Matters)
Thorn, a nonprofit that builds tools to fight child sexual exploitation, launched a tool to help small- and medium-size tech companies find, remove and report child sexual abuse material. The tool, called Safer, is designed for companies that don’t have the resources to build their own systems. Every company that lets users upload media to its servers needs to employ tech like Thorn has built here. (Olivia Solon / NBC)
Things to do
Stuff to occupy you online during the quarantine.
Endlessly doomscroll. “Doomscrolling” refers to the behavior of helplessly thumbing through social media feeds for fear of missing some even more catastrophic news than the events you already know about. Ben Grosser’s latest social media art project invites you to scroll through an infinite series of disturbing partial headlines: “Outbreak Grows,” “Curve Rising Too Quickly,” “Experts Say Future Uncertain.”