
TikTok’s biggest problem is outside its control

Ultimately, the fate of the ByteDance-owned app belongs to two global superpowers


Illustration by Alex Castro / The Verge


Last week I wrote about some of the forces putting the squeeze on TikTok — and, uncharacteristically for me when I write about TikTok, the course of events did not immediately reverse and put TikTok into a stronger position. Instead, by several measures, the situation for ByteDance’s popular video app got significantly worse.

For starters, Peter Navarro, an adviser to the president, said in an interview with Fox News on Sunday that he expects President Trump will take “strong action” against TikTok and a fellow Chinese-made social app, WeChat. Worse from ByteDance’s perspective is that Navarro said the United States will not back down even if TikTok is sold to an American buyer. Here’s Bloomberg:

The Trump administration is “just getting started” with the two apps, and he would not rule out the US banning them, Navarro said on Fox News on Sunday. Even if TikTok is sold to an American buyer, it would not solve the problem, he said.

“If TikTok separates as an American company, that doesn’t help us,” Navarro said. “Because it’s going to be worse – we’re going to have to give China billions of dollars for the privilege of having TikTok operate on US soil.”

At the root of this concern is that no matter what ByteDance says about TikTok’s independence from Chinese governance, ultimately it must do whatever the country’s brutal, repressive authoritarian regime demands. Russell Brandom examined American anxieties about the app in The Verge:

For experts, the concern is less about mass data collection and more about targeted operations that are harder to detect. Because TikTok maintains the standard level of invasive app access, the Chinese intelligence services could potentially use it as a portal to surveil specific users or gather compromising information. The FBI has already raised the alarm about Chinese spies stealing US trade secrets, so that same access is even scarier for Amazon or Wells Fargo, which might plausibly have proprietary tech that China wants to steal. As long as the Chinese government can put pressure on TikTok through its ownership, there will be ways to snoop on users without raising alarms. That makes it hard for high-risk users to feel entirely safe, no matter what the app does.

Anxiety over foreign interference has reared its head before. As recently as April, Zoom was caught rerouting external video calls through China, a behavior far more serious than anything we’ve seen from TikTok. Equifax lost data from more than 100 million people (possibly working for Russia, depending on who you believe), which is certainly more information than TikTok has ever had access to. But there’s something about TikTok’s ownership entanglement that makes it harder to forgive. Even if Zoom was careless or Equifax was outmatched, there’s a belief that they’re still fighting on the right side. But political pressure can’t be fixed with security audits. If you believe TikTok is collaborating with Chinese intelligence services, there’s simply nothing the company can do to reassure you.

The other fear is that China will influence ByteDance, either directly or indirectly, to push a worldview that embraces censorship and political oppression onto America and the world at large. This is not an abstract fear — we have already seen it happen with content related to the NBA and Hong Kong, as Ben Thompson documented last year. (TikTok says NBA content may not have appeared in those searches due to issues with language and localization, but was not actively removed from the platform.) And censorship on the app still appears to reflect a Chinese worldview far more than it reflects an American one; only recently did the app’s censors begin allowing people with large tattoos, and what the company said was a bug temporarily appeared to hide content related to Black Lives Matter. (The content was visible but a bug made the view count appear to be zero.) It’s not a stretch to imagine Beijing eventually using TikTok to distribute propaganda — and without leaving any fingerprints, either.

And so: there are bans. Amazon emailed employees telling them to delete the app from corporate phones, then backtracked. Then Wells Fargo banned the app from corporate devices, and stuck to it. The Democratic and Republican national committees have now both told staffers not to install the app on their phones for fear that TikTok could be sending back unspecified data to the Chinese government.

What sort of data? Well, more researchers have been looking into that. At the Washington Post, Geoffrey Fowler asked Patrick Jackson of privacy company Disconnect to take a look. “TikTok doesn’t appear to grab any more personal information than Facebook,” Fowler writes. “That’s still an appalling amount of data to mine about the lives of Americans. But there’s scant evidence that TikTok is sharing our data with China.” He goes on:

Jackson, from Disconnect, said the app sends an “abnormal” amount of information from devices to its computers. When he opened TikTok, he found approximately 210 network requests in the first nine seconds, totaling over 500 kilobytes of data sent from the app to the Internet. (That’s equivalent to half a megabyte, or 125 pages of typed data.) Much of it was information about the phone (like screen resolution and the Apple advertising identifier) that could be used to “fingerprint” your device even when you’re not logged in.

And there is a hole in our ability to verify all of what TikTok does. Jackson said the app uses some technical measures to encode its activity, meaning some of it is hidden from independent researchers looking under the covers. “In order to disrupt hackers and those who wish to manipulate the app, we use obfuscation to help reduce automated attacks, like bots,” [a spokeswoman] said.
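The device fingerprinting Jackson describes is a well-understood technique: combine a handful of stable, freely readable device attributes into a single identifier that tracks a device whether or not anyone is logged in. A minimal sketch of the general idea, with entirely hypothetical attribute names and values:

```python
# Sketch of device fingerprinting, the general technique Jackson
# describes: stable device attributes (screen resolution, OS version,
# advertising identifier, etc.) are canonicalized and hashed into an
# identifier that stays the same whether or not a user is logged in.
# The attribute names and values below are hypothetical, not TikTok's.
import hashlib

def device_fingerprint(attrs: dict) -> str:
    """Hash a sorted set of device attributes into a stable identifier."""
    # Sorting makes the result independent of the order attributes
    # were collected in.
    canonical = "|".join(f"{k}={attrs[k]}" for k in sorted(attrs))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

phone = {
    "screen_resolution": "1170x2532",
    "os_version": "iOS 13.6",
    "advertising_id": "A1B2C3D4-0000-1111-2222-333344445555",
    "carrier": "ExampleCell",
}

# Same attributes always produce the same ID — no login required.
print(device_fingerprint(phone))
```

The point is that none of these attributes is sensitive on its own; it is the combination, hashed together, that uniquely picks out a device — which is why this kind of collection is hard for users to notice or opt out of.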

Which basically leaves us back where we started: with no evidence TikTok is doing anything extraordinarily shady with our data, and no evidence it could stop the Chinese government from forcing it to at any point.

Perhaps realizing that the app may be caught up in an intractable conflict between global superpowers, TikTok stars have begun to panic. In the New York Times, Taylor Lorenz finds young people worried about losing a key outlet for creative freedom during months of quarantine — and also, for some number of them, their livelihoods.

Influencers who watched the fall of Vine, another popular short-form video app, in 2016 learned the importance of diversifying one’s audience across platforms. But even for TikTok’s biggest stars, moving an audience from one platform to another is a huge undertaking.

“I have 7 million followers on TikTok, but it doesn’t translate to every platform,” said Nick Austin, 20. “I only have 3 million on Instagram and 500,000 on YouTube. No matter what it’s going to be hard to transfer all the people I have on TikTok.”

ByteDance is reportedly considering all manner of proposed solutions to keep TikTok alive around the world — it’s expected to generate $500 million in revenue this year, after all. But it seems clear that whatever happens to TikTok, ByteDance itself won’t be in control of the outcome.

And that, of course, has been TikTok’s problem all along.


Facebook is considering a ban on political ads in the days leading up to the US election, Kurt Wagner reports at Bloomberg. In some quarters, this was received as a capitulation to vocal calls for the company to ban political ads altogether. In my view, it’s less a full-scale retreat than a reasonable balancing of equities. Politicians get access to Facebook’s ad platform for the vast majority of the campaign — and for the most part, their lies will still not be subject to fact-checking.

But in the waning days of the campaign, candidates will have to turn elsewhere for paid promotion. That reduces the chances that a particularly vile ad goes massively viral before it can be removed, or before the free press can fact-check it and distribute any articles intended to debunk it.

It may also make life harder for challengers running against well-known incumbents, who could have used the final promotional push that Facebook ads provide. (Democrats and Republicans have been equally concerned about this outcome in the past.)

At the same time, candidates will still be able to post on their own pages, where it seems to me they will be at just as great a risk of saying something terrible as they would be in an ad. And those posts might get even wider distribution than their ads, if history is any guide.

By this point, I’m more or less persuaded that an ad blackout in the days before the election — of the sort that is already common in Australia — is the right thing to do. But I remain unconvinced it will make any significant difference in the basic logic of campaigning.

Pushback: the Clubhouse rules

I heard from some frustrated venture capitalists last week after I wrote about Clubhouse, an audio-only social app currently in closed beta. At that point Clubhouse had no in-app mechanisms for reporting harassment, and its community guidelines were little more than legal boilerplate. But wasn’t I being a little too harsh on the co-founders, some of you wanted to know? Clubhouse has just two full-time employees; is this really the time to beat them up over trust and safety issues?

My answer is that this is precisely the time to start thinking about trust and safety. For too long Silicon Valley social apps punted on these questions until they were bona fide crises. I believe community standards are something that an app should launch with, rather than wait to develop until their first content moderation crisis. If that’s me being “unreasonable,” it feels like the kind of unreasonable I can feel good about.

In any case, I was heartened to see that over the weekend Clubhouse wrote a blog post about their dramatic week and posted some community guidelines. The guidelines are at times comically naive — how, exactly, does an app that essentially consists of unlimited live phone calls intend to ensure that users “not spread false information”?

But you’ve got to start somewhere, and I’m glad Clubhouse did.

Pushback: Facebook’s size

Writing about Facebook last week, I said something I say a lot, which is that Facebook’s problems with hate speech and civil rights violations would be smaller if Facebook itself were smaller. Not everyone agrees with me. Particularly people who work at Facebook, but also other people. One of them (and there were others!) is friend of the newsletter Evelyn Douek. She writes:

I agree the size matters and I probably think it should be broken up for other reasons, but I’m not sure it really solves any of the content moderation problems. First, I think there’s no putting these concerns or the scale of the internet back in the tube. People will find ways to share information across networks; some of it will be awful. [...] The problems might be less extreme, and it sure would be nice to stop it being Mark’s Choice (although there are other ways to do that too...), but I don’t think they go away. Second, we have the same concerns now about other, smaller platforms too. E.g., Twitter. And we worry about where the extremists go when we knock them off Facebook. Again, maybe we’re not as concerned, but I think the fundamental problem of how and who decides what content can be online remains, regardless of size.

I think Reddit is the strongest argument against decentralization solving it all. In the end, we needed a powerful, centralized gatekeeper to come in and be a chokepoint. The thing about powerful gatekeepers is they have power!

Points taken — but I don’t know. The thing about Facebook is that it doesn’t just host hate speech, it (almost always unwittingly) recruits new adherents for that ideology through algorithmic promotion of emotionally charged posts and virulent right-wing groups. At the end of the day, a smaller Facebook — which is to say, a Facebook that does not include Instagram or WhatsApp — has fewer potential recruits. If there’s an argument that a smaller Facebook would somehow make our global hate speech problem worse, I still have yet to read it.

The Ratio

Today in news that could affect public perception of the big tech platforms

⬆️ Trending up: Apple has allocated $400 million of a planned $2.5 billion to fight California’s housing crisis. The first announced project will create ... 250 units of housing.

🔃 Trending sideways: Google said it will invest $10 billion “in India’s digital future.” I’m pretty sure this mostly means “creating new Google customers,” but we’ll see!

⬇️ Trending down: Pinterest is “hiding” rather than removing content that violates its policies, but much of it remains discoverable — and is being actively promoted via algorithmic recommendations. (Sarah Emerson / OneZero)


California is reportedly planning its own antitrust investigation of Google. The company’s home was conspicuously absent from the list of 48 states — along with Puerto Rico and the District of Columbia — that are taking part in a separate investigation led by Texas. Leah Nylen reports:

The California antitrust probe is a separate investigation from the multi-state effort, two of the individuals said. All of the individuals spoke on condition of anonymity to talk openly about a confidential probe. Alabama is now the only state that is not investigating the company.

It was not immediately clear what aspect of Google’s business California is targeting. A spokesperson for California Attorney General Xavier Becerra declined to comment. 

Google will likely argue that it does not control enough of the advertising industry to effectively set rates or outmaneuver its competitors. The company argued as much in a recent document sent to Australian regulators. (David McCabe / New York Times)

Tech giants joined a lawsuit against the Trump administration over its efforts to strip foreign students of their visas. “In an amicus brief filed Monday, the U.S. Chamber of Commerce, as well as Facebook, Google, Twitter, Salesforce, Microsoft and more, sided with Harvard and MIT in their lawsuit against the U.S. government.” (Emily Birnbaum / Protocol)

The Supreme Court will hear a Facebook robocalling case. “Facebook was sued in 2015 by non-Facebook user Noah Duguid, who complained that he’d been receiving unwanted text messages from the site.” (Adi Robertson / The Verge)

“The three most prominent U.S. anti-vaccination organizations — National Vaccine Information Center, Children’s Health Defense, and Informed Consent Action Network — are using Facebook and other major social media platforms to lay the groundwork for widespread coronavirus vaccine rejection.” (Timothy Johnson / Media Matters)

“Google may be able to stave off a full-scale EU antitrust investigation into its planned $2.1 billion bid for Fitbit by pledging not to use Fitbit’s health data to help it target ads, people familiar with the matter said.” (Foo Yun Chee / Reuters)

The United States is retaliating against France’s imposition of a “tech tax” with new tariffs. If you’re relying on French cosmetics, soap and handbags, to get through quarantine, they’re about to become more expensive. (Jim Tankersley / New York Times)

Law enforcement authorities are using a company called Dataminr to help monitor social media related to protests. Dataminr relies on Twitter to provide the full firehose of tweets, raising questions about its complicity in surveillance. (Sam Biddle / The Intercept)

An unexpected bottleneck in COVID-19 response: the United States’ dependence on fax machines. (Sarah Kliff and Margot Sanger-Katz / New York Times)

The president of the Coalition for a Safer Web is calling for a new “social media standards board.” Marc Ginsberg says social networks should lose Section 230 immunity if they fail to comply with its regulations. (The Hill)

“Social platforms are authoritarian spaces dressed up in borrowed democratic language,” John Herrman writes in an essay about tech platforms’ insistence on calling themselves “communities.” “Their policies and rules are best understood as attempts at automation. Stylistically, they’re laws. Practically, and legally, they’re closer to software updates.” (New York Times)

Signal is storing user data on company servers for the first time, worrying security-minded users. The company says the move will let users communicate without sharing phone numbers, which is the status quo and raises security concerns of its own. Everything is tradeoffs! (Lorenzo Franceschi-Bicchierai / Vice)

Twitch lifted its temporary ban on President Trump. The president was prevented from streaming after airing an old rally in which he called Mexican immigrants criminals and “rapists.” (Jake Kastrenakes / The Verge)

Trump just made his 20,000th false or misleading claim since becoming president. That is not a typo. (Glenn Kessler, Salvador Rizzo and Meg Kelly / Washington Post)

Rascally TikTok teens are trying to pull one over on the president again. They’re mad Trump has suggested the app be banned, and have been review-bombing the president’s official apps. (Shelly Banjo and Misyrlena Egkolfopoulou / Bloomberg)

India’s Supreme Court approved sending legal summonses via WhatsApp, Telegram and email. Better check that “other inbox”! (Abhimanyu Sharma / Times Now News)


The pandemic has been an absolute gift to mobile apps. Here’s Sarah Perez in TechCrunch:

As the world continued to cope with the impact of the coronavirus outbreak, the second quarter of 2020 became the largest yet for mobile app downloads, usage and consumer spending. According to new data from app store intelligence firm App Annie, mobile app usage grew 40% year-over-year in the second quarter of 2020, even hitting an all-time high of over 200 billion hours during April. Consumer spending in apps, meanwhile, hit a record high of $27 billion in the second quarter. And app downloads reached a high of nearly 35 billion.

The growth in app usage has been fueled by social distancing and lockdown measures, as countries around the world try to quell the spread of the novel coronavirus.

Spotify, Pinterest, and Tinder went down Friday after another bug in Facebook’s login SDK. A nearly identical problem led to similar issues in May. (James Vincent / The Verge)

After this woman was diagnosed with cancer, her Facebook feed came to be dominated by ads for phony “alternative” cures. “Interestingly, I haven’t seen any legitimate cancer care ads in my News Feed,” she writes, “just pseudoscience.” (Anne Borden King / New York Times)

Instagram banned gay conversion therapy ads on Friday. Raising the question: you could post gay conversion therapy ads on Instagram? (Rob Picheta / CNN)

If you spend enough money on Facebook ads you can have dinner at Sheryl Sandberg’s house. How the company’s efforts to build goodwill among its clients helped propel it to $71 billion in ad sales. (Hannah Murphy / Financial Times)

Ime Archibong runs Facebook’s New Product Experimentation group and has become one of the highest-ranking Black leaders at the company. He’s also one of the company’s best-liked leaders, period. (Sal Rodriguez / CNBC)

Twitter debates feel unproductive because no one is arguing in good faith any more, this piece argues. Instead, people are skipping to the assumed end of the debate right away and trading ad hominem attacks. (Lili Loofbourow / Slate)

“Google’s campus security system subjected Black and Latinx workers to bias and prompted complaints to management, according to people familiar with the situation, leading the company to scrap a key part of the approach.” (Nico Grant / Bloomberg)

Things to do

Watch TikToks of coronavirus cases rising and falling represented as rollercoaster rides. Hats off to the project’s creator, 17-year-old Aidan Carroll.

Read Subtweets. Every single issue to date has been a gift. This one has essential reading on, among other things, Cake Twitter.

Those good tweets

Talk to us

Send us tips, comments, questions, and political ads.