Three takeaways from a visit to TikTok’s new transparency center

It’s a noble effort to build trust in social networks — but can it help the company survive beyond November?

[Illustration: tessellated TikTok logos against a dark background. Alex Castro / The Verge]

In July, amid increasing scrutiny from the Trump administration, TikTok announced a novel effort to build trust with regulators: a physical office known as the Transparency and Accountability Center. The center would allow visitors to learn about the company’s data storage and content moderation practices, and even to inspect the algorithms that power its core recommendation engine.

“We believe all companies should disclose their algorithms, moderation policies, and data flows to regulators,” then-TikTok CEO Kevin Mayer said at the time. “We will not wait for regulation to come.”

Regulation came a few hours later. President Trump told reporters on Air Force One that he planned to ban TikTok from operating in the United States, and a few days later he did. The president set a deadline for ByteDance to sell TikTok by September 15th — that is, this coming Tuesday — and Mayer quit after fewer than 100 days on the job. (The deadline has since been changed to November 12th, but Trump also said today that it is somehow still Tuesday? Help?)

With so much turmoil, you might expect the company to set aside its efforts to show visitors its algorithms, at least temporarily. But the TikTok Transparency and Accountability Center is now open for (virtual) business — and on Wednesday I was part of a small group of reporters who got to take a tour over Zoom.

Much of the tour functioned as an introduction to TikTok: what it is, where it’s located, and who runs it. (It’s an American app, located in America, run by Americans, was the message delivered.) We also got an overview of the app’s community guidelines, its approach to child safety, and how it keeps data secure. All of it is basically in keeping with how American social platforms manage these concerns, though it’s worth noting that 2-year-old TikTok built this infrastructure much faster than its predecessors did.

More interesting was the section where Richard Huang, who oversees the algorithm responsible for TikTok’s addictive For You page, explained to us how it works. For You is the first thing you see when you open TikTok, and it reliably serves up a feed of personalized videos that leaves you saying “I’ll just look at one more of these” for 20 minutes longer than you intended. Huang told us that when a new user opens TikTok, the algorithm fetches eight popular but diverse videos to show them. Sara Fischer at Axios has a nice recap of what happens from there:

The algorithm identifies similar videos to those that have engaged a user based on video information, which could include details like captions, hashtags or sounds. Recommendations also take into account user device and account settings, which include data like language preference, country setting, and device type.

Once TikTok collects enough data about the user, the app is able to map a user’s preferences in relation to similar users and group them into “clusters.” Simultaneously, it also groups videos into “clusters” based on similar themes, like “basketball” or “bunnies.”

As you continue to use the app, TikTok shows you videos in clusters that are similar to ones you have already expressed interest in. And the next thing you know, 80 minutes have passed.
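
If you want to make that concrete, here is a back-of-the-envelope sketch in Python of how a cluster-based recommender along those lines might work. To be clear, this is my own illustration, not TikTok's actual code: the function names, the cosine-similarity scoring, and fields like "features" and "popularity" are all assumptions.

import numpy as np

def cold_start(video_clusters, n=8):
    """For a brand-new user, pick one popular video from each theme
    cluster (mirroring the "eight popular but diverse videos" TikTok
    says it serves first) to probe for interests."""
    picks = []
    for cluster in video_clusters.values():
        picks.append(max(cluster, key=lambda v: v["popularity"]))
        if len(picks) == n:
            break
    return picks

def build_user_vector(watched, weights):
    """Summarize a user's taste as the engagement-weighted average of
    the feature vectors (built from captions, hashtags, sounds, and so
    on) of the videos they watched."""
    vectors = np.array([v["features"] for v in watched], dtype=float)
    weights = np.array(weights, dtype=float)
    return (vectors * weights[:, None]).sum(axis=0) / weights.sum()

def recommend(user_vector, video_clusters, n=8):
    """Rank every candidate video by cosine similarity to the user's
    taste vector and return the top n for the For You feed."""
    candidates = [v for cluster in video_clusters.values() for v in cluster]

    def cosine(a, b):
        a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

    return sorted(candidates,
                  key=lambda v: cosine(user_vector, v["features"]),
                  reverse=True)[:n]

# Toy walkthrough: a new user gets a diverse probe, engages more with
# basketball, and the next batch of recommendations tilts that way.
clusters = {
    "basketball": [{"features": [1, 0], "popularity": 900}],
    "bunnies":    [{"features": [0, 1], "popularity": 700}],
}
probe = cold_start(clusters)
taste = build_user_vector(probe, weights=[3, 1])
print(recommend(taste, clusters, n=1))  # the basketball video wins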

Eventually the transparency center will be a physical location that invited guests can visit, likely both in Los Angeles and in Washington, DC. The tour will include some novel hands-on activities, such as using the company’s moderation software, called Task Crowdsourcing System, to evaluate dummy posts. Some visitors will also be able to examine the app’s source code directly, TikTok says.

I think this is great. Trust in technology companies has been in decline, and allowing more people to examine these systems up close feels like a necessary step toward rebuilding it. If you work at a tech company and ever feel frustrated by the way some people discuss algorithms as if they’re magic spells rather than math equations — well, this is how you start to demystify them. (Facebook has a similar effort to describe what you’ll find in the News Feed here; I found it vague and overly probabilistic compared to what TikTok is offering. YouTube has a more general guide to how the service works, with fairly sparse commentary on how recommendations function.)

Three other takeaways from my day with TikTok:

TikTok is worried about filter bubbles. Facebook has long denied that it creates filter bubbles, saying that people find a variety of diverse viewpoints on the service. That’s why I was interested to hear from TikTok executives that they are quite concerned about the issue, and are regularly refining their recommendation algorithm to ensure you see a mix of things. “Within a filter bubble, there’s an informational barrier that limits opposing viewpoints and the introduction of diverse types of content,” Huang said. “So, our focus today is to ensure that misinformation and disinformation does not become concentrated in users’ For You page.”

The problems are somewhat different on the two networks — Facebook is primarily talking about ideological diversity, where TikTok is more concerned with promoting different types of content — but I still found the distinction striking. Do social networks pull us into self-reinforcing echo chambers, or don’t they?
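
For what it's worth, the "mix of things" Huang describes could be enforced with something as simple as swapping a slice of each feed for videos from clusters a user has never engaged with. Here's a hypothetical sketch, again mine and not TikTok's; the mix_ratio parameter and the cluster bookkeeping are invented for illustration.

import random

def diversify_feed(ranked_feed, all_clusters, user_cluster_names,
                   mix_ratio=0.2):
    """Swap a fraction of a personalized feed for videos drawn from
    theme clusters the user has NOT engaged with, so that no single
    topic (or a concentrated pocket of misinformation) can dominate
    the For You page."""
    outside = [v for name, cluster in all_clusters.items()
               if name not in user_cluster_names
               for v in cluster]
    feed = list(ranked_feed)
    n_swaps = min(int(len(feed) * mix_ratio), len(outside))
    fresh = random.sample(outside, k=n_swaps)
    positions = random.sample(range(len(feed)), k=n_swaps)
    for pos, video in zip(positions, fresh):
        feed[pos] = video
    return feed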

TikTok is building an incident command center in Washington, DC. The idea is to be able to identify critical threats in real time and respond quickly, the company said, which feels particularly important during an election year. I don’t know how big a deal this is, exactly — for the time being, it sounds like it could just be some trust and safety folks working in a shared Slack channel? But the effort does have an undeniably impressive and redundant official name: a “monitoring, response and investigative fusion response center.” OK! 

You can’t prove a negative. TikTok felt compelled to design these guided tours amid fears that the app would be used to share data with Chinese authorities or promote Communist Party propaganda to Americans. (Ben Thompson has a great, subscribers-only interview with the New York Times’ Paul Mozur that touches on these subjects today.) The problem with the tour, though, is that you can’t show TikTok not doing something. And I wonder if that won’t make the transparency center less successful than the company hoped.

I asked Michael Beckerman, a TikTok vice president and head of US public policy, about that challenge.

“That’s why we’re trying to be even more transparent — we’re meeting and talking to everybody that we can,” Beckerman told me. “What a lot of people are saying — people that are really well read into global threats — is that TikTok doesn’t rank. So if you’re spending too much time worrying about TikTok, what are you missing?”

Oh, I can think of some things.

Anyway, TikTok’s transparency center is great — a truly forward-leaning effort from a young company. Assuming TikTok survives beyond November, I’d love to visit it in person sometime.

The Ratio

Today in news that could affect public perception of the big tech platforms.

🔼 Trending up: Google is giving more than $8.5 million to nonprofits and universities using artificial intelligence and data analytics to better understand the coronavirus crisis, and its impact on vulnerable communities. (Google)

Governing

Russian government hackers have targeted 200 organizations tied to the 2020 presidential election in recent weeks, according to Microsoft’s threat intelligence team. China has also launched cyberattacks against “high-profile individuals” linked to Joe Biden’s campaign, while Iranian actors have targeted people associated with President Trump’s campaign. Dustin Volz at The Wall Street Journal has the story:

Most of the attempted intrusions haven’t been successful, and those who were targeted or compromised have been directly notified of the malicious activity, Microsoft said. Russian, Chinese and Iranian officials didn’t immediately respond to a request for comment.

The breadth of the attacks underscores widespread concerns among U.S. security officials and within Silicon Valley about the threat of foreign interference in the presidential election less than two months away. [...]

The Russian actor tracked by Microsoft is affiliated with a military intelligence unit and is the same group that hacked and leaked Democratic emails during the 2016 presidential contest. In addition to political consultants and state and national parties, its recent targets have included advocacy organizations and think tanks, such as the German Marshall Fund, as well as political parties in the U.K., Microsoft said.

What’s the worst thing that could happen the night of the US presidential election? Experts have a few ideas. Misinformation campaigns about voter fraud, disputed results, and Russian interference are all possible scenarios. (The New York Times)

Voting machines have a bad reputation, but most of their problems are actually pretty minor and unlikely to impair a fair election. They’re often the result of ancient technology — not hacking. (Adrianne Jeffries / The Markup)

Google said it will remove autocomplete predictions that seem to endorse or oppose a candidate or a political party, or that make claims about voting. The move is an attempt to improve the quality of information available on Google before the election. (Anthony Ha / TechCrunch)

Trump is considering nominating Nathan Simington, a senior adviser at the National Telecommunications and Information Administration who helped draft the administration’s recent social media executive order, to the Federal Communications Commission. Simington is known for supporting Republicans’ “bias against conservatives” schtick. (Makena Kelly / The Verge)

A network of Facebook pages is spreading misinformation about the 2020 presidential election, funneling traffic through an obscure right-wing website, then amplifying it with increasingly false headlines. The artificial coordination might break Facebook’s rules. (Popular Information)

Facebook is re-evaluating its approach to climate misinformation. The company is working on a climate information center, which will display information from scientific sources, although nothing has been officially announced. It will look beautiful sandwiched in between the COVID-19 information center and the voter information center. (Sarah Frier / Bloomberg)

Facebook reviews user data requests through its law enforcement portal manually, without screening the email address of people who request access. The company prefers to let anyone submit a request and then check that it’s real, rather than block them with an automated system. (Lorenzo Franceschi-Bicchierai / Vice)

QAnon is attracting female supporters because the community isn’t as insular as other far-right groups, this piece argues. That might be a bigger factor in its ability to convert women than the “save the children” content. (Annie Kelly / The New York Times)

China’s embassy in the UK is demanding Twitter open an investigation after its ambassador’s official account liked a pornographic clip on the platform earlier this week. The embassy said the tweets were liked by a possible hacker who had gained access to the ambassador’s account. That’s what they all say! (Makena Kelly / The Verge)

GitHub has become a repository for censored documents during the coronavirus crisis. Internet users in China are repurposing the open source software site to save news articles, medical journals, and personal accounts censored by the Chinese government. (Yi-Ling Liu / Wired)

Brazil is trying to address misinformation issues with a new bill that would violate the privacy and freedom of expression of its citizens. If it passes, it could be one of the most restrictive internet laws in the world. (Raphael Tsavkko Garcia / MIT Technology Review)

Industry

Former NSA chief Keith Alexander has joined Amazon’s board of directors. Alexander served as the public face of US data collection during the Edward Snowden leaks. Here’s Russell Brandom at The Verge:

Alexander is a controversial figure for many in the tech community because of his involvement in the widespread surveillance systems revealed by the Snowden leaks. Those systems included PRISM, a broad data collection program that compromised systems at Google, Microsoft, Yahoo, and Facebook — but not Amazon.

Alexander was broadly critical of reporting on the Snowden leaks, even suggesting that reporters should be legally restrained from covering the documents. “I think it’s wrong that that newspaper reporters have all these documents, the 50,000-whatever they have and are selling them and giving them out as if these — you know it just doesn’t make sense,” Alexander said in an interview in 2013. “We ought to come up with a way of stopping it. I don’t know how to do that. That’s more of the courts and the policymakers but, from my perspective, it’s wrong to allow this to go on.”

Facebook launched a new product called Campus, exclusively for college students. It’s a new section of the main app where students can interact only with their peers, and it requires a .edu address to access. I say open it up to everyone. Worked last time! (Ashley Carman / The Verge)

Ninja returned to Twitch with a new exclusive, multiyear deal. Last August, he left Twitch for an exclusive deal with Mixer — which shut down at the end of June. (Bijan Stephen / The Verge)

The Social Dilemma, the new Netflix documentary about the ills of big tech platforms, seems unclear on what exactly makes social media so toxic. It also oversimplifies the impact of social media on society as a whole. (Arielle Pardes / Wired)

You can make a deepfake without any coding experience in just a few hours. One of our reporters just did! (James Vincent / The Verge)

Things to do

Stuff to occupy you online during the quarantine.

Choose your own election adventure. Explore some worst-case scenarios with this, uh, “fun” new game from Bloomberg.

Subscribe to The Verge’s new weekly newsletter about the pandemic. Mary Beth Griggs’ Antivirus brings you “news from the vaccine and treatment fronts, and stories that remind us that there’s more to the case counts than just numbers.”

Subscribe to Kara Swisher’s new podcast for the New York Times. The first episode of her new interview show drops later this month.

Watch The Social Dilemma. The new social-networks-are-bad documentary is now on Netflix. People are talking about it!

And finally...

Talk to us

Send us tips, comments, questions, and an overview of how your algorithms work: casey@theverge.com and zoe@theverge.com.