How Grindr became a national security issue

What happened when a Chinese company acquired the horniest social network

Photo by Leon Neal/Getty Images

Grindr is an app used primarily by gay men to find hookups in their immediate vicinity. With more than 27 million users, it’s so popular among its target audience that it has basically defined gay life for the past decade. In 2016, the American-made app was sold to a Chinese company called Beijing Kunlun Tech Co Ltd. And in an extraordinary move, first reported today by Reuters, the US government is now forcing Kunlun to sell the app on national security grounds.

Carl O’Donnell, Liana B. Baker, and Echo Wang report:

The Committee on Foreign Investment in the United States (CFIUS) has informed Kunlun that its ownership of West Hollywood, California-based Grindr constitutes a national security risk, the two sources said.

CFIUS’ specific concerns and whether any attempt was made to mitigate them could not be learned. The United States has been increasingly scrutinizing app developers over the safety of personal data they handle, especially if some of it involves U.S. military or intelligence personnel.

Last year, Kunlun announced plans for an initial public offering of Grindr. But CFIUS intervened, Reuters reported, and now Kunlun is attempting to sell the app off.

How did the world’s horniest social network become a national security issue? CFIUS wouldn’t comment — as one source tells Reuters, “doing so could potentially reveal classified conclusions by U.S. agencies.” But as a former Grindr user, I have some... informed speculation to share!

One, Grindr owns some of the most sensitive data about its users that a social network ever could: the filthiest chats they’ve ever sent, nude photos and videos, and also their real-time location, measured within yards. That’s all connected to a user’s email address, from which a user’s true identity might be easily learned.

The Chinese government has likely taken a significant interest in that data, which could be useful in targeting dissidents at home and for blackmail abroad. And because Kunlun is a Chinese company, there is likely nothing it could do to prevent the government from accessing user data.

Two, as the Reuters story hints, Grindr attracts users of all sorts — including members of the US military and likely its intelligence agencies. I can’t be the only Grindr user to have seen other users on the grid in military uniforms. It feels like only the slightest stretch to imagine China scouring the Grindr grid to understand American troop movements.

And if that sounds crazy, a dumb social app has given away troop movements before. Here’s Alex Hern, writing in The Guardian in 2018:

Sensitive information about the location and staffing of military bases and spy outposts around the world has been revealed by a fitness tracking company.

The details were released by Strava in a data visualisation map that shows all the activity tracked by users of its app, which allows people to record their exercise and share it with others.

Strava, of course, is an app that lets people track their runs and bicycle rides. When the company posted a map of popular routes for running and cycling, it inadvertently gave away national secrets. It eventually began allowing people to opt out of sharing their location.

With Grindr, of course, sharing your location is the whole point. The app orders your potential matches using only one criterion — how physically close they are to you. It’s easy to imagine Chinese intelligence scouring the app for potential military users for any number of reasons.
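Grindr hasn’t published how its grid is ordered beyond that single criterion, but the mechanics are easy to sketch. Here is a minimal, illustrative Python snippet of distance-only ranking; the coordinates, profile records, and function names below are invented for the example, not taken from Grindr:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in miles."""
    dlat = radians(lat2 - lat1)
    dlon = radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 3959 * asin(sqrt(a))  # mean Earth radius is about 3,959 miles

# Hypothetical nearby profiles: (screen name, latitude, longitude).
me = (34.090, -118.361)  # roughly West Hollywood, where Grindr is based
profiles = [
    ("userA", 34.101, -118.340),
    ("userB", 34.052, -118.244),
    ("userC", 34.091, -118.365),
]

# Distance is the only sort key: the physically closest profile comes first.
grid = sorted(profiles, key=lambda p: haversine_miles(*me, p[1], p[2]))
print([name for name, _, _ in grid])  # ['userC', 'userA', 'userB']
```

The sketch also shows why the data is so sensitive: security researchers have demonstrated in the past that the relative distances Grindr exposes, collected from a few different vantage points, are enough to pin down a user’s location quite precisely.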

It would be nice if the government took such a strong interest in data privacy in cases involving something other than national security. I stopped using Grindr in 2017 in part because I couldn’t imagine anything good coming out of having my location known to the Chinese government. But even if it took a military issue to grab regulators’ attention, I’m glad that in this case, they did the right thing.

Democracy

Facebook Bans White Nationalism and White Separatism

Facebook previously allowed white nationalist speech on the platform, but is changing its policy to prevent it, Joseph Cox and Jason Koebler reported. The company will also begin directing people who search for certain white-nationalist terms to a nonprofit group that works to de-radicalize people.

“We’ve had conversations with more than 20 members of civil society, academics, in some cases these were civil rights organizations, experts in race relations from around the world,” Brian Fishman, policy director of counterterrorism at Facebook, told us in a phone call. “We decided that the overlap between white nationalism, [white] separatism, and white supremacy is so extensive we really can’t make a meaningful distinction between them. And that’s because the language and the rhetoric that is used and the ideology that it represents overlaps to a degree that it is not a meaningful distinction.”

Specifically, Facebook will now ban content that includes explicit praise, support, or representation of white nationalism or separatism. Phrases such as “I am a proud white nationalist” and “Immigration is tearing this country apart; white separatism is the only answer” will now be banned, according to the company. Implicit and coded white nationalism and white separatism will not be banned immediately, in part because the company said it’s harder to detect and remove.

Ten European lawmakers say they voted against pivotal copyright amendment by accident

Here’s an unbelievable story from my colleague James Vincent on this week’s vote on the European Union’s Copyright Directive:

Official voting records published by the EU show that 13 MEPs have declared they accidentally voted the wrong way on this amendment. According to the record, 10 MEPs say they accidentally rejected the amendment when they meant to approve it, two MEPs accidentally approved the amendment, and one MEP says he intended not to vote at all.

If these MEPs had voted as they said they meant to, the amendment would have been approved by a slim majority. Then there would have been further votes on whether the law would include Articles 11 and 13 (renamed articles 15 and 17 in the final draft), though no one can say how those would have gone.

Fearful of fake news blitz, U.S. Census enlists help of tech giants

The US Census Bureau is asking Google, Facebook, and Twitter to help disrupt efforts to dissuade people from participating in the 2020 count, Nick Brown reports.

The push, the details of which have not been previously reported, follows warnings from data and cybersecurity experts dating back to 2016 that right-wing groups and foreign actors may borrow the “fake news” playbook from the last presidential election to dissuade immigrants from participating in the decennial count, the officials and sources told Reuters.

The sources, who asked not to be named, said evidence included increasing chatter on platforms like “4chan” by domestic and foreign networks keen to undermine the survey. The census, they said, is a powerful target because it shapes U.S. election districts and the allocation of more than $800 billion a year in federal spending.

Facebook says it’s limiting false stories for India election

Rishabh R. Jain checks in with Facebook ahead of India’s next election, which takes place in May.

Calling the Indian elections a “top priority,” Samidh Chakrabarti, director of Facebook’s Product Management for Civic Integrity division, said the company has put in a “tremendous amount of efforts over the last two years” to prepare for the polls.

He said Facebook has partnered with Indian media organizations to check and flag false stories in English, Hindi and some other regional Indian languages.

Google creates external advisory board to monitor it for unethical AI use

Here’s a move somewhat reminiscent of — although far less ambitious than — Facebook’s plan to create an independent review board for content moderation decisions. It’s another case where a tech giant is seeking to devolve at least some power to an outside group. (A cynic might say it’s also a way to get a fig leaf of credibility for a technology whose consequences we will almost surely not be able to predict or control.) Anyway, here’s Nick Statt:

Google today announced a new external advisory board to help monitor the company’s use of artificial intelligence for ways in which it may violate ethical principles it laid out last summer. The group was announced by Kent Walker, Google’s senior vice president of global affairs, and it includes experts on a wide-ranging series of subjects, including mathematics, computer science, engineering, philosophy, public policy, psychology, and even foreign policy.

The group will be called the Advanced Technology External Advisory Council, and it appears Google wants it to be seen as a kind of independent watchdog keeping an eye on how it deploys AI in the real world, with a focus on facial recognition and the mitigation of built-in bias in machine learning training methods. “This group will consider some of Google’s most complex challenges that arise under our AI Principles … providing diverse perspectives to inform our work,” Walker writes.

Apple’s push into subscriptions raises new competition concerns, antitrust experts say

Cat Zakrzewski looks into whether Apple’s new subscription products will help regulators build an antitrust case against it:

Apple’s own history also raises competition concerns as it pushes into new services, experts tell me. Generally, a company’s entrance to new areas of business results in more competition in the marketplace, said Chris Sagers, a professor of law at Cleveland State University. But there can be problems when a company does things to “ease its entry that restrain existing competition.”

“In fact, Apple has built up a bit of a record of conduct showing that Apple’s entry is not always good,” Sagers told me.

Elsewhere

Don’t Change Your Birth Year To 2007 On Twitter, Or You’ll Get Blocked Like Me

“On Monday, some Twitter users began circulating a rumor that changing your birth year to 2007 on the social media service would unlock new color schemes.” So begins Ryan Mac’s chilling tale of falling for a prank and getting locked out of Twitter. You have to be 13 to use the service — and if you change your birth year to 2007, Twitter will block your access.

Twitter started automatically blocking users who list themselves as under 13 when the European Union implemented its General Data Protection Regulation (GDPR) last May, a company spokesperson told me — but not before he confirmed that I was actually blocked and had a long, hearty laugh to himself. GDPR, a set of laws that are meant to give users more control of their data, requires that children obtain verifiable consent from a parent or guardian to use internet services or visit websites that process personal information.
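Twitter hasn’t described how the gate is implemented, but the behavior Mac reports is consistent with a simple year-difference check. Here is a minimal sketch in Python; the function name and the exact comparison are assumptions for illustration, not Twitter’s actual code:

```python
from datetime import date

MINIMUM_AGE = 13  # the age floor the story describes

def violates_age_floor(birth_year: int, today: date) -> bool:
    # Naive year-difference gate: a 2007 birth year in March 2019
    # yields 2019 - 2007 = 12, which is under 13, so the account locks.
    return today.year - birth_year < MINIMUM_AGE

print(violates_age_floor(2007, date(2019, 3, 27)))  # True: the prank locks you out
print(violates_age_floor(2005, date(2019, 3, 27)))  # False: at least 13 by year math
```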

Facebook rolls out ‘Whitehat Settings’ to help bug hunters analyze traffic in its mobile apps

Catalin Cimpanu reports that Facebook is offering a new setting to security researchers that makes it easier to analyze traffic in its mobile apps for bad behavior. I don’t fully understand what this setting enables, so let me know if you have thoughts!

Launches

Twitch launches a four-person ‘Squad Stream’ feature to help creators get discovered

Twitch now lets four creators stream from a single window, which could help them get discovered when they collaborate with creators who have larger or different audiences than their own.

Takes

Can We Block a Shooter’s Viral Aspirations?

Charlie Warzel is asked whether Facebook should put live streams on a tape delay to discourage acts like the New Zealand shooting:

I’ve seen this “tape delay” idea debated in the last few days and it’s an interesting one. In practice, though, it seems to be quite difficult to carry out. For example, do you add an upload lag to all videos or just those from certain accounts? If it’s all videos, does that mean the videos ought to be flagged by artificial intelligence for potential violence? On Wednesday evening, Facebook argued that its flagging systems, which are adequate for screening and catching nudity and certain violent imagery, would most likely deliver false positives on more innocuous videos as well.

So what about human moderators? The sophistication of the internet’s worst communities seems to necessitate human moderation to parse the innocent pranks from the insidious trolling. Well-trained moderators with adequate time to pore over videos could suss out satire from hate speech and parse cultural standards and norms that might cause a video to be innocuous in one region and deeply offensive in another. But, as some great reporting has revealed recently, moderators tend to be outside contractors subjected daily to torrents of psychologically traumatizing content, often without the support or pay they deserve. Rather than spend time with a video, they’re forced to pass judgment in a matter of seconds. Still, they’re far more expensive than an algorithm and far less efficient, which is why tech companies tend to prefer deeply imperfect A.I. solutions.

And finally ...

Why Is Silicon Valley So Obsessed With the Virtue of Suffering?

Nellie Bowles has a hilarious piece about the lengths to which Silicon Valley titans will go to make themselves uncomfortable:

“We’re kept in constant comfort,” said Kevin Rose, the founder of Digg, in an interview on Daily Stoic, a popular blog for the tech-Stoic community. Mr. Rose said he tries to incorporate practices in his life that “mimic” our ancestors’ environments and their daily challenges: “This can be simple things like walking in the rain without a jacket or wearing my sandals in the December snow when I take the dog out in the mornings.”

Kevin you are going to catch a cold!!

Talk to me

Send me tips, comments, questions, and Grindr chats: casey@theverge.com.