
Answering the 12 biggest questions about Apple and Google’s new coronavirus tracking project

What the technical documents tell us about the project’s privacy and security measures


A contact tracing app distributed in Singapore. Photo by CATHERINE LAI/AFP via Getty Images

On Friday, Google and Apple joined together for an ambitious emergency project, laying out a new protocol for tracking the ongoing coronavirus outbreak. It’s an urgent, complex project, with huge implications for privacy and public health. Similar projects have been successful in Singapore and other countries, but it remains to be seen whether US public health agencies would be able to manage such a project — even with the biggest tech companies in the world lending a hand.

We covered the basic outlines of the project here, but there is a lot more to dig into — starting with the technical documents published by the two companies. They reveal a lot about what Apple and Google are actually trying to do with this sensitive data, and where the project falls short. So we dove into those documents and tried to answer the twelve most pressing questions, starting at the absolute beginning:

What does this do?

When someone gets sick with a new disease like this year’s coronavirus, public health workers try to contain the spread by tracking down and quarantining everyone that infected person has been in contact with. This is called contact tracing, and it’s a crucial tool in containing outbreaks.

The system records points of contact without using location data

Essentially, Apple and Google have built an automated contact tracing system. It’s different from conventional contact tracing, and probably most useful when combined with conventional methods. Most importantly, it can operate at a far greater scale than conventional contact tracing, which will be necessary given how far the outbreak has spread in most countries. Because it’s coming from Apple and Google, some of this functionality will also eventually be built into Android and iOS at the operating system level. That makes this technical solution potentially available to more than three billion phones around the world — something that would be impossible otherwise.

It’s important to note that what Apple and Google are working on together is a framework and not an app. They’re handling the plumbing and guaranteeing the privacy and security of the system, but leaving the building of the actual apps that use it to others.

How does it work?

In basic terms, this system lets your phone log other phones that have been nearby. As long as this system is running, your phone will periodically blast out a small, unique, and anonymous piece of code, derived from that phone’s unique ID. Other phones in range receive that code and remember it, building up a log of the codes they’ve received and when they received them.

When a person using the system receives a positive diagnosis, they can choose to submit their ID code to a central database. When your phone checks back with that database, it runs a local scan to see whether any of the codes in its log match the IDs in the database. If there’s a match, you get an alert on your phone saying you’ve been exposed.

That’s the simple version, but you can already see how useful this kind of system could be. In essence, it lets you record points of contact (that is, the exact thing contact tracers need) without collecting any precise location data and maintaining only minimal information in the central database.
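
To make that flow concrete, here’s a toy sketch of the broadcast-log-match loop in Python. Everything in it is our illustration: the Phone class, the purely random 16-byte codes, and the upload step are stand-ins for the real protocol, whose codes are derived cryptographically (more on that below).

```python
import os

# Toy model of the exchange: each phone broadcasts rotating anonymous
# codes, logs the codes it hears, and later checks its log against a
# published list. Random codes stand in for the real derived ones.
class Phone:
    def __init__(self):
        self.sent = []    # codes this phone has broadcast
        self.heard = []   # codes received from phones nearby

    def broadcast_code(self) -> bytes:
        code = os.urandom(16)   # small, unique, anonymous
        self.sent.append(code)
        return code

    def receive(self, code: bytes):
        self.heard.append(code)

    def check_exposure(self, published: set) -> bool:
        # The match is computed locally, on the phone itself.
        return any(code in published for code in self.heard)

alice, bob = Phone(), Phone()
bob.receive(alice.broadcast_code())     # the two phones crossed paths
published = set(alice.sent)             # Alice tests positive and uploads
print(bob.check_exposure(published))    # True -> Bob gets an exposure alert
```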

How do you submit that you’ve been infected?

The released documents are less detailed on this point. It’s assumed in the spec that only legitimate healthcare providers will be able to submit a diagnosis, to ensure only confirmed diagnoses generate alerts. (We don’t want trolls and hypochondriacs flooding the system.) It’s not entirely clear how that will happen, but it seems like a solvable problem, whether it’s managed through the app or some sort of additional authentication before an infection is centrally registered.

How does the phone send out those signals?

The short answer is: Bluetooth. The system is working off the same antennas as your wireless earbuds, although it’s the Bluetooth Low Energy (BLE) version of the spec, which means it won’t drain your battery quite as noticeably. This particular system uses a version of the BLE Beacon system that’s been in use for years, modified to work as a two-way code swap between phones.

The workflow for broadcasting codes over Bluetooth, as displayed in the system’s Bluetooth spec
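
For a sense of what’s actually in the air, here is a rough sketch of one of those advertising packets. The 16-bit service UUID 0xFD6F comes from the published Bluetooth spec; the byte layout below is our simplification, not a faithful reproduction of it.

```python
import struct

SERVICE_UUID = 0xFD6F   # the contact detection service UUID from the spec

def build_advertisement(rolling_code: bytes) -> bytes:
    """Assemble a simplified BLE advertising payload carrying one code."""
    assert len(rolling_code) == 16
    # Each field is an "AD structure": a length byte, a type byte, a payload.
    flags = bytes([0x02, 0x01, 0x1A])                          # general discoverable
    uuids = bytes([0x03, 0x03]) + struct.pack("<H", SERVICE_UUID)
    service_data = (bytes([3 + len(rolling_code), 0x16]) +
                    struct.pack("<H", SERVICE_UUID) + rolling_code)
    return flags + uuids + service_data

adv = build_advertisement(bytes(16))
print(len(adv), "bytes")   # 27 -- comfortably under BLE's 31-byte legacy limit
```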

How far does the signal reach?

We don’t really know yet. In theory, BLE can register connections as far as 100 meters away, but it depends a lot on specific hardware settings and it’s easily blocked by walls. Many of the most common uses of BLE — like pairing an AirPods case with your iPhone — have an effective range that’s closer to six inches. Engineers on the project are optimistic that they can tweak the range at the software level through “thresholding” — essentially, discarding lower-strength signals — but since there’s no actual software yet, most of the relevant decisions have yet to be made.
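
Here’s what thresholding could look like in its simplest form: keep a Bluetooth sighting only if its received signal strength (RSSI) clears a cutoff. The -70 dBm value below is a placeholder of our own choosing; as noted above, the real tuning decisions haven’t been made.

```python
# RSSI is measured in dBm: closer to zero means a stronger (closer) signal.
RSSI_CUTOFF_DBM = -70   # placeholder value, not from the spec

def keep_sighting(rssi_dbm: float) -> bool:
    # Discard lower-strength signals as "probably too far away to matter".
    return rssi_dbm >= RSSI_CUTOFF_DBM

readings = [-55, -68, -81, -90]                    # strongest to weakest
print([r for r in readings if keep_sighting(r)])   # [-55, -68]
```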

At the same time, we’re not entirely sure what the best range is for this kind of alert. Social distancing rules typically recommend staying six feet away from others in public, but that could easily change as we learn more about how the novel coronavirus spreads. Officials will also be wary of sending out so many alerts that the app becomes useless, which could make the ideal range even smaller.

So it’s an app?

Sort of. In the first part of the project (slated to be finished by mid-May), the system will be built into official public health apps, which will send out the BLE signals in the background. Those apps will be built by state-level health agencies, not tech companies, which means the agencies will be in charge of a lot of important decisions about how to notify users and what to recommend if a person has been exposed.

Eventually, the team hopes to build that functionality directly into the iOS and Android operating systems, similar to a native dashboard or a toggle in the Settings menu. But that will take months, and it will still prompt users to download an official public health app if they need to submit information or receive an alert.

Is this really secure?

Mostly, it seems like the answer is yes. Based on the documents published Friday, it would be pretty hard to work back to any sensitive information from the Bluetooth codes alone, which means you can run the app in the background without worrying that you’re compiling anything potentially incriminating. The system itself doesn’t personally identify you and doesn’t log your location. Of course, the health apps that use the system will eventually need to know who you are if you want to upload your diagnosis to health officials.

Could hackers use this system to make a big list of everybody who has had the disease?

This would be very difficult, but not impossible. The central database stores all the codes sent out by infected people while they were contagious (that’s what your phone is checking against), and it’s entirely plausible that a bad actor could get those codes. The engineers have done a good job ensuring that you can’t work directly from those codes to a person’s identity, but it’s possible to envision some scenarios in which those protections break down.

A diagram from the cryptography white paper, explaining the three levels of keys

To explain why, we have to get a bit more technical. The cryptography spec lays out three levels of keys for this system: a private master key that never leaves your device, a daily tracing key generated from the private key, and then the string of “proximity IDs” generated by the daily key. Each of these steps is performed through a cryptographically robust one-way function — so you can generate a proximity ID from a daily key, but not the other way around. More importantly, you can see which proximity IDs came from a specific daily key, but only if you start with the daily key in hand.
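
Here is a rough Python sketch of that three-level chain, modeled on the white paper. The “CT-DTK” and “CT-RPI” labels and the 16-byte outputs follow the published spec, but the details (the HKDF construction especially) are simplified, so treat this as an approximation rather than a reference implementation.

```python
import hashlib, hmac, os, struct

def hkdf_sha256(key: bytes, info: bytes, length: int) -> bytes:
    # Minimal single-block HKDF (RFC 5869) with an all-zero salt.
    prk = hmac.new(b"\x00" * 32, key, hashlib.sha256).digest()
    return hmac.new(prk, info + b"\x01", hashlib.sha256).digest()[:length]

tracing_key = os.urandom(32)   # the private master key; never leaves the device

def daily_tracing_key(day_number: int) -> bytes:
    # One-way: derivable from the master key, but not reversible.
    return hkdf_sha256(tracing_key, b"CT-DTK" + struct.pack("<I", day_number), 16)

def proximity_id(daily_key: bytes, interval: int) -> bytes:
    # Also one-way: each short time window gets a fresh 16-byte ID.
    mac = hmac.new(daily_key, b"CT-RPI" + struct.pack("<B", interval), hashlib.sha256)
    return mac.digest()[:16]

dtk = daily_tracing_key(18365)      # day number: days since the Unix epoch
print(proximity_id(dtk, 0).hex())   # what the phone would broadcast
```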

The log on your phone is a list of proximity IDs (the lowest level of key), so on its own it isn’t much good to anyone. If you test positive, you share more: the daily keys for every day you were contagious. Because those daily keys are now public, your device can do the math and tell you if any of the proximity IDs in your log came from them; if any did, it generates an alert.
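
Continuing the sketch above, the matching step is just a set comparison: regenerate every proximity ID a published daily key could have produced and intersect that with the local log. The 144 windows per day reflect the spec’s ten-minute intervals; the helper names are ours.

```python
# Reuses proximity_id(), dtk, and os from the previous sketch.
def find_matches(published_daily_keys: list, local_log: list) -> list:
    derived = {
        proximity_id(key, interval)
        for key in published_daily_keys
        for interval in range(144)   # 144 ten-minute windows in a day
    }
    return [code for code in local_log if code in derived]

local_log = [proximity_id(dtk, 42), os.urandom(16)]   # one real contact, one stranger
print(len(find_matches([dtk], local_log)))            # 1 -> exposure alert
```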

As cryptographer Matt Tait points out, this leads to a meaningful privacy reduction for people who test positive on this system. Once those daily keys are public, anyone can find out which proximity IDs are associated with a given daily key. (Remember, that’s exactly what the app is supposed to do in order to confirm exposure.) While specific applications can limit the information they share and I’m sure everyone will do their best, you’re now outside the hard protections of encryption. It’s possible to imagine a malicious app or Bluetooth-sniffing network that collects proximity IDs in advance, connects them to specific identities, and later correlates them with daily keys scraped from the central list. It would be hard to do this, and it would be even harder to do it for every single person on the list. Even then, all you would get from the server is the last 14 days’ worth of codes. (That’s all that’s relevant to contact tracing, so it’s all the central database stores.) But it wouldn’t be flatly impossible, and flat impossibility is usually what you’re going for in cryptography.

To sum it up: it’s hard to absolutely guarantee someone’s anonymity if they share that they’ve tested positive through this system. But in the system’s defense, this is a difficult guarantee to make under any circumstances. Under social distancing, we’re all limiting our personal contacts, so if you learn you were exposed on a particular day, the list of potential vectors will already be fairly short. Add in the quarantine and sometimes hospitalization that come with a COVID-19 diagnosis, and it’s very difficult to keep medical privacy completely intact while still warning people who may have been exposed. In some ways, that tradeoff is inherent to contact tracing. Tech systems can only mitigate it.

Plus, the best method of contact tracing we have right now involves humans interviewing you and asking who you’ve been in contact with. It’s basically impossible to build a completely anonymous contact tracing system.

Could Google, Apple, or a hacker use it to figure out where I’ve been?

Only under very specific circumstances. If someone is collecting your proximity IDs and you test positive and decide to share your diagnosis and they perform the whole rigamarole described above, they could potentially use it to link you to a specific location where your proximity IDs had been spotted in the wild.

But it’s important to note that neither Apple nor Google is sharing information that could directly place you on a map. Google has a lot of that information, and the company has shared it at an aggregated level, but it’s not part of this system. Google and Apple may know where you are already, but they’re not connecting that information to this dataset. So while an attacker might be able to work back to that information, they would still end up knowing less than most of the apps on your phone.

Could someone use this to figure out who I’ve been in contact with?

This would be significantly more difficult. As mentioned above, your phone is keeping a log of all the proximity IDs it receives, but the spec makes clear that the log should never leave your phone. As long as your specific log stays on your specific device, it’s protected by the same device encryption that protects your texts and emails.

Even if a bad actor stole your phone and managed to break through that security, all they would have are the codes you received, and it would be very difficult to figure out who those codes originally came from. Without a daily key to work from, they would have no clear way to correlate one proximity ID with another, so it would be difficult to distinguish a single actor in the mess of Bluetooth traffic, much less figure out who was meeting with whom. And crucially, the robust cryptography makes it impossible to directly derive the associated daily key or the private master key behind it.

What if I don’t want my phone to do this?

Don’t install the app, and when the operating systems update over the summer, just leave the “contact tracing” setting toggled off. Apple and Google insist that participation is voluntary, and unless you take proactive steps to participate in contact tracing, you should be able to use your phone without getting involved at all.

Is this just a surveillance system in disguise?

This is a tricky question. In a sense, contact tracing is surveillance. Public health work is full of medical surveillance, simply because it’s the only way to find infected people who aren’t sick enough to go to a doctor. The hope is that, given the catastrophic damage already done by the pandemic, people will be willing to accept this level of surveillance as a temporary measure to stem further spread of the virus.

A better question is whether this system is conducting surveillance in a fair or helpful way. It matters a lot that the system is voluntary, and it matters a lot that it doesn’t share any more data than it needs to. Still, all we have right now is the protocol, and it remains to be seen whether governments will try to implement this idea in a more invasive or overbearing way.

As the protocol gets implemented in specific apps, there will be a lot of important decisions about how it gets used, and how much data gets collected outside of it. Governments will be making those decisions, and they may make them badly — or worse, they may not make them at all. So even if you’re excited about what Apple and Google have laid out here, they can only throw the ball — and there’s a lot riding on what governments do after they catch it.