The breach that killed Google+ wasn’t a breach at all

Illustration by Alex Castro / The Verge

For months, Google has been trying to stay out of the way of the growing tech backlash, but yesterday, the dam finally broke with news of a bug in the rarely used Google+ network that exposed private information for as many as 500,000 users. Google found and fixed the bug back in March, around the same time the Cambridge Analytica story was heating up in earnest. But with the news breaking now, the damage is already spreading. The consumer version of Google+ is shutting down, privacy regulators in Germany and the US are already looking into possible legal action, and former SEC officials are publicly speculating about what Google may have done wrong.

The vulnerability itself seems to have been relatively small in scope. The heart of the problem was a specific developer API that could be used to see non-public information. But crucially, there’s no evidence that it actually was used to see private data, and given the thin user base, it’s not clear how much non-public data there really was to see. The API was theoretically accessible to anyone who asked, but only 432 people actually applied for access (again, it’s Google+), so it’s plausible that none of them ever thought of using it this way.

The bigger problem for Google isn’t the crime, but the cover-up. The vulnerability was fixed in March, but Google didn’t come clean until seven months later when The Wall Street Journal got hold of some of the memos discussing the bug. The company seems to know it messed up — why else nuke an entire social network off the map? — but there’s real confusion about exactly what went wrong and when, a confusion that plays into deeper issues in how tech deals with this kind of privacy slip.

Part of the disconnect comes from the fact that, legally, Google is in the clear. There are lots of laws about reporting breaches — primarily the GDPR but also a string of state-level bills — but by that standard, what happened to Google+ wasn’t technically a breach. Those laws are concerned with unauthorized access to user information, codifying the basic idea that if someone steals your credit card or phone number, you have a right to know about it. But Google just found that data was available to developers, not that any data was actually taken. With no clear data stolen, Google had no legal reporting requirements. As far as the lawyers were concerned, it wasn’t a breach, and quietly fixing the problem was good enough.

There is a real case against disclosing this kind of bug, although it’s not quite as convincing in retrospect. All systems have vulnerabilities, so the only good security strategy is to be constantly finding and fixing them. As a result, the most secure software will be the one that’s discovering and patching the most bugs, even if that might seem counterintuitive from the outside. Requiring companies to publicly report each bug could be a perverse incentive, punishing the products that do the most to protect their users.

(Of course, Google has been aggressively disclosing other companies’ bugs for years under Project Zero, which is part of why critics are so eager to jump on the apparent hypocrisy. But the Project Zero crew would tell you that third-party reporting is a completely different dance, with disclosure typically used as an incentive for patching and as a reward for white-hat bug-hunters looking to build their reputation.)

That logic makes more sense for software bugs than social networks and privacy issues, but it’s accepted wisdom in the cybersecurity world, and it’s not a stretch to say it guided Google’s thinking in trying to keep this bug under wraps.

But after Facebook’s painful fall from grace, the legal and the cybersecurity arguments seem almost beside the point. The contract between tech companies and their users feels more fragile than ever, and stories like this one stretch it even thinner. The concern is less about a breach of information than a breach of trust. Something went wrong, and Google didn’t tell anyone. Absent the Journal reporting, it’s not clear it ever would have. It’s hard to avoid the uncomfortable, unanswerable question: what else isn’t it telling us?

It’s too early to say whether Google will face a real backlash for this. If anything, the small number of affected users and the relative unimportance of Google+ suggest it won’t. But even if this vulnerability was minor, failures like this pose a real threat to users and a real danger to the companies they trust. The confusion about what to call it — a bug, a breach, a vulnerability — covers up a deeper confusion about what companies actually owe their users, when a privacy failure is meaningful, and how much control we really have. These are crucial questions for this era of tech, and if the last few days are any indication, they’re questions the industry is still figuring out.


It’s interesting that, in this space, Apple has taken a deeply moral position of safeguarding user privacy even when it arguably compromises its ability to keep up with the competition (Siri, Maps, computational photography). I expect they’re playing a long game to win public confidence, but I find myself increasingly sympathetic to their approach.

Guessing they know it’s something the others (Google, Facebook, Amazon) can’t match them on, and it will just mean increasing sales as time goes on.

The day Apple stops hindering investigations like the San Bernardino shooting while dedicating a data center to the People’s Liberation Army, your comment will make sense. Till then, it is just that: another instance of the uber marketeer getting away with murder.

With this cover up though, Google, which has so far taken a lot more principled stance (like withdrawing from China in 2010, thereby losing billions upon billions), joins the ranks of duplicitous companies like Apple.

holy naivety batman. These are publicly traded companies.

Apple has always been, at its core, a seller of devices. When the surveillance capitalism model promulgated by Google, Facebook, Twitter, et al. came into vogue, Apple had to decide if it wanted to play in that space. We don’t know what the discussion was, but we do know that Apple did what it has mostly always done – stick to its knitting and stay internally focused on what it does best.

In the last few years it has become very clear to all but the Google-Fans and Facebook-Fans that surveillance capitalism has serious problems and real-world negative repercussions. Also during this period governments and intelligence agencies realized the treasure trove of info these companies developed. Probably during that time Apple hardened its position in favor of its users (which is the company’s nature), and decided to limit as much as possible the info it gathered while simultaneously making its devices as secure as possible.

This has allowed the company to differentiate itself from Android and Google products in general. And I do suspect that for many within Apple this is also an ethical decision (note: not a moral one). And it is a long game they are playing.

This is a big part of what pulled my family into the Apple ecosystem – our ethics are in alignment. We find surveillance capitalism abhorrent and want nothing to do with it.

Thank you for this article; it is the most level-headed take I have seen on the situation. This is categorically not a data breach. This is the fixing of a minor security issue that, by all measures, was never exploited. This sort of thing happens all the time in every company that builds technology, and it isn’t disclosed. It isn’t expected to be disclosed.

The narrative that Google is a villain has been well fostered by PR agencies at competing companies. The reaction to something like this shows the power of that campaign. Propaganda works and it’s everywhere.

At least it’s closer to an actual breach than the "Google allows you to use third party email clients, and we use a lot of scare quotes when saying it" stories. =_=

The problem is not the bug, or security breach, whatever you prefer calling it. The bigger problem is that Google has been repeatedly getting caught lying to its users and the lawmakers. Remember, this is one company that needs our trust to grow, which also abandoned its "don’t be evil" motto. As someone who still has a lot of friends working there, I don’t trust them a bit.

not telling != lying

Tell that to people who admonish government regulation or lawmaker intervention.

Cambridge Analytica was a similar issue. Hell, Facebook’s issue wasn’t even a bug, just people blindly giving away personal data through Facebook Login.
The difference here is that Facebook has a billion users, while Google+… well, you know.

Casting this issue as a question of whether or not Google is a "villain" misses the real issue.

In this instance, Google made a mistake: it would have been wiser to get ahead of this (potential?) breach by announcing it first. Instead it looks like the cover-up it was, and it makes the company look like a hypocrite, given its love of announcing other companies’ vulnerabilities.

Don’t be evil. About that.

But the deeper issue is that this is never going to stop for Google. Its entire business model is based on gathering and exploiting the very private and personal data of the world’s population, and to the extent that both state- and non-state actors, working for power or profit, will continue to want to use and exploit that data, Google will continue to have problems.

And some of those problems, like the 2016 US elections, will be very serious indeed.

But the deeper issue is that this is never going to stop for Google. Its entire business model is based on gathering and exploiting the very private and personal data of the world’s population, and to the extent that both state- and non-state actors, working for power or profit, will continue to want to use and exploit that data, Google will continue to have problems.

I think this is an existential issue for all consumer data businesses. Google is not even the worst of these. Companies like Experian and Equifax are infinitely worse because they take very private info, as non-consensual third parties, and their data affects critical life decisions. What I give Google is not going to prevent me from getting a mortgage or buying a car in the USA, but these credit bureaus not only have very powerful influence over my life, they make money with my data and get away with basically a slap on the wrist when criminals obtain all their user data as a result of their consistently bad data security practices. To add insult to injury, they charge us money for trying to mitigate this data theft.

What the hell.

I agree with you, and those companies have been doing this for 100 years.

It’s worth noting that Google also doesn’t ask for people’s consent. Sure, if you actually use a product like Gmail you’re signing a TOS, but Google (and Facebook, et al.) also track people’s data through browsers without their consent.

Not exactly the same: Facebook actually builds shadow profiles. There’s no record that Google does the same, even if it does collect data on people through sites using its tracking tools. The difference is that Google’s statistics are aggregated and not used to infer profiles of non-user individuals, whereas Facebook’s are.

Interesting take, considering yesterday’s Verge article called it a breach.

I’m actually surprised Google said anything about this. I’d expect similar bugs have arisen at other companies, which were merely fixed and moved on.

It is interesting. I wonder if they will do a retraction for that. They should. They are needlessly slandering a company for fixing a security issue.

Actually… did they call it a breach? Unsure, and my edit time is running out!

Yes, and the headline is idiotic. The Verge should never have called it a breach in the first place. It was a bug.

A bit stunned that Google closed the hole months ago (first time I heard that) but then shut the network off without warning when this became public.

All the dumb folks who connected via groups on there (supposedly somewhat popular with scientists) just woke up to find that disappearing, when Google could have spun it down a lot slower so it’d be nicer for the few users who used it. Obviously the same execs who thought working with the Chinese govt was a good move (tossing away that "good" PR) were involved in this decision as well.

Wut? Google+ is still up. It’s not going to disappear (for consumers) until August of next year.

I’m not sure that I agree with this.

If I understand correctly, it seems that an API that any developer or hacker could use in malicious software could provide access to a lot of private data from users of Google+.

Even though nothing was stolen because nobody uses G+, it’s still a security issue, no matter how few users are affected.

Google is so quick to raise security alerts about its competitors. So yeah, it’s totally hypocritical.

Maybe it’s hypocrisy, or maybe it’s best IT practice; what’s for sure is that the court of public opinion is dumb.

Nine years ago, former Google CEO Eric Schmidt said:

If you have something that you don’t want anyone to know, maybe you shouldn’t be doing it in the first place.