Apple's SSL bug first reared its head on Friday, when a mysterious, urgent update began pouring out to iOS devices. From there, the news just got worse. It wasn't just an iOS bug, but a problem in Apple's Secure Transport platform, present in OS X 10.9 for desktop and reaching back to iOS 6 on mobile. As of press time, it's still unpatched on the Mac, although company reps say they are aware of the issue and "already have a software fix that will be released very soon." In a quote that was repeated over and over this weekend, Johns Hopkins cryptographer Matthew Green tweeted about the vulnerability, "It's seriously exploitable and not yet under control." So how bad is it, really?

"Seriously exploitable and not yet under control."

Though a fix has been issued for mobile devices, it's still a very big and very bad issue for Apple. We don’t know how many devices have received the update, although iOS users tend to update quickly — but beyond mobile, desktops running Mavericks are still completely exposed and waiting for an update. The core of the exploit targets your SSL connection, the encryption behind the little padlock you see in your browser window when visiting webmail or banking sites. The browser knows you’re really talking to the bank because it’s verified the site’s SSL certificate, a kind of proof of identity. But the failure in Apple’s code means Secure Transport isn’t checking the certificates properly, and anyone who wanted could masquerade as your banking site, your email, or worse.

The vulnerability includes FaceTime, Mail, and Calendar, some of the core elements of the Mac ecosystem

It starts with Safari, but it doesn’t stop there. According to researcher Ashkan Soltani, the vulnerability extends to every application built on Apple's SSL library, including FaceTime, Mail, and Calendar. They're some of the core elements of the Mac ecosystem. Not only are those apps exposed right now, but similar apps have been exposed on iOS since September of 2012, when iOS 6 was introduced. Soltani describes the exploit as "one of the most significant security vulnerabilities from a major company we've seen in a while," if only because of the sheer volume of users exposed.

"This is a more targeted attack ... a fairly clumsy one."

Luckily, the exploit itself may not be as bad as the scale might lead you to believe. To take advantage of the disabled SSL connection, an attacker would probably have to be within Wi-Fi distance, which limits the fallout significantly. As Columbia cryptographer Steve Bellovin tells The Verge, "Man-in-the-middle attacks aren't that easy to launch, and they don't scale well." It's possible to use the bug to tap in farther upstream, reaching more users at a higher level, but it's extremely difficult to perform consistently. As a result, it seems unlikely that this could have been used for the NSA’s bulk collection programs, as some have suggested. The reported exploits so far have been direct attacks — essentially a black-hat hacker sitting in a public coffee shop and sniffing the Wi-Fi signal of the dozen people around him. But even in that case, the attacker would probably do better going after Flash or any of a handful of applications that are known to be buggy and security-averse. That's no reason to dismiss the bug — update as soon as possible, and maybe stick to private Wi-Fi for the next few days — but it's unlikely this was particularly useful for NSA bulk collection. "This is a more targeted attack," Bellovin says, "a fairly clumsy one if done deliberately."

Even if there wasn't foul play, why didn't Apple catch the bug?

The bigger question is how a bug this bad made it through Apple's security features in the first place. Already, some observers are pointing to aspects of the offending code that seem set up to fail. Cryptocat's Nadim Kobeissi points to an if statement written without braces, alongside the pointlessly repeated "goto fail" line, as sloppy coding that makes the error especially hard to spot. At the same time, Bellovin says code coverage testing should have been able to catch the repeated line. Even if there wasn't foul play, why didn't Apple catch the bug?

The simpler, less satisfying answer has more to do with the quirks of software development at Apple's massive scale. One insider describes the OS X security framework as a company-wide kitchen sink, an old framework that's been adapted over and over again across different regimes and different products. New code means new bugs that need to be checked, so large portions of core components like Secure Transport can go untouched for huge stretches of time. What seems like an obvious fix for a one-man programming team is much more difficult when there are hundreds of coders involved. But even that doesn't answer the question entirely. Secure Transport was made open source with the Mavericks release, which means anyone inside or outside Apple should have been able to track down the faulty code. Some researchers complain about trouble reporting bugs to Apple, but surely something this serious would have warranted someone’s attention. In the end, it was a Google HTTPS engineer who pulled back the curtain, adding insult to injury.

Apple may not have been actively encouraging anyone to audit the code, but it also wasn’t stopping anyone. It's just that no one ever had a reason to look. Out of all the millions of lines of code we run every day, this one happened to be printed twice, leading to a cascading failure through millions of machines. For years, no one even noticed. As ecosystems get larger and more powerful, these failures are possible on an unprecedented scale. This time it happened to Apple, but it’s easy to imagine a similar bug slipping in at Google or Microsoft, and security engineers would have to hope that their auditing and reporting systems are good enough to catch it. After this weekend, they’ve seen what happens when they fail.

Update: On Tuesday, Apple released an update to OS X Mavericks and Mountain Lion to resolve the SSL vulnerability.