The iPhone X’s notch is basically a Kinect

Sometimes it's hard to tell exactly how fast technology is moving. "We put a man on the moon using the computing power of a handheld calculator," as Richard Hendricks reminds us in Silicon Valley. In 2017, I use my pocket supercomputer of a phone to tweet with brands.

But Apple's iPhone X provides a nice little illustration of how sensor and processing technology has evolved in the past decade. In June 2009, Microsoft unveiled this:

In September 2017, Apple put all that tech in this:

Well, minus the tilt motor.

Microsoft's original Kinect hardware was powered by a little-known Israeli company called PrimeSense. PrimeSense pioneered the technique of projecting a grid of infrared dots onto a scene, then detecting them with an IR camera and ascertaining depth information through a special processing chip.

The output of the Kinect was a 320 x 240 depth map with 2,048 levels of sensitivity (distinct depths), based on the 30,000-ish laser dots the IR projector blasted onto the scene in a proprietary speckle pattern.
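For a rough sense of what that output looks like as data, here's a hypothetical sketch in Swift. The 320 x 240 resolution and 2,048 (i.e., 11-bit) depth levels are the Kinect's published figures; the struct itself is purely illustrative, not PrimeSense's or Microsoft's actual format.

```swift
// Hypothetical representation of a single Kinect-style depth frame.
struct DepthFrame {
    static let width = 320
    static let height = 240
    static let depthLevels = 1 << 11   // 2,048 distinct depth values, i.e. 11 bits per pixel

    // One 16-bit word per pixel; only the low 11 bits carry depth.
    var pixels = [UInt16](repeating: 0, count: DepthFrame.width * DepthFrame.height)

    func depth(x: Int, y: Int) -> UInt16 {
        pixels[y * DepthFrame.width + x] & UInt16(DepthFrame.depthLevels - 1)
    }
}
```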

In its day, the Kinect was the fastest-selling consumer electronics device of all time, even as it was widely regarded as a flop for gaming. But the revolutionary depth-sensing tech ended up being a huge boost for robotics and machine vision.

In 2013, Apple bought PrimeSense. Depth cameras continued to evolve: Kinect 2.0 for the Xbox One replaced PrimeSense technology with Microsoft's own tech and had much higher accuracy and resolution. It could recognize faces and even detect a player's heart rate. Meanwhile, Intel also built its own depth sensor, Intel RealSense, and in 2015 worked with Microsoft to power Windows Hello. In 2016, Lenovo launched the Phab 2 Pro, the first phone to carry Google's Tango technology for augmented reality and machine vision, which is also based on infrared depth detection.

And now, in late 2017, Apple is going to sell a phone with a front-facing depth camera. Unlike the original Kinect, which was built to track motion in a whole living room, the sensor is primarily designed for scanning faces and powers Apple’s Face ID feature. Apple’s “TrueDepth” camera blasts “more than 30,000 invisible dots” and can create incredibly detailed scans of a human face. In fact, while Apple’s Animoji feature is impressive, the developer API behind it is even wilder: Apple generates, in real time, a full animated 3D mesh of your face, while also approximating your face’s lighting conditions to improve the realism of AR applications.
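To give a sense of how that mesh reaches developers, here is a minimal sketch assuming ARKit's face-tracking API (ARFaceTrackingConfiguration, ARFaceAnchor, and the per-frame light estimate); the rendering details are illustrative, not Apple's sample code.

```swift
import UIKit
import SceneKit
import ARKit

final class FaceMeshViewController: UIViewController, ARSCNViewDelegate {
    @IBOutlet var sceneView: ARSCNView!

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        // Face tracking requires the TrueDepth camera, so it only runs on supported hardware.
        guard ARFaceTrackingConfiguration.isSupported else { return }
        sceneView.delegate = self
        sceneView.session.run(ARFaceTrackingConfiguration())
    }

    // Called whenever ARKit updates the tracked face anchor, frame by frame.
    func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
        guard let faceAnchor = anchor as? ARFaceAnchor else { return }

        // A full 3D mesh of the user's face, regenerated in real time.
        let mesh = faceAnchor.geometry
        _ = mesh.vertices   // vertex positions you could feed into your own renderer or AR effect

        // ARKit also estimates the lighting on the face to make AR content look more realistic.
        if let light = sceneView.session.currentFrame?.lightEstimate {
            _ = light.ambientIntensity
        }
    }
}
```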

PrimeSense was never solely responsible for the technology in Microsoft’s Kinect — as evidenced by the huge improvements Microsoft made to Kinect 2.0 on its own — and it’s also obvious that Apple is doing plenty of new software and processing work on top of this hardware. But the basic idea of the Kinect is unchanged. And now it’s in a tiny notch on the front of a $999 iPhone.

Comments

Straight up bonkers to allow any device to know your face. Not only is the security safeguard an illusion… it's just weird. Really, really… really weird.

But here's what's going to happen. Your phone will be hacked, your face will be stolen, and then used to mask the face of a criminal. And you, you'll be forced to prove that it wasn't actually you using your face. All so you can have the weird AF gimmick of your phone knowing your face.

Right, just like everyone’s fingerprint was hacked and stolen!!!! Oh wait, that never happened.

Apple is storing your facial recognition data the same way it securely stored fingerprint data. Jeez, please stop with these crazy theories. Yours is almost as bad as the "robbers will cut off my finger to unlock my iPhone!"

Wasn't hacked, just now on a file somewhere, so when the enslavement of humanity (CHIP) is to happen, your fingerprint represents a number; i.e. census. If you don't show up for the implant, it'll be known you aren't there; death certificates checked against fingerprints; you're not dead, then you're hiding.

Forgive me. I’m a fiction writer.

It's stored locally. It isn't on file anywhere. In fact, when you get a new phone, there's no way to carry the information over; it's siloed to that phone. I imagine Face ID will be the exact same way.

They actually aren't even storing your fingerprint or your face. What's stored is only a series of hashes resulting from the scan, which is what gets compared. I'm not sure how useful that data is, but you can't reconstruct a fingerprint or face from it.

I’m not sure how useful that data is, but you can’t reconstruct a fingerprint or face from it.

You're approaching it the wrong way: you don't necessarily need to reconstruct a fingerprint or face from it, just something that produces a series of hashes close enough to gain access or whatever effect is desired. For example, if I could create a QR code that produced the same series of hashes as your fingerprint, I don't need your fingerprint. The danger isn't necessarily just someone with a recreation of your face detailed or sophisticated enough to fool facial recognition, but also someone with something sufficiently close to a recreation of what the computer recognizes as your face. Same thing with machine learning: it doesn't need to produce data that a human being can look at, read, and understand completely, just something that the computer can understand and act on appropriately.

A good example of this is Google Translate being able to translate from language X into an intermediary machine language it created on its own and then output into language Y for humans. It doesn't matter what that intermediary is as long as X and Y are both recognizable to the humans being served, and that's where some of the biggest jumps in computing will occur (when we stop trying to make things that resemble human processes and allow things that are more efficient).
Part of the danger, then, is that if you can insert something into the process that can convince the computer to translate it as Y, it doesn't matter whether or not it was X, because we're working at a level of abstraction.

While this may seem silly and unrealistic, it's important to keep in mind that this is how vulnerabilities occur and where exploits happen. Researchers have actually managed to create a set of glasses that can trick facial recognition software into thinking you're someone else, as The Verge has written about in the past. While trying to claim that person B is Milla Jovovich might not pass human review, you suddenly have a situation where a not especially good mask can produce false-positive forensic matches that carry the weight of very advanced computer systems and human judgement behind them.

Likewise, as a more general example of the point about exploits, the current Bluetooth attack BlueBorne doesn't involve taking out your phone, unlocking it, and accepting a Bluetooth connection that then delivers the malware (though that remains a valid attack if an attacker can manage it); it works by convincing the phone that it already has the permission it requires. Of course, in that sense, the series of hashes or the fingerprint reader itself isn't necessarily of inherent value so much as a potential attack vector that can be combined with others, and so on.

You’re talking about hash collisions which are extremely unlikely but still within the realm of possibility if you can find a weakness in the algorithm. However, that would only allow you to have access to the device itself. You can’t take a particular facial hash and reverse it back to the original data. There is no risk of your face being stolen and made into a mask.

Furthermore, I would suspect that "face hashes" would be unique per device, possibly seeded by a unique device ID. For instance, if I had two phones and generated two "face hashes" from my face, those would be unique to each phone and could not be reproduced elsewhere.

This is correct. Each Secure Enclave on Apple's A-series processors produces its own unique seed. It works the same way for Windows Hello with TPM chips.
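As a purely conceptual illustration of that per-device seeding (not Apple's actual Secure Enclave implementation, which isn't public), you can imagine something like the sketch below: the same face data enrolled on two devices produces two different, non-interchangeable templates because each device mixes in its own secret.

```swift
import Foundation
import CryptoKit

// Toy model only: real biometric matching is fuzzy and happens inside dedicated hardware,
// but this shows why a per-device secret makes a "face hash" useless on any other device.
struct ToyBiometricStore {
    // Stand-in for a secret generated inside one particular device and never exported.
    private let deviceSeed = SymmetricKey(size: .bits256)
    private var enrolledTemplate: Data?

    mutating func enroll(faceFeatures: Data) {
        enrolledTemplate = Data(HMAC<SHA256>.authenticationCode(for: faceFeatures, using: deviceSeed))
    }

    // Matching returns only true or false; the stored template never leaves this struct.
    func matches(faceFeatures: Data) -> Bool {
        guard let stored = enrolledTemplate else { return false }
        return Data(HMAC<SHA256>.authenticationCode(for: faceFeatures, using: deviceSeed)) == stored
    }
}
```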

From an interview with Craig Federighi:

When it comes to customers — users — Apple gathers absolutely nothing itself. Federighi was very explicit on this point.

"We do not gather customer data when you enroll in Face ID, it stays on your device, we do not send it to the cloud for training data," he notes.

There is an adaptive feature of Face ID that allows it to continue to recognize your changing face as you change hair styles, grow a beard or have plastic surgery. This adaptation is done completely on device by applying re-training and deep learning in the redesigned Secure Enclave. None of that training or re-training is done in Apple’s cloud. And Apple has stated that it will not give access to that data to anyone, for any price.

Still, I cannot believe how many people cannot understand how far Apple goes for security, adding a second dual-core CPU just for that, while Android is built around getting as much info on its users as possible. We live in a crazy world where "I've got nothing to hide" is such an idiotic and uneducated attitude. This baffles me.

Tinfoil hat comments aside, are you writing a book about a dystopian AI-run future? I’d honestly be interested in reading it.

What I'm interested in is not that the Secure Enclave saves your face as a hash, but rather how the APIs on the OS expose this information. Can your face be reconstructed from this? Is it a bunch of approximated dots without features? How much access does the OS have to the sensors? Because that's the underlying problem: an innocent-looking app could be rolled out that can recognize faces on the user's end.

You would get access to the facial data via the API, since that's how Animoji works and that's how the Snapchat stage demo worked as well. This is one aspect in which Face ID is different from Touch ID: Touch ID had no consumer-facing API (or use case) for your fingerprint other than authentication, which basically returns just the result of the operation, not the data, while Face ID will let you access the depth sensor data, probably to be used in apps.

Animoji has nothing to do with Face ID. Once your device is unlocked, anyone can use Animoji. It doesn't compare against the hash.

I think you completely missed the point.

Seems like there is some in-between: the API will not give access to the "facial print" itself, nor will developers have access to the raw depth sensor data.

From TechCrunch’s interview with Craig:

Developers do not have access to raw sensor data from the Face ID array. Instead, they’re given a depth map they can use for applications like the Snap face filters shown onstage. This can also be used in ARKit applications.
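As a rough sketch of what that developer-facing path might look like, assuming AVFoundation's TrueDepth capture APIs (setup abbreviated, error handling omitted), an app receives a processed depth map rather than raw sensor data:

```swift
import AVFoundation
import CoreMedia
import CoreVideo

final class TrueDepthReader: NSObject, AVCaptureDepthDataOutputDelegate {
    private let session = AVCaptureSession()
    private let depthOutput = AVCaptureDepthDataOutput()

    func start() {
        guard
            let camera = AVCaptureDevice.default(.builtInTrueDepthCamera, for: .video, position: .front),
            let input = try? AVCaptureDeviceInput(device: camera),
            session.canAddInput(input), session.canAddOutput(depthOutput)
        else { return }

        session.addInput(input)
        session.addOutput(depthOutput)
        depthOutput.setDelegate(self, callbackQueue: DispatchQueue(label: "depth"))
        session.startRunning()
    }

    // Each callback delivers a processed per-pixel depth map, not the raw IR dot data.
    func depthDataOutput(_ output: AVCaptureDepthDataOutput, didOutput depthData: AVDepthData,
                         timestamp: CMTime, connection: AVCaptureConnection) {
        let map = depthData.depthDataMap   // a CVPixelBuffer of depth values
        _ = CVPixelBufferGetWidth(map)     // inspect the depth map's resolution
    }
}
```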

There is no info to show. That data is inaccessible. The phone sends a hash to the Secure Enclave, compares it to the known hash, and returns true or false to let you in. The stored hash has no way of leaving its storage area.

While you're right that the biometric hashes do not ever leave the Secure Enclave, that's not what the commenter above is referring to. Apple has already stated that app developers will be able to leverage TrueDepth camera information to build out apps using the facial depth maps.

The answer though is that Apple is not allowing access to the raw sensor data but instead to a more limited depth map.

From TechCrunch’s interview with Craig:

Developers do not have access to raw sensor data from the Face ID array. Instead, they’re given a depth map they can use for applications like the Snap face filters shown onstage. This can also be used in ARKit applications.

Yes, but he's conflating that API with Face ID; the two are not the same. That API gives zero access to any security features, other than "can I access this," in the same way Touch ID did.

You have a sub-rudimentary understanding of how this technology works & how the data is stored.

duh… it’s clearly sorcery and witchcraft.

I mean, they did say that it was magic!

We all saw what Arya learned from the Faceless Men. That only took like…. 4 weeks to master and she’s just a child!

We all forgot, you're there for those meetings, not just reading what's given to us like the rest of us. Why aren't you in the Apple commercials again?
