All things Apple
The problem with biometrics is that they are unfair to the user. Once you are compromised, you’re compromised for life, in any system that uses the same input (e.g., iris scanners). With other methods, both the system and the user can be made trusted again.
Put aside the movies and the cool factor (it is cool, no doubt) for a second and think about how this plays out. The input to a biometric system is converted into an electrical signal, and it’s based on something you cannot change or replace. On its own, it’s no more or less secure than other authentication factors when used properly, and it can be convenient for both the user and the group that employs it.
Let’s start with the “attack surface” when you are not using the system. A scan needs some sort of reference to tell your biometric data from anyone else’s, and that reference must be stored somewhere. By using biometric authentication, you are trusting every group that you share this data with to store, transmit, and use it securely. As with password-based systems, some do this well and some do not.
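The stored reference described above is usually a template that a fresh scan is compared against. A minimal sketch of that comparison, assuming templates are fixed-length feature vectors and using an illustrative distance threshold (all names and numbers here are hypothetical, not any vendor's scheme):

```python
def distance(a, b):
    # Euclidean distance between two feature vectors
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

class TemplateStore:
    """Toy store of enrolled biometric templates, one per user."""

    def __init__(self, threshold=0.5):
        self.templates = {}          # user id -> enrolled template
        self.threshold = threshold   # max distance still counted as a match

    def enroll(self, user_id, template):
        self.templates[user_id] = template

    def verify(self, user_id, scan):
        # Two scans of the same eye never match exactly, so the check is
        # "close enough", not equality -- which is why a stolen template
        # is so valuable to an attacker.
        template = self.templates.get(user_id)
        if template is None:
            return False
        return distance(template, scan) <= self.threshold
```

Note that unlike a password hash, this template must be usable for fuzzy comparison, which limits how thoroughly it can be protected at rest.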
The “attack surface” while using the system includes the capture device and the more valuable source input. Each device will process the source input, however briefly, so the devices require a degree of trust. The method of input into these trusted devices requires a degree of trust as well. The major questions are: can data be supplied to a compromised device, and can a trusted device be fooled under just the right conditions? Both secure and insecure applications of this technology share the same key.
Now let’s break it down. If biometric data for some users is compromised during storage, transmission, or use: remove the profiles for those users; the system is secure again and the users are on their own. If a device is compromised: remove the device from the set of trusted devices and remove the users that have been processed by that device; the system is secure again and those users are on their own. If a user has been compromised outside of the system (someone has their biometric data) and there is a risk of it being used as input into the system: remove the user; the system is secure again and they are on their own.
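The three recovery paths above share one shape: the system restores its own trust by shrinking a trusted set, and the removed users get nothing back. A minimal sketch of that revocation logic (all class and method names here are hypothetical, chosen for illustration):

```python
class BiometricSystem:
    """Toy model of trust and revocation in a biometric system."""

    def __init__(self):
        self.trusted_users = set()
        self.trusted_devices = set()
        self.processed_by = {}   # device id -> set of user ids it processed

    def authenticate(self, user_id, device_id):
        # Record which device processed which user, so a device
        # compromise can later be traced back to affected users.
        if user_id in self.trusted_users and device_id in self.trusted_devices:
            self.processed_by.setdefault(device_id, set()).add(user_id)
            return True
        return False

    def revoke_users(self, user_ids):
        # Data compromised in storage, transit, or use: drop those profiles.
        self.trusted_users -= set(user_ids)

    def revoke_device(self, device_id):
        # Compromised device: drop it, plus every user it has processed.
        self.trusted_devices.discard(device_id)
        self.trusted_users -= self.processed_by.pop(device_id, set())
```

Notice what is missing: there is no `re_enroll` path that makes a revoked user trusted again, because the user cannot present new biometric input. That asymmetry is the whole point of the comment.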
The “unfair part” is not having any way to limit the duration of compromise for the user, or any practical way to make them trusted again. If changing a user’s biometric input ever became commonplace, then this would not even be considered a serious authentication scheme or countermeasure. There’s always a degree of trust at play, a plan to restore trust in the event of a compromise, and varying degrees of impact.
So bring it up a level: could you work in an environment that widely employs this technology if it was known that your biometric data had been compromised? Could you ever stop someone who has your biometric data from finding some system, somewhere, that could be compromised with it? How much can you in turn trust a system once your biometric data has been compromised? (e.g., the lock on the front door of my future home will open only for me and for the people who have my biometric data.) Do you have a way of limiting known or unknown compromises by resetting things on a regular basis?
As a well-qualified practitioner in this field, I feel there are still some questions to be answered. I care about the technology, its application, and its greater effect, in that order. For perspective, that means it takes quite a bit for the greater effect to get me to sit down and write a long post that few will read.
19 days ago on We know who you are: the scary new technology of iris scanners 1 reply 1 recommend