Biometrics are convenient and ubiquitous, but they aren’t a substitute for good security.
Biometric security, which spans everything from iris scans to fingerprint sensors, is locked in the same race against hackers as every other security technology.
While most of these systems work well enough to identify a person, there are a number of well-known ways to defeat them. One is simply to apply newer technology to cracking the algorithms used inside these devices. Improvements in processing power from one generation to the next, and a proliferation of information about where the vulnerabilities are, apply to biometrics just as they do to other technologies.
“Hacking, in my opinion, will depend strongly on the technology used and I would like to make a similar analysis compared to crypto keys — the higher the number of bits, the more secure and difficult to hack,” said Kris Myny, team leader for research and development (R&D) at Imec. “A 1,000 ppi (pixels per inch) fingerprint sensor can capture more details compared to a 200 ppi fingerprint sensor, and may be more difficult to hack.”
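To put rough numbers on that analogy, here is a back-of-the-envelope calculation in Python. The 10mm x 10mm sensing area is an assumption for illustration, not a figure from Imec; the point is simply that a 1,000 ppi sensor captures roughly 25 times as many pixels over the same area as a 200 ppi sensor, which means far more ridge and pore detail an attacker would have to reproduce.

```python
# Back-of-the-envelope pixel counts for a hypothetical 10mm x 10mm fingerprint sensor.
MM_PER_INCH = 25.4
SENSOR_SIDE_MM = 10

def pixel_count(ppi):
    # Pixels along one side of the sensing area, squared to get total pixels.
    pixels_per_side = ppi * SENSOR_SIDE_MM / MM_PER_INCH
    return int(pixels_per_side ** 2)

low, high = pixel_count(200), pixel_count(1000)
print(low, high, round(high / low))  # ~6,200 vs. ~155,000 pixels -- a 25x difference in captured detail
```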
It’s not just fingerprint security that’s at risk. The addition of facial recognition to phones and voice recognition to calls, intended to provide another layer of security, opens up a whole new attack surface.
“With facial recognition, one of the key challenges is that you can steal the details of someone’s face,” said Gordon Cooper, product marketing manager at Synopsys. “Maybe they’re encrypting as this data goes off to DDR or as data is going in and out, which could include the graph topology, weights, and confidential data. But the question that needs to be asked each time is whether someone hacked it, or whether there is a comfort level that it is secure. Anyone can take a picture of your face, your iris, or your fingerprints and gain access to your bank accounts.”
Co-opting the security system provides a way inside, and a way of using someone else’s identity to gain access to even more data, set up false accounts, or commit outright identity theft. But in the past, most of this was done with software. What’s changing is that data once thought to be unique, such as a fingerprint or various markers on a person’s face, can be stolen and used to impersonate that person.
“The challenge is there is a lot of data, so having a solution embedded into the processor architecture helps with that,” said Cooper.
A different kind of attack surface
Taking a step back, this is a way of using data to gain access to more data. In effect, it operates like a key into the hardware and the algorithms running on that hardware.
“If you look at the biggest classes of security vulnerability today, it’s primarily violations of memory safety,” said Ravi Subramanian, vice president and general manager for IC verification solutions at Mentor, a Siemens Business. That includes everything from how the platform is allowed to communicate and who can access it, to how authentication happens and how data is protected.
Securing the biometric interface is turning into a race to provide more markers and better resolution, and it needs to be approached “very carefully,” Subramanian said.
Others agree. “Biometrics are not a substitute for good security,” said Ben Levine, senior director of product marketing in Rambus’ Security Division. “You still need to secure the overall system” both from a hardware and software perspective, he said.
Regardless of what authentication method is used, whether it’s a simple PIN, a password, or an iris scan, “if the [method] isn’t secure then someone can hack it,” said Levine.
He said that execution of security-related code needs to be protected, or reside in some sort of protected enclave. If not, it doesn’t matter what authentication technique is used. “The attacker will just bypass it,” said Levine.
The risk spreads
Initial implementations of biometric security appeared in military, law enforcement, and banking applications, where biometrics were used to control access to secure facilities.
That has changed significantly over the past 12 to 18 months. Biometric security is moving into the mainstream at a rapid pace. It is now readily available on smartphones and computers, and it is reaching further into other applications. Some banks and brokerage houses now use voice recognition for identification, while others allow users to validate their identity for banking applications using a sensor that reads their fingerprint or face on their smartphones.
“The amount of R&D going into [biometric] areas today is unprecedented,” said Mentor’s Subramanian. A decade ago, R&D in this area was basically a research project.
Since then, the quality of the algorithms used for voice and facial recognition has improved significantly, and so have the inferencing capabilities that allow that data to be interpreted. But in many cases, the effort has gone into getting the technology to work rather than safeguarding it from outside attacks.
The recent biometric data breach at a South Korean security firm is a reminder that fingerprint and facial recognition data need to be protected just like any other sensitive data.
In August, research firm VPNMentor announced it had discovered a major security breach in BioStar 2, a web-based biometric security platform built by South Korea-based Suprema, one of the world’s top security firms. Without divulging details on how it infiltrated the system, VPNMentor said its team was able to access more than 1 million fingerprint records, as well as facial recognition data, and made extensive efforts to contact Suprema to have the breach fixed.
“Once stolen, fingerprint and facial recognition information cannot be retrieved,” its report said in bold letters. Besides BioStar 2’s biometric data, the research firm was able to access usernames, passwords, user identification information, and employee records, including home addresses and email addresses. In total, 27.8 million records and 23 gigabytes of data were accessible.
Securing the data
This hasn’t gone unnoticed by security companies, which see biometrics as a new security opportunity.
As with all security, the best solution comes in layers that are architected in the initial phases of the design. That begins with placing security-related code in a separate security enclave built around a hardware root of trust that is isolated and designed specifically with hardware protections, said Rambus’ Levine. He noted that companies are moving away from a model where security lives in a separate external device, like a smart card chip or a trusted platform module, and are instead baking it into the chip architecture.
“The biggest trend for root of trust is to say, ‘Okay, we need a separate enclave,’” Levine said. “We’ll put it on the same die, in the same piece of silicon as the rest of their chip.”
The basic idea here is to compartmentalize the security on a die, which can be monitored for electrical or thermal activity, or shut down if there is any physical tampering. In addition, there is a root of trust for the authentication keys—an idea introduced by Arm a half-decade ago and continually updated since then.
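As a rough illustration of that compartmentalization, consider the sketch below. This is not any vendor’s actual enclave API, just a toy Python model under simple assumptions: the device key is generated inside the “enclave” object and is never handed back to application code, which can only request signatures over challenges, and service stops if tampering is flagged.

```python
import hashlib
import hmac
import secrets

class RootOfTrust:
    """Toy model of an on-die security enclave (illustrative only)."""

    def __init__(self):
        # The device key is created inside the enclave and never returned to callers.
        self._device_key = secrets.token_bytes(32)
        self._tamper_detected = False

    def flag_tamper(self):
        # A real enclave would monitor voltage, temperature, or an active shield mesh;
        # here we simply latch a flag and refuse further service.
        self._tamper_detected = True

    def sign_challenge(self, challenge: bytes) -> bytes:
        if self._tamper_detected:
            raise RuntimeError("enclave shut down after tamper event")
        return hmac.new(self._device_key, challenge, hashlib.sha256).digest()

# Challenge-response authentication: a verifier provisioned with the same key
# at manufacture time could check this response without the key ever leaving the die.
rot = RootOfTrust()
challenge = secrets.token_bytes(16)
response = rot.sign_challenge(challenge)
print(len(response))  # 32-byte HMAC-SHA256 tag; the key itself is never exposed
```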
This is the reason Rambus acquired Verimatrix, a French chip security firm, in September for $65 million cash.
Counterfeit biometric chips
There are worries over a different security issue – chips that are fake.
The defense industry has long been concerned about its systems being breached through counterfeit chips, said Levine.
“It’s been one of DARPA’s (Defense Advanced Research Projects Agency’s) main initiatives for a long time,” he said.
Levine said his company has been helping to alleviate those concerns by providing security throughout the semiconductor supply chain and ensuring chips are authentic rather than clones or counterfeits.
That starts at the first stages of manufacturing, when keys and identities, such as serial numbers, are provisioned into the chip so they can be traced throughout the life cycle of a device, he said.
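A minimal sketch of that provisioning-and-traceability idea is shown below, under the assumption of a symmetric factory signing key held by a hypothetical provisioning service (real deployments typically use asymmetric keys and certificates so verifiers never hold the signing secret). Each serial number is bound to a signature at manufacture, and any later stage of the supply chain can check whether the pairing is genuine.

```python
import hashlib
import hmac

# Hypothetical factory signing key, held only by the provisioning service.
FACTORY_KEY = b"example-factory-signing-key"

def provision(serial_number: str) -> dict:
    """Bind a signature to a chip's serial number at manufacture time."""
    tag = hmac.new(FACTORY_KEY, serial_number.encode(), hashlib.sha256).hexdigest()
    return {"serial": serial_number, "tag": tag}

def verify(record: dict) -> bool:
    """Later supply-chain stages recheck the binding to spot clones or counterfeits."""
    expected = hmac.new(FACTORY_KEY, record["serial"].encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["tag"])

chip = provision("SN-000042")
print(verify(chip))                                   # True: authentic record
print(verify({"serial": "SN-000042", "tag": "00"}))   # False: forged or cloned identity
```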
There’s more to it, as well. There are concerns about more sophisticated attack scenarios, in which a manufacturer might insert a Trojan that can monitor or control the chip, or embed spyware in the hardware or firmware, said Levine.
Tradeoffs in biometric design
In the biometric space, developers are seeing a tradeoff between speed and accuracy.
Recent facial recognition studies by NIST, part of the U.S. Department of Commerce, found that slower algorithms from some facial recognition developers delivered better accuracy. “Taking time to localize the face and extract features would lead to accuracy,” said the NIST report, published in September.
Fig. 1: Facial recognition reference points. Source: National Institute of Standards and Technology/N. Hanacek
Security experts in the chip industry agree with those findings.
“You need a lot of hardware to have the algorithms go fast but also to have algorithms that deliver accuracy” for biometric and other sensory functions, said Mentor’s Subramanian.
He pointed out that for many applications, how quickly the algorithms need to react, for facial recognition for example, may vary widely depending on the situation. A facial recognition scan at a bank ATM may need to be faster than recognizing a person at the door of a residence, he said.
Algorithm developers have been working hard to meet manufacturers’ needs, such as enabling drivers to use their biometric data to unlock their vehicles and then to start them. But the technology has other purposes, as well, such as object recognition by assisted and autonomous driving systems in cars or robots, which is why there is so much attention on speed.
“You can’t even wait five seconds to recognize something,” said Subramanian. “That puts a much bigger demand on the computation you need because you need much more parallel computation.”
In fact, much of this technology is being adopted more widely because it is becoming usable across a variety of applications.
“The area where biometrics come in have increased, and the reason it has increased is because you can actually do more processing today,” said Johannes Stahl, senior director for product marketing at Synopsys.
Even the way the processing is done in the chips has improved, he said. Traditionally, biometric functions ran on a processor, but today these functions incorporate machine learning, said Stahl.
“With machine learning, we will see even more real-world interfaces taking over and becoming viable,” he said. “Five years ago, there was no way you could do facial recognition in a second, or half a second. Today you can do it.”
This is true in voice recognition, as well. For a long time voice-recognition algorithms were not able to deliver 90% accuracy, and voice recognition stalled out as a way to interface with devices, said Subramanian. But once voice recognition reached 95% accuracy, a few applications that were based on automated phone systems came into play. Then, a few years ago, voice recognition technology reached 99% accuracy. That’s when Apple’s Siri, Amazon’s Alexa and Google Assistant started taking off.
“The core algorithms have been around for a long time, but there’s so much R&D needed to improve those algorithms,” said Subramanian. “It’s not just how fast you do it, but how accurate you do it. So the two metrics people look at are how fast you can recognize something and how accurate you can be.”
A biometric reading may be very quick but still produce a false positive. Both speed and accuracy determine how much hardware is required. But it gets more complicated than that.
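One way to see why “quick but wrong” matters is the threshold tradeoff every matcher faces. The toy calculation below uses invented similarity scores purely for illustration: raising the match threshold trades false accepts (an impostor getting in) against false rejects (a legitimate user being turned away), and real systems characterize these rates on large labeled datasets.

```python
# Toy false-accept / false-reject tradeoff for a biometric matcher.
# The scores below are invented for illustration only.
genuine_scores  = [0.91, 0.88, 0.95, 0.79, 0.85, 0.92]  # comparisons of the same person
impostor_scores = [0.40, 0.62, 0.71, 0.55, 0.83, 0.48]  # comparisons of different people

def error_rates(threshold):
    # A genuine score below the threshold is a false reject;
    # an impostor score at or above it is a false accept.
    false_rejects = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    false_accepts = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    return false_rejects, false_accepts

for t in (0.6, 0.7, 0.8, 0.9):
    frr, far = error_rates(t)
    print(f"threshold {t:.1f}: false reject rate {frr:.0%}, false accept rate {far:.0%}")
```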
There are two areas for the chip industry to develop when it comes to supporting biometric functions, said Subramanian. It needs to support those designing the sensory chips for biometric applications, as well as those building security into those chips. “In order for chip makers to design and build and sell those chips, they need to be able to verify that the security methods are actually going to work,” he said.
Conclusion
The data being collected to improve both speed and accuracy is being applied to new applications, as well. Algorithm developers are using that data to identify new patterns and the relationships between those patterns, while the data itself is being looped back to help improve the algorithms used in voice and data recognition, according to Imec’s Myny.
Put in perspective, biometric algorithms are being sped up by hardware, and the resulting data is being applied in different ways across different markets. But none of this matters unless the biometrics themselves are secure, and at this point there is a lot of work ahead to make that happen.
—Ed Sperling contributed to this report.