Imperfect Silicon, Near-Perfect Security

Physically unclonable functions (PUFs) seem tailor-made for IoT security.


Some chipmakers, under pressure to add security to rapidly growing numbers of IoT devices, have rediscovered a “fingerprinting” technique used primarily as an anti-counterfeiting measure.

Physically unclonable functions (PUFs) assign a unique identification number based on inconsistencies in the speed with which current causes a series of logic gates to open or close. Otherwise identical chips deliver different results from identical test circuits due to random variation in how quickly those gates respond, according to a 2007 paper by MIT researcher Srini Devadas, who described the effect and founded the company Verayo to commercialize systems that use it.
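A crude software analogy makes the idea concrete. The sketch below (Python, with invented timing figures rather than any vendor’s actual circuit) races two nominally identical delay chains and records which finishes first, the way a delay-based PUF turns manufacturing variation into a response bit:

    # Toy model of a delay-based PUF bit: two nominally identical delay chains
    # race each other; fixed manufacturing variation decides the winner, so the
    # same chip tends to repeat its answer while other chips answer differently.
    import random

    def make_chain(n_stages: int, rng: random.Random):
        # Per-stage delay = nominal value plus a fixed manufacturing offset.
        return [1.0 + rng.gauss(0.0, 0.05) for _ in range(n_stages)]

    def puf_bit(chain_a, chain_b, noise=0.01):
        # Each evaluation adds a little thermal/voltage noise to the race.
        t_a = sum(d + random.gauss(0.0, noise) for d in chain_a)
        t_b = sum(d + random.gauss(0.0, noise) for d in chain_b)
        return 1 if t_a < t_b else 0

    chip = random.Random(42)                  # fixed "fab" randomness for this chip
    a, b = make_chain(64, chip), make_chain(64, chip)
    print([puf_bit(a, b) for _ in range(5)])  # usually the same bit every time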

These “fingerprints” have been used largely as an anti-counterfeiting measure, or to authenticate a chip as part of a secure-boot sequence, in FPGAs, ASICs, NFC RFIDs and other chips.

More recently, however, chip companies and researchers have been exploring PUFs as a way to create unique, undiscoverable, unchangeable identification numbers. Such a number can serve as the private key in public/private-key schemes and become the basis for much more complex encryption designed to secure communication among devices, not just verify the security of a single chip.
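As a rough illustration of that use, the sketch below derives a device key pair from a PUF response using the open-source Python cryptography package; the response value and the hashing step are placeholders for whatever a real PUF circuit and its post-processing would produce, not any vendor’s scheme:

    # Sketch: turning a (hypothetical) PUF response into a device key pair.
    # Assumes the raw PUF bits have already been error-corrected so they are stable.
    import hashlib
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    def keypair_from_puf(puf_response: bytes):
        seed = hashlib.sha256(puf_response).digest()        # condition to 32 bytes
        private_key = Ed25519PrivateKey.from_private_bytes(seed)
        return private_key, private_key.public_key()

    # Placeholder 16-byte response standing in for a real PUF readout.
    priv, pub = keypair_from_puf(bytes.fromhex("00112233445566778899aabbccddeeff"))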

“PUF has a great advantage in that, when a system gets hacked and the PUF in each device gives it its own ID and gold (private) key, if that device gets hacked, only it fails; it won’t contaminate the other devices [in a system],” said Charles Hsu, chairman of eNVM IP provider eMemory Technology, which calls its version of the technology NeoPUF. “With no hardware-based security, once one password fails, the system fails. With PUF or other on-chip security, you get layers of protection that each have to be cracked individually.”


Fig. 1: NeoPUF approach. Source: eMemory

PUFs were first proposed in 2003, but the technology remained in limited use because many implementations were not reliable. The test circuit added to the die, while small, has to be stable enough to deliver a consistent response every time. eMemory tests pairs of capacitors, raising the voltage until one flips or overcharges and starts to leak; the PUF pattern registers a 0 or a 1 depending on which capacitor leaked.
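A toy model of that pairing idea (invented numbers, not eMemory’s actual circuit) shows why the same chip keeps repeating its answer while two different chips rarely match:

    # Toy model of a paired-capacitor PUF bit: each bit compares two "identical"
    # devices whose fixed manufacturing mismatch decides which one gives out first.
    import random

    def fabricate(n_bits: int, chip_seed: int):
        rng = random.Random(chip_seed)
        return [rng.gauss(0.0, 1.0) for _ in range(n_bits)]  # fixed mismatch per pair

    def read_id(mismatch, noise=0.1):
        # Re-running the test adds noise, which can flip the most marginal pairs.
        return [1 if m + random.gauss(0.0, noise) > 0 else 0 for m in mismatch]

    chip_a, chip_b = fabricate(128, 1), fabricate(128, 2)
    flips = sum(x != y for x, y in zip(read_id(chip_a), read_id(chip_a)))
    differ = sum(x != y for x, y in zip(read_id(chip_a), read_id(chip_b)))
    print(flips, differ)  # typically a few flipped bits vs. roughly half differing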

Because it depends on the behavior of elements already present in the IC, a PUF identifier can be used on any integrated circuit, without the databases of identification data needed to match a chip to a number created by a method not tied to the hardware itself, according to Pim Tuyls, CEO of Intrinsic ID, who launched the company in 2008 based on his research into SRAM-based PUF technology at Philips Research in the Netherlands.

Some level of variation exists in every chip due to the nature of the silicon, but it’s not always easy to find the right pattern of nodes within the chip to get a good profile, or set up a test process to get a consistent result.

“If your test is not that good of a random source, if there is a mismatch of the circuit or the result strongly depends on the environment and noise, you will not get a consistent result,” said Hsu. “That’s why PUF is not widely used. If there is a good design for the test, the PUF is very reliable.”

With a system able to filter noise and confirm the value of a PUF identifier, however, the approach could become the foundation of any number of multilayered security applications.

“It is important to know what device is logged in to my system, from where and how many times,” Tuyls said. “You can use it to prevent counterfeiting, but it can be used in datacenters and servers to know which devices I can trust and that I’m not delivering services to a rogue device.”

PUFs are one approach to creating certain identification for devices, but even device IDs are just one part of a complex, multifactor security picture that is anything but complete when it comes to the Internet of Things, according to Haydn Povey, founder and CTO of Secure Thingz, an integrator that builds secure infrastructure for IoT devices, microcontrollers and other embedded devices.

“PUF is inherent to the device, so you can use it to generate public/private keys that let you validate the device, but it’s somewhat hard to capture that information inside the device and, in most cases, you need to do some post-processing to make sure what the identifier actually is – and there’s some black magic involved in doing that consistently,” Povey said. “Even so you still need to add tamper resistance and a way to extract the public key and integrate to PKI or X.509 or other elements to create an effective layer of security.”

How many layers are needed isn’t entirely clear.

“So far, most of the attacks we’ve seen with the IoT haven’t been very sophisticated,” said Asaf Ashkenazi, senior director of product management in Rambus’ Security Division. “Simple security features could have prevented those. But for an IoT product there is a real risk because there is less margin, a weaker CPU, less memory, and the operating system is not as robust. Devices are getting simpler, but security is getting more complicated.”

Ashkenazi noted that 70% of those devices do not use encryption, and many can be accessed from the Internet using the same password.

Learning to de-fuzz
There is always noise in establishing the PUF identifier due to bit flip – a small percentage of circuits register a 0 at the current startup after registering a 1 before – which usually requires some degree of error correction, according to Richard Newell, senior principal product architect at Microsemi Corp. The company has used PUF IP from Intrinsic-ID in its SoC and FPGA products since at least 2011.

Intrinsic-ID’s Quiddikey generates a crypto key when needed, based on the startup values of SRAM memory cells. Each SRAM cell contains two cross-coupled inverters, each made up of a p-MOS and an n-MOS transistor. The PUF value is determined by the difference in threshold voltage between the p-MOS transistors: whichever of the two begins conducting first when the power comes on determines whether the cell reads as a 0 or a 1.

A bit-flip rate of between 3% and 5% per startup does leave the PUF result slightly noisy, Newell said. But Intrinsic-ID includes a “defuzzification” step, using a proprietary algorithm that converts the slightly variable PUF readout into a fixed number.
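Intrinsic-ID’s algorithm is proprietary, but the general shape of such a step can be sketched with a simple repetition-code fuzzy extractor, in which public helper data stored at enrollment lets later, slightly noisy readouts be corrected back to the same key. The parameters below are purely illustrative:

    # Illustrative "defuzzification": a repetition-code fuzzy extractor.
    # Enrollment stores public helper data; later readouts with a few flipped
    # bits still reconstruct the same key bits by majority vote.
    REP = 5  # each key bit is spread over 5 PUF bits (illustrative choice)

    def enroll(key_bits, puf_bits):
        # Helper data = (repeated key bit) XOR (PUF bits); safe to store publicly.
        return [key_bits[i // REP] ^ puf_bits[i] for i in range(len(puf_bits))]

    def reconstruct(helper, noisy_puf_bits):
        key = []
        for i in range(0, len(helper), REP):
            votes = [helper[j] ^ noisy_puf_bits[j] for j in range(i, i + REP)]
            key.append(1 if sum(votes) > REP // 2 else 0)
        return key

    # Example: 4 key bits, 20 PUF bits, one bit flipped at readout.
    key = [1, 0, 1, 1]
    puf = [1, 0, 0, 1, 1,  0, 1, 1, 0, 0,  1, 1, 0, 0, 1,  0, 0, 1, 1, 0]
    helper = enroll(key, puf)
    noisy = puf[:]
    noisy[7] ^= 1  # one flipped bit, as happens on a real startup
    assert reconstruct(helper, noisy) == key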

The PUF can be used to encrypt the private key on a device, which can then be stored in flash, or it may be eliminated after use and recreated during the next session. Microsemi uses a hierarchy of keys to create a structure that allows layered, granular permissions and boundaries, but the cornerstone is an unquestionably unique, unbreakable hardware-based key encrypting all the others, he said.
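One common pattern matching that description is to wrap the stored private key with a key derived from the PUF, so the blob in flash is useless without the chip that produced it. A minimal sketch, assuming the AES-GCM primitives in the Python cryptography package and hypothetical byte values:

    # Sketch: wrapping a device private key with a PUF-derived key so that the
    # blob kept in flash is useless without the physical chip that produced it.
    import hashlib, os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    def wrap_key(puf_response: bytes, private_key: bytes) -> bytes:
        kek = hashlib.sha256(puf_response).digest()   # key-encryption key from PUF
        nonce = os.urandom(12)
        return nonce + AESGCM(kek).encrypt(nonce, private_key, None)

    def unwrap_key(puf_response: bytes, blob: bytes) -> bytes:
        kek = hashlib.sha256(puf_response).digest()   # regenerated on each boot
        return AESGCM(kek).decrypt(blob[:12], blob[12:], None)

    # Placeholder values for illustration only.
    blob = wrap_key(b"\x11" * 16, b"device-private-key-bytes")
    assert unwrap_key(b"\x11" * 16, blob) == b"device-private-key-bytes"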

“One of the advantages is that, when the power is down, the key disappears,” Newell said. The circuits that generate the PUF come on only when they’re needed and only for 100 microseconds. “That kind of deters countermeasures. Then, once you turn it off, there’s no known technology that would tell you which bit is a one or zero.”

Gaining notoriety
Both Newell and Hsu said their PUF implementations are getting more attention, usually from users or other vendors involved in IoT who are looking for security with very low power requirements and very low costs.

Hsu has one large customer interested in PUF for FPGAs used in machine-learning implementations to help encrypt and protect the firmware loaded on the FPGAs to prevent copyright violations.

The warnings and guidelines issued by the U.S. Dept. of Homeland Security in November 2016 are likely to be as good a driver of IoT-related PUF security as anything else, Hsu said. “They require three things. First, that IoT providers put hardware security into devices. Second, that the IoT device provider has identification and authentication built in. And third, that an IoT device, if it gets hacked, can fail, and fail securely. From that point of view, PUF has a great advantage.”

Concerned, yes. Writing a check, no.
PUF is becoming more widely known and more widely accepted, but it is still competing with every other form of security – as well as with inertia and a general lack of concern. More than 84% of chipmakers responding to a 2017 McKinsey & Co. survey said customers want good security. But only 15% predicted customers would pay a 20% premium for it, while 40% said customers want prices to stay flat or decline.

This helps explain why even when security is available inside of hardware, end device makers frequently ignore it. And while that may seem insignificant for a connected device for the home, these devices can be massed into botnets that can attack sophisticated sites. The distributed denial of service attack on Dyn in October 2016 was just a taste of how bad this can get. An army of surveillance cameras collectively disrupted access to Amazon, Netflix, Reddit, and a number of other major businesses.

How to overcome this kind of behavior is prompting some interesting business models. Rambus has begun offering chip-to-cloud security, which is essentially remote security as a service. The idea is to install a key in a chip or a third-party security engine, so that whenever a device is powered up and connected to the Internet, it automatically communicates with a server that identifies and authenticates the device.
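A generic challenge-response exchange of the kind such a service could run at power-up might look like the sketch below; this is not Rambus’ actual protocol, and the key handling is deliberately simplified:

    # Generic challenge-response sketch of the kind a chip-to-cloud service could
    # run at power-up. Key names and flow are illustrative, not Rambus' protocol.
    import os
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    device_key = Ed25519PrivateKey.generate()      # stand-in for a PUF-derived key
    enrolled_public_key = device_key.public_key()  # registered with the server at provisioning

    challenge = os.urandom(16)                     # server sends a fresh nonce
    response = device_key.sign(challenge)          # device proves it holds the key

    try:
        enrolled_public_key.verify(response, challenge)
        print("device authenticated")
    except InvalidSignature:
        print("rejected: unknown or cloned device")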

“The problem is there are so many areas of security that nothing can address all of it,” said Rambus’ Ashkenazi. “But starting at the root of trust and adding chip-to-cloud communication is a big piece of this.”


Fig. 2: Device to cloud security concept. Source: Rambus

There are other pieces required, as well, some of which are difficult to even identify. Side-channel attacks are a well-understood threat, and the risks increase as complexity increases. But the greatest risk is at the edge, where selling prices are lower and competition and time-to-market pressures are higher. That leads companies to assume that other electronics higher up the food chain will handle security.

“Identity goes beyond using PUFs, but people have traditionally not worried about devices at the edge because they think they can protect themselves at the gateway,” Povey said. “But we’ve seen people bounce off light fixtures and get into bank information.” The IoT, in particular, desperately needs broad-based acceptance of elements that allow devices to authenticate, secure microcontrollers, and cost far less than current add-on options, he said.


Fig. 3: SecureThingz’s firmware over the air. Source: SecureThingz

Most ICs do come with some level of security – keys injected into the ROM of smart cards or chips, tokenization of identifying data, private keys encrypted into flash memory, or entirely separate elements added to an IC to provide identification and root-of-trust secure-boot assurance that prevents malware or untrusted applications from booting with the system, according to Mike Demler, senior analyst for The Linley Group and Microprocessor Report.

That, ultimately, is where security has to start, whether in the datacenter or the IoT.

“PUF is great for identification, but authentication doesn’t necessarily imply encryption, or confidentiality, so you have to know where you’re starting,” Newell said. “I recommend to customers that one of the first things on the list be the ability to boot securely. If you don’t know the code you’re executing is authentic, you’ve already lost.”
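The secure-boot step Newell describes amounts to verifying a signature over the firmware image before jumping into it. A minimal sketch, assuming a hypothetical image layout with an Ed25519 signature appended to the code:

    # Minimal secure-boot check: refuse to run firmware whose signature does not
    # verify against a public key held in boot ROM. The image layout (last 64
    # bytes = Ed25519 signature over the rest) is a hypothetical convention.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

    def execute(code: bytes) -> None:
        ...  # placeholder: hand control to the verified firmware image

    def boot(image: bytes, rom_public_key: Ed25519PublicKey) -> None:
        code, signature = image[:-64], image[-64:]
        try:
            rom_public_key.verify(signature, code)
        except InvalidSignature:
            raise RuntimeError("firmware signature check failed; refusing to boot")
        execute(code)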


