Physically Unclonable Functions

Using PUFs to secure and protect ICs.


Physically unclonable functions (PUFs) are emerging as a novel way to protect a variety of ICs. In today’s world of cyber threats, vulnerabilities, insecure networks and hardware, and intrusions, the technology is finding renewed interest.

The technology on which PUFs are based has been around in its present form since the mid-1990s, and the term PUF itself was coined in the early 2000s.

PUFs find their ancestry in the realm of disordered systems. Disordered systems, also called chaotic systems, rely on the phenomenon that small variations in the initial conditions produce unpredictable variations at the output. Extrapolated to cyber security, this means the result would be theoretically impossible to predict, and therefore impossible to compromise – the pièce de résistance of cyber security. However, it is not quite as simple as that, and this article will attempt to peel back some of the layers and reveal what PUFs, realistically, can do.

Chaos theory got its start with Edward Lorenz in the early 1960s. Lorenz, a meteorologist, was running numerical models to simulate certain weather conditions. He put in the data, ran the numbers, and got the results. As any good scientist would do, he re-ran the sequence to confirm his work, but the two outcomes were drastically different. As he scrutinized his procedures, he noticed that while the input values were close, they were not exact – 0.506127 vs. 0.506. In one instance, he had rounded off the number.


At that time, such rounding wasn’t considered all that significant (after all, we were still using slip-sticks, where we would be lucky to be accurate to more than a couple of digits to the right of the decimal point anyway), and rounding off should not have affected the outcome so radically. But it did, and that led Lorenz to surmise that even the slightest variations in circumstances can cause tremendous variations in outcomes. That became what is now called the “butterfly effect,” a term he coined (see note 1).

What it is
In the security wheelhouse, a PUF is a function instantiated as a physical structure on a chip – one that is easy to evaluate but whose outcomes are difficult to predict. The way it works in chips is quite intriguing. It relies upon the minute variances created by the chip manufacturing process at the die level, which means that each chip responds in a random and unique way to a challenge in a challenge-response scenario.
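The challenge-response scenario can be sketched in a few lines of code. This is a toy model, not real silicon: the seed below stands in for the uncontrollable, die-unique process variation, and a keyed hash stands in for the analog circuit that is easy to evaluate but hard to predict.

```python
import hashlib

class SimulatedPUF:
    """Toy stand-in for a silicon PUF. The 'seed' models the die-unique
    process variation; on a real chip it is never readable directly."""
    def __init__(self, process_variation_seed: bytes):
        self._seed = process_variation_seed

    def respond(self, challenge: bytes) -> bytes:
        # A real PUF derives its response from analog mismatch; a keyed
        # hash models "easy to evaluate, outcomes difficult to predict."
        return hashlib.sha256(self._seed + challenge).digest()

# Enrollment: a verifier records challenge/response pairs once, securely.
device = SimulatedPUF(b"die-unique-variation-0x1f")
crp_table = {c: device.respond(c) for c in (b"c1", b"c2", b"c3")}

# Authentication: replay a stored challenge, compare the responses.
assert device.respond(b"c2") == crp_table[b"c2"]

# A second die has different variation, so its responses differ.
other_die = SimulatedPUF(b"die-unique-variation-0x20")
assert other_die.respond(b"c2") != crp_table[b"c2"]
```

The point of the sketch is the asymmetry: anyone holding the device can evaluate a response cheaply, but without the die-unique variation there is no way to predict it.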

Chip anomalies occur in layers, blocks, devices and the fabric, and can be related to voltage, timing, resistance, etc. Essentially, all of these minute anomalies can be analyzed to create a digital fingerprint of the device. Since no two devices will be identical, each device ends up with an identifier that is exclusive to it. And the nice thing about PUFs is that just about any chip with variables that fluctuate can be used with them.

The most elegant thing about PUFs is that they are inherently unclonable due to their uniqueness. Even if an identical run could be produced on identical lines with the same design rules, the intrinsic differences in materials, the minute tolerance variations in processes, and the electrical characteristics of the semiconductors and other components still would not produce an exact clone.

Where they can be useful
PUFs can be extremely effective against cyber-attacks and reverse engineering. They can be used for functions such as device authentication, random number generation, anti-counterfeiting/anti-cloning, key generation and management, hardware/software binding, secure/trusted boot, and roots of trust and trust anchoring.

A good example of a PUF implementation is as a watchdog for other IC components, and even for the physical environment, in applications such as smart cards (see Figure 1). In this application, the chip is actually capable of monitoring the card body.

Figure 1
Figure 1. Smart card with PUF functionality. Courtesy NXP.

This works as follows: when the chip is produced, the critical parameters of the PUF environment are measured and stored as an exclusive parameter within the chip. Once this metric is stored, it can be called up to authenticate the environment as necessary – for example, when the smart card is used, or periodically if no activity is detected for a time. If the environment has changed, such as through an attempt to reverse engineer the chip, it would fail the fingerprint check of the PUF.
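The fingerprint check just described amounts to comparing a fresh measurement against the stored one with some tolerance: small deviations from temperature or aging pass, wholesale changes from tampering fail. A minimal sketch, with the threshold value chosen arbitrarily for illustration:

```python
def hamming_distance(a: bytes, b: bytes) -> int:
    """Count the bit positions where two measurements differ."""
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

def environment_ok(stored: bytes, measured: bytes, max_flips: int = 8) -> bool:
    # Small deviations (temperature, aging) are tolerated; large ones
    # (delayering, probing, a swapped card body) fail the check.
    return hamming_distance(stored, measured) <= max_flips

stored = bytes([0b10110010] * 8)                  # enrolled at production
noisy = bytes([0b10110010] * 7 + [0b10110011])    # one bit flipped: passes
tampered = bytes([0b01001101] * 8)                # wholesale change: fails

assert environment_ok(stored, noisy)
assert not environment_ok(stored, tampered)
```

On a failed check the device would take whatever protective action it is programmed for, such as refusing to release its keys.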

Another example is in SRAM applications. In this implementation, the PUF takes a snapshot of the physical characteristics of the memory circuits from the startup behavior of the memory cells. Each device has unique startup characteristics due to minute differences in cell parameters, thereby creating a fingerprint that is exclusive to that circuit. This fingerprint can be used to protect other segments of the device (the memory itself, for example), or as a key in its own right.
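The SRAM startup behavior can be modeled with a short simulation. This is an illustration under assumed numbers, not a characterization of any real part: each cell gets a fixed manufacturing bias toward waking up as 0 or 1, and a little noise flips the occasional cell between power-ups.

```python
import hashlib
import random

rng = random.Random(42)  # fixed seed so the sketch is repeatable

# Manufacturing fixes each cell's power-up preference: a bias near 0.98
# means the cell almost always wakes up as 1, near 0.02 almost always 0.
cell_bias = [rng.choice([0.02, 0.98]) for _ in range(256)]

def power_up(bias, rng):
    # Thermal noise occasionally flips the weakly biased cells.
    return [int(rng.random() < b) for b in bias]

# Two power-ups of the SAME device agree on nearly every cell...
a = power_up(cell_bias, rng)
b = power_up(cell_bias, rng)
agreement = sum(x == y for x, y in zip(a, b)) / len(a)

# ...so the (error-corrected) startup pattern can be condensed
# into a fingerprint or key for this one device.
fingerprint = hashlib.sha256(bytes(a)).hexdigest()
```

Two different devices draw different biases, so their startup patterns, and hence their fingerprints, diverge on roughly half the cells.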

Under The Hood
Take memory cells, for example. Each transistor has slightly different electrical characteristics (threshold voltages, drive and leakage currents, ramp up/down metrics, etc.). Even though the cell design is symmetrical, minute asymmetry exists among the cells in these electrical characteristics. The variations in these parameters, both individually and collectively across the array, make up the device characterization and part of its unique fingerprint.

However, there is a gotcha here. Electrical parameters can vary with a number of environmental factors (temperature, humidity, vibration). Although these effects may be minimal in many cases, they, especially temperature, still need to be considered.

As well, some of the individual semiconductors or arrays may be ever so slightly less stable than others, and that slight difference can show up during initialization. That being the case, there is a strong probability that a startup may not exactly match the stored fingerprint, in which case the device will consider itself breached and take whatever protective action it is programmed to take.

To address that, error correction is implemented. Typically, Reed-Solomon codes are used to validate the fingerprint. Another reason to implement error correction is the inescapable fact that electronic devices drift over time; error correction addresses that as well.
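To show the idea without the machinery of Reed-Solomon, here is a sketch using a simple repetition code in a code-offset construction (a standard fuzzy-extractor style scheme, chosen here for brevity rather than taken from the article): at enrollment, the encoded secret is XORed with the reference PUF response to form public "helper data"; at each startup, majority voting over the noisy response recovers the exact secret.

```python
REP = 5  # each secret bit is spread over 5 physical PUF bits

def enroll(secret_bits, reference_puf_bits):
    """Helper data = repetition-encoded secret XOR reference response.
    The helper data can be stored in plain NVM; alone it reveals nothing
    usable without the PUF response."""
    encoded = [b for b in secret_bits for _ in range(REP)]
    return [e ^ p for e, p in zip(encoded, reference_puf_bits)]

def reconstruct(helper, noisy_puf_bits):
    """Majority vote over each 5-bit group corrects up to 2 flips."""
    encoded = [h ^ p for h, p in zip(helper, noisy_puf_bits)]
    groups = [encoded[i:i + REP] for i in range(0, len(encoded), REP)]
    return [int(sum(g) > REP // 2) for g in groups]

import random
rng = random.Random(1)
secret = [rng.randint(0, 1) for _ in range(16)]
reference = [rng.randint(0, 1) for _ in range(16 * REP)]
helper = enroll(secret, reference)

# A later startup: a few PUF bits flip due to temperature and drift.
noisy = list(reference)
for i in (3, 17, 29, 41, 55, 70):
    noisy[i] ^= 1

assert reconstruct(helper, noisy) == secret  # exact secret recovered
```

Production schemes use stronger codes (such as the Reed-Solomon codes mentioned above) for better tolerance per stored bit, but the enrollment/reconstruction flow is the same.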

The physical makeup of SRAM PUFs
Physically, the PUF is made up of the SRAM as the function source, a measurement circuit that includes state analysis, and the error-correction logic that creates the fingerprint, along with some supporting IP. The IP helps to protect keys and memory. There is also some non-volatile memory (flash, EEPROM) that is used to store additional data such as activation codes or special instructions. A general block diagram of the setup process, showing how the PUF data is produced and becomes the device’s unique fingerprint, is shown in Figure 2.

Figure 2
Figure 2. PUF and Error Correction post-processing that generates PUF data. Courtesy NXP.

Security Angles
One of the things that PUF technology does very well is enhance chip memory security. One of the shortcomings of memory is that it is often used to store the protection itself, so “protecting the protection” becomes an issue. Encryption keys for the various NVMs are part of this.


Current NVM technologies are next to impossible to leak, but anyone who believes that state will last is sadly mistaken. As malevolent efforts increase in capability and complexity, expect that NVMs eventually will be just as vulnerable as VMs. PUFs will add another line of defense, making offline memory compromise far more difficult. PUFs can also be used to protect external memory, such as that found in USB sticks and memory cards, by serving as a root of trust for memory encryption.

Where to look for it
The renewed interest in PUF technology has targeted several areas that are heavy in smart card use. One of them is electronic ID cards and passports. In these applications, the smart card stores a variety of data. For passports, additional data such as authentication is included. PUFs can add a layer of protection to the smart card’s secure memory and thwart reverse engineering.

Another area is in the mobile device market. Since smart cards are widely used in smartphones and tablets, PUFs are extremely relevant for these devices. Contactless interfaces (typically, near-field communications) on smart devices use the smart card as a secure element to emulate bank cards. In such a case, PUFs can be used to secure the smart card’s vulnerable elements.

There are a number of other applications where PUF technology can augment security, such as authentication for consumer devices (printer cartridges, for example) and electronic payments.

Overall, PUF technology is a shining star in the expanding universe of security. New techniques for reverse engineering, key extraction, cloning, and memory leakage are being developed as we speak. And the real threat is not some ubergeek at MIT working on a doctorate, but organized groups of cyber-terrorists funded by millions of dollars, employing talented programmers and using state-of-the-art equipment.

PUFs are edge-of-the-envelope security techniques, but they need both market acceptance and market drivers.

Note 1 – Source: Wikipedia. The butterfly effect is the sensitive dependence on initial conditions in which a small change in one state of a deterministic nonlinear system can result in large differences in a later state. The name of the effect is derived from the metaphorical example of the details of a hurricane (exact time of formation, exact path taken) being influenced by minor perturbations such as the flapping of the wings of a distant butterfly several weeks earlier.
