
Making PUFs Even More Secure

New sources of entropy could significantly improve robustness of physically unclonable functions.


As security has become a must-have in most systems, hardware roots of trust (HRoTs) have started appearing in many chips. Critical to an HRoT is the ability to authenticate and to create keys – ideally from a reliable source that is unviewable and immutable.

“We see hardware roots of trust deployed in two use models — providing a foundation to securely start a system, and enabling a secure enclave for the end user of the SoC,” said Jason Oberg, co-founder and CTO at Tortuga Logic. “Use cases include storing biometric data, customer-programmed encryption/authentication keys, and unique IDs.”

Those keys and IDs are where physically unclonable functions (PUFs) excel. But today, there’s only one PUF technology broadly deployed. New ones are being readied for commercial use, and they leverage new sources of entropy. Even more are in the research stage.

Newly explored physical phenomena are opening up additional ways to leverage entropy. Traditional PUFs rely on manufacturing variations for entropy, while some new ones leverage quantum phenomena. But the word “quantum” in association with a PUF may mean different things, so it can be helpful to understand how these entropy sources work.

IDs and keys
PUFs are used for two principal applications. The oldest is as a source of an ID for authentication. The newer application is for generating keys for secure communication.

“What’s happening today is that the industry uses the hardware root of trust to generate one key, and they’re using that key just as an identity,” said Shahram Mossayebi, co-founder and CEO of Crypto Quantique.

Doing both with a PUF has value, but while similar, these two uses have different characteristics. In particular, for authentication, a few bits of slop in the response might be acceptable, if the ID is simply being interrogated. If it’s being used as a key, whether asymmetric or symmetric, it has to be exactly correct, because any single bit that’s wrong will cause complete failure.

“Good authentication is using a form of public-key (asymmetric) cryptography, so the uniqueness of the authentication keys is equally as important as the key used for encryption,” said Oberg.

Precision matters. “If you’re using a cryptographic key with a single bit error, you get complete garbage out,” observed Mark Davis, president of Crossbar. For that reason, key-generation applications tend to make use of error-correction codes (ECC) to ensure stability.

The importance of hardware roots of trust
HRoTs generally are seen as the best way to anchor security. “Best practice is to tie any of the security or privacy information to an HRoT in the device,” said Mike Borza, security IP architect at Synopsys. “As a designer, you’re being responsible with the information that’s being entrusted to you.”

Many levels of security may be available for different systems. “In some of the embedded applications that we work with, the idea of a simple partitioning or being able to work within a root-of-trust subsystem is sufficient,” noted George Wall, director of product marketing at Cadence.

Perceived lack of security may even be getting in the way of connecting systems to the internet. “We talked to some leaders in industry, and they’re not even connecting their factories to anything because of the fear of cyberattacks,” said Mossayebi.

Steve Hanna, distinguished engineer at Infineon, stressed the importance of this with respect to security in general, and secure domains in particular. “We’ve learned over the years that security is not something you can slap on later. You really have to design it in. You want to have a secure domain where you can keep things like cryptographic keys that need to be protected for the long run,” he said.

This reflects the ongoing desire to provide the strongest practical security. “The challenge is to build the kind of environment that even the hardware security guys would say, ‘I would trust that,’” said Scott Best, technical director of anti-counterfeiting products at Rambus.

Working with entropy
While randomness, known officially as entropy, is clearly a goal in PUF applications, it must be the right kind of randomness. The ideal is that, given two different devices having the same instantiated PUF, they will have different outputs in a way that’s completely unpredictable.

However, for a given device, you don’t want entropy. Ideally, you want the same device to read exactly the same way each time. Given the delicate nature of many of the phenomena being leveraged for PUFs, there tends to be noise associated with any given read of the PUF, so that noise must be removable or correctable. In the end, the goal is to have a consistent, repeatable, decidedly non-random response from a given device.

The challenge for a PUF is to have access to a source of randomness that’s as close to perfect as possible. Nature may provide some good sources, although some say that, even then, “privacy amplification” is needed for “perfect” entropy.

“In nature, no source is perfectly random,” said Pim Tuyls, CEO and founder of Intrinsic ID. “Things might look very random, but for crypto purposes where we need random ‘squared,’ you need very solid randomness. Even if you have an almost perfect source, that’s not good enough. Privacy-amplification randomness extractors turn an almost-perfect source into a perfect source.”
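Conceptually, privacy amplification condenses many slightly biased raw bits into fewer near-uniform ones. The sketch below is a minimal Python illustration, with SHA-256 standing in for a proper randomness extractor; the construction, names, and bias figure are illustrative assumptions, not any vendor’s design:

```python
import hashlib
import random

def biased_source(n_bits, p_one=0.6, seed=0):
    """Model an imperfect entropy source: each bit is 1 with probability 0.6."""
    rng = random.Random(seed)
    return [1 if rng.random() < p_one else 0 for _ in range(n_bits)]

def extract(bits):
    """Condense many slightly biased bits into fewer near-uniform bits.
    SHA-256 stands in here for a proper randomness extractor."""
    return hashlib.sha256(bytes(bits)).digest()

raw = biased_source(1024)   # 1,024 raw bits, biased toward 1
key = extract(raw)          # 32 bytes (256 bits) of conditioned output
```

The key point is the compression: roughly a thousand imperfect bits are spent to produce 256 bits that are close to uniform.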

In order to keep repeatability as high as possible, a stable source often must be accompanied by various error correction techniques. That can add milliseconds to the time it takes to read a PUF.
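As a minimal sketch of how helper data and error correction fit together, the toy scheme below uses a 3x repetition code with majority voting; commercial PUFs use much stronger codes, and all names here are illustrative:

```python
import random

def enroll(puf_bits):
    """Enrollment: choose a random key, expand it with a 3x repetition
    code, and store helper data = codeword XOR PUF response."""
    key = [random.randrange(2) for _ in range(len(puf_bits) // 3)]
    codeword = [b for b in key for _ in range(3)]
    helper = [c ^ p for c, p in zip(codeword, puf_bits)]
    return key, helper

def reconstruct(noisy_bits, helper):
    """Field read: XOR with the helper data, then majority-vote each
    3-bit group, correcting up to one flipped bit per group."""
    cw = [h ^ p for h, p in zip(helper, noisy_bits)]
    return [1 if sum(cw[i:i + 3]) >= 2 else 0 for i in range(0, len(cw), 3)]

puf = [random.randrange(2) for _ in range(30)]   # noiseless reference read
key, helper = enroll(puf)
noisy = puf[:]
noisy[4] ^= 1    # two single-bit read errors,
noisy[10] ^= 1   # each in a different 3-bit group
assert reconstruct(noisy, helper) == key
```

Note that the helper data can be stored in the clear: because the codeword is random, it reveals nothing about the key on its own.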

Quantum for entropy?
When it comes to good sources of randomness, it’s tempting to invoke quantum mechanics in the process. In fact, the very term evokes uncertainty and lack of determinism. It starts to sound even more important when used in the context of quantum computers and their ability to crack communications that have been secured using traditional methods.

But the vast majority of PUFs leverage manufacturing variations, not quantum phenomena, for entropy. While these may not be random in the true sense, very slight physical differences get amplified, and enough such differences can provide a sufficiently large entropy field. The benefit here is that one can get a consistent response every time the PUF is read, assuming noise is properly handled.

That said, even quantum isn’t perfect. Quantum effects may be impossible to predict precisely, but they come with probabilities. While all quantum outcomes are possible, they’re not all equally probable.

The essence of the quantum argument is that because the element is rooted in quantum mechanics, there is no way for a computer algorithm to calculate what the PUF output of a given device might be. That should make the key uncrackable.

But without access to fab analytics, there’s no way for an algorithm to determine the exact combination of manufacturing variations in a given device either. Quantum doesn’t provide an advantage. In fact, it could make it even harder to get a repeatable response within a device.

To understand how a quantum source might be tamed into repeatability, it helps to understand the notion of PUF enrollment.

Enrollment is a manufacturing step where three things can happen. First, it may be necessary to perform some physical step to activate the source of entropy, like a forming step. Second, it may be necessary to perform characterization of an individual device’s PUF in order to figure out which bits are reliable or to improve the entropy. This is common today in order to establish so-called “helper data” that’s stored with the PUF, which helps with noise elimination.

The third thing that happens at enrollment is the key can be stored in a database in the cloud somewhere so that in the future, when the device authenticates, its legitimacy can be attested.

True quantum sources of entropy can be handled by doing a fresh read of the PUF only at enrollment. The detected response then can be “hardened” or “frozen” so that it reads the right number consistently in all future reads. With this approach, the quantum phenomenon is used only once in the lifetime of the device. The only exception is that it might be possible to “refresh” or “renew” a quantum PUF throughout its life, something that’s not possible when leveraging manufacturing variations.

Examples of entropy sources
While there is at present one dominant PUF mechanism, there are several in the research and commercialization pipeline. A look at some of them can provide examples of where the entropy comes from.

Intrinsic ID uses the SRAM power-up state as a source of entropy. The idea is that each bit can power up as a 1 or a 0, and, given a big enough array, you get pretty good entropy. The mechanism that drives this entropy is manufacturing variation, because it’s a race to determine which side of the bistable bit cell will stabilize first, and subtle variations affect which side wins that race.
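A toy model makes the two roles of variation and noise concrete: each cell gets a fixed manufacturing mismatch, and each read adds small noise on top. The sign of the sum decides which side wins the power-up race. The model and its parameters are illustrative assumptions, not Intrinsic ID’s design:

```python
import random

def make_device(n_cells, seed):
    """Manufacturing variation: a fixed mismatch per cell, set at the fab."""
    rng = random.Random(seed)
    return [rng.gauss(0.0, 1.0) for _ in range(n_cells)]

def power_up(device, noise=0.1):
    """One read: fixed mismatch plus small per-read noise decides which
    side of the bistable cell wins the race; the sign gives the bit."""
    return [1 if m + random.gauss(0.0, noise) > 0 else 0 for m in device]

dev_a = make_device(256, seed=1)
dev_b = make_device(256, seed=2)
r1, r2 = power_up(dev_a), power_up(dev_a)  # same device: nearly identical
ra, rb = power_up(dev_a), power_up(dev_b)  # different devices: ~half differ
```

Because the fixed mismatch usually dwarfs the noise, reads of the same device agree on almost every bit, while two devices disagree on roughly half of them, which is exactly the combination a PUF needs.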

It’s similar with the butterfly circuit, a newer approach now available on FPGAs. It’s created out of logic rather than a memory cell, but it is also bistable, and manufacturing variations drive the entropy.


Fig. 1: A butterfly circuit as implemented in an FPGA. Source: Intrinsic ID

Tunneling current
A new entrant by Crypto Quantique leverages tunneling current. Tunneling is a quantum phenomenon widely used in memories, including flash and MRAM. The idea is that if an insulating layer is thin enough, there is a chance some electrons will tunnel through the barrier and produce a small current.


Fig. 2: Two transistors in parallel with thin gate oxides that can be used for tunneling. The differential approach uses the current difference as the output, which also helps to protect against DPA side-channel attacks. Source: Crypto Quantique

That current is exponentially sensitive to the thickness of the tunneling barrier, so minute differences in the thickness of the barrier will be amplified into big differences in current. Given a large number of such tunneling paths, this becomes a source of entropy.
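The exponential sensitivity shows up in a back-of-the-envelope calculation. The decay constant and thicknesses below are illustrative orders of magnitude, not Crypto Quantique’s figures:

```python
import math

# Simplified model: tunneling current I is proportional to exp(-2*kappa*t),
# with kappa the decay constant inside the barrier. Illustrative numbers only.
kappa = 10.0        # per nm, a typical order of magnitude for oxide barriers
t_nominal = 2.00    # nm
t_varied = 2.02     # nm: just a 1% thickness difference

ratio = math.exp(-2 * kappa * t_varied) / math.exp(-2 * kappa * t_nominal)
# ratio = exp(-0.4) ~ 0.67, so a 1% thickness change moves the current by
# roughly a third -- tiny variation amplified into an easily sensed difference.
```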

This can be confusing, since tunneling is a quantum phenomenon. But the tunneling itself isn’t the source of entropy; it’s the read mechanism. The barrier thickness drives the entropy, and that is driven by manufacturing variation — even though a quantum technique is used to read it.

Crypto Quantique claims the use of tunneling makes the device uncrackable even by quantum computers, because it’s a quantum effect. Still, no technical justification for linking two things that happen to use the word “quantum” has been made available.

This is one of several approaches that use currents to determine values. In this and several others, the currents are read in differential pairs. This makes it harder for hackers to use differential power analysis (DPA), where they try to learn whether minute changes in supply current can be correlated with ones and zeroes in the PUF readout. By doing this differentially, each high current is matched with a low current, smearing out the differences and making DPA very difficult, if not impossible.
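A sketch of why differential readout blunts DPA: whichever branch carries the high current, the total drawn from the supply is the same, so the power trace carries no bit information. The currents below are illustrative:

```python
# Illustrative branch currents, in amps.
I_HIGH, I_LOW = 1.0e-6, 0.2e-6

def read_differential(bit):
    """A stored 1 steers the high current through the left branch, a 0
    through the right. The sense amp sees the difference; a DPA attacker
    monitoring the supply sees only the total."""
    left, right = (I_HIGH, I_LOW) if bit else (I_LOW, I_HIGH)
    return left - right, left + right   # (output, supply current)

out1, sup1 = read_differential(1)
out0, sup0 = read_differential(0)
assert sup1 == sup0    # supply draw is identical for both bit values
assert out1 == -out0   # only the internal difference encodes the bit
```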

ReRAM cells
Another current-reading approach uses resistive RAM (ReRAM) cells as a source of entropy, and there are at least three of these. ReRAMs work by applying a voltage across a thin dielectric to create a filament, turning what was a high-resistance path into a low-resistance path. This process of forming the filament can be leveraged for entropy.

One approach, by Crossbar, looks at filament initiation between a pair of cells. When trying to program two cells together, one cell will start to conduct before the other one does. This difference is the basis of the output (making this approach DPA-resistant).


Fig. 3: A ReRAM cell, illustrating the filament that provides conduction after programming. Initiation of that filament is involved in the entropy. Source: Crossbar

The mechanisms that create the filament arise through the interactions of the atoms in the lattice, making it a possible quantum phenomenon.

“The switching behavior of the ReRAM is stochastic in nature, because it’s more likely a quantum-mechanical thing,” said Sung-Hyun Jo, CTO of Crossbar. “We are forming an atomic conducting filament to switch the device. That involves atomic movement, a quantum or stochastic behavior.” On its own, that would make the mechanism very unrepeatable.

This is handled by reading the PUF during enrollment to get the readout and then hard-programming that result into the ReRAM. Now the memory literally stores the code in non-volatile form.

“The good thing about ReRAM is that the on and off resistance differences are huge compared to all other memory technology,” said Jo.

As a result, Davis said, “The error rate is very much lower — enough that you can use it directly as a key.”

Using this approach makes it critical that attackers can’t get at the ReRAM contents through any means. While attacks might seem like a thing done remotely over the network, Hanna reinforced that, “It can be a local attack where the attacker has physical possession of the device. And they are trying to break in and get something out of it, such as your keys.”

Crossbar said it has tried to have someone tear down a die and determine which cells are in which state — and, so far, no one has been successful. “We gave this to a tear-down house to have them try to discern even heavily reinforced ones and zeroes, and they’re unable to do it,” said Davis.

A different research project [1] used a 3D stack of ReRAM cells, using read currents plus leakage currents as the source of entropy. Read currents are small if the cells are all in a high-resistance state, but the combination of that current with the many leakage paths that a 3D layered array provides gives them the entropy they want. These currents are a function of manufacturing variation.

The third ReRAM approach comes from yet another project by the same team [2], and it leverages a different ReRAM phenomenon. It turns out that if you program a ReRAM cell softly, the state won’t stick. It relaxes back into the original state as the nascent filament dissipates. The time it takes to do so is random.

So the team softly programs a cell, starts a timer when it flips, and counts until the cell flips back. The measured time becomes the source of entropy. This could well be a quantum effect, because it’s also related to the initiation of filament creation. That would necessitate freezing the value during enrollment.

OTP cells
Yet another research project [3] found an interesting phenomenon in one-time programmable (OTP) cells. OTP cells typically have one of two states: open or shorted (anti-fuse). But the team found another state, which they called “dielectric fuse,” or “Dfuse.” If they programmed using a certain set of conditions, the dielectric would develop a layer of ions across it (not vertically through it). This blocked the effect of the gate over that dielectric.

They found there was a boundary in the programming conditions such that, on one side, they’d get an anti-fuse, and on the other side, they’d get a Dfuse. This was a highly sensitive boundary, and they used it as the source of entropy. They attributed this source to manufacturing variation.

Chaos
Finally, there is yet another completely different approach being spun up for commercialization. It leverages chaos, for lack of a better term, for entropy. It’s like a ring-oscillator PUF on steroids.

Ring oscillators were an original example of a possible PUF, but they haven’t been commercialized. In such a circuit, signals race around the ring, causing oscillation at whatever speed the manufacturing variations dictate for that device. Notably, given a long enough ring, the states of the signals stabilize as digital ones and zeroes.

This new approach from Verilock builds a network not with inverters, but with XOR gates connected in what would appear to be a helter-skelter way. When launched, the circuits oscillate or change so quickly that most of the gates remain in the linear regime, never getting a chance to saturate fully to a one or zero. This chaotic behavior can continue indefinitely, never establishing a long-term stable state.


Fig. 4: A network that evolves chaotically, never settling into a stable state. The delay line determines when the output is sampled. “ABN” stands for “autonomous Boolean network.” Source: Verilock. Created by Noeloikeau Charlot, adapted from Fig. 1 in “Hybrid Boolean Networks as Physically Unclonable Functions,” Charlot et al.

What differentiates one device from another in this case is manufacturing variation. The challenge with such a wildly evolving system, however, is how to read a state. The team found that if it waited a certain amount of time before reading, it could get consistent results.

The starting conditions are critical to how the circuit state evolves. Those conditions can serve as a challenge, with the associated read providing the response. Given a large enough field, this gives an enormous set of challenge/response pairs.
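A challenge/response protocol around a strong PUF can be sketched as follows. The device’s physical entropy is modeled here as a secret byte string purely for illustration, and each pair is discarded after use to prevent replay:

```python
import hashlib
import os

def puf_response(device_secret, challenge):
    """Stand-in for a strong PUF: the response depends on both the device's
    physical entropy (modeled here as a secret) and the challenge."""
    return hashlib.sha256(device_secret + challenge).digest()[:4]

# Enrollment: harvest challenge/response pairs while access is trusted.
device_secret = os.urandom(16)
crp_table = {}
for _ in range(1000):
    c = os.urandom(8)
    crp_table[c] = puf_response(device_secret, c)

# Authentication: use a fresh challenge once, then discard it (anti-replay).
challenge, expected = crp_table.popitem()
assert puf_response(device_secret, challenge) == expected
```

The verifier never needs the device’s secret itself, only the table of pairs it collected at enrollment, which is why a large challenge/response space matters.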

The first usable strong PUF?
Verilock’s approach leads to a discussion of “weak” vs. “strong” PUFs. Weak PUFs have one or maybe a very few responses. To date, all commercial PUFs are weak PUFs.

Strong PUFs have a huge number of responses, and those responses correlate with specific input conditions.

The reason there have been no successful strong PUFs so far is that machine-learning algorithms have been able to identify patterns and predict responses to challenges. Those PUFs never made it to production, because there’s no doubt that attackers would try to exploit that weakness.

This is an example of the unavoidable truth that, no matter what you try in securing a system, someone is sure to try to get around it. “As soon as you fix something, somebody else will come up with something that’s crafty and [try it instead],” said Gajinder Panesar, fellow at Siemens EDA.

Verilock’s approach has the hallmarks of a strong PUF. “In a matter of a couple of seconds, you could peel off a million challenge/response pairs during enrollment,” said Jim Northrup, president and CEO of Verilock.

The question, then, is whether it also exhibits patterns in its responses. Verilock says that, so far, its chaotic approach has resisted attempts to learn more than a couple of bits of a response word. “We’ve done machine learning attacks on our PUF, and the fraction of bits that we can get matches quite closely with the difference from perfect entropy,” said Dan Gauthier, co-founder of Verilock.

“Most of the machine learning attacks say, ‘Okay, I’m going to read a million, a billion — however many response bits off of one PUF — and then try to predict response bits to unseen challenges,’” said Andrew Pomerance, chief scientist at Verilock. “But we’ve developed a framework that allows us to consider a more sophisticated adversary — one where someone is going to buy 1,000 or 10,000 devices and use data learned from those other parts to try to crack a target device.”
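The classic modeling attack can be demonstrated on a toy arbiter-style PUF, whose response is a linear threshold of the challenge and therefore learnable by a simple perceptron. This sketch models the historical weak point of strong PUFs, not Verilock’s chaotic network:

```python
import random

rng = random.Random(0)
N = 32                                    # challenge bits
w = [rng.gauss(0, 1) for _ in range(N)]   # device's hidden delay parameters

def arbiter_puf(challenge):
    """Toy arbiter-style PUF: the response is the sign of a weighted sum
    of +/-1 challenge features -- a linear model, and hence learnable."""
    feats = [1 if c else -1 for c in challenge]
    return 1 if sum(wi * f for wi, f in zip(w, feats)) > 0 else 0

def sign_predict(model, challenge):
    feats = [1 if c else -1 for c in challenge]
    return 1 if sum(m * f for m, f in zip(model, feats)) > 0 else 0

# Collect CRPs from the device, then fit a perceptron to them.
crps = [[rng.randrange(2) for _ in range(N)] for _ in range(2000)]
model = [0.0] * N
for _ in range(20):                       # training epochs
    for c in crps:
        err = arbiter_puf(c) - sign_predict(model, c)
        feats = [1 if b else -1 for b in c]
        model = [m + err * f for m, f in zip(model, feats)]

# Predict responses to challenges the model has never seen.
tests = [[rng.randrange(2) for _ in range(N)] for _ in range(500)]
accuracy = sum(sign_predict(model, c) == arbiter_puf(c) for c in tests) / 500
# accuracy typically lands well above 90% -- the PUF has been "learned"
```

A chaotic network’s challenge-to-response map has no such simple structure to fit, which is the property Verilock is counting on.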

Verilock is planning to release its “attack” to the community to allow others to try to crack the PUF. If the PUF resists those attacks, this could be the first usable strong PUF.


Fig. 5: A summary of different PUF technologies, their sources of entropy, whether that’s driven by process variation or quantum, and the type of PUF. Source: Bryon Moyer/Semiconductor Engineering

PUFs are coming into their own
It has taken PUFs a couple of decades to become commercially successful. With new PUFs coming onto the scene, it may be tempting to think it also will take decades to prove their worth. That may not be the case, however.

For one thing, a big part of that delay was the lack of early demand. Demand is much stronger now, given the need for security in any device that connects to the internet.

Beyond that is the storied skepticism of the security community. New ideas get vetted hard for a long time before everyone agrees that the ideas are as good as they sound. How much vetting is needed depends on what a security device purports to do. New encryption algorithms, for example, face extraordinary scrutiny.

In the case of PUFs, however, if all you’re trying to do is to generate an ID or a key, most of what you’re trying to convince the community of is that you’ve got a good source of entropy. And NIST has a series of tests for confirming that. Crossbar, for example, says it already passed those tests.

Other characteristics matter too, notably circuit size (and cost) and power. As these new devices face a tough crowd, it’s likely some will make it through. But it might not take decades to do so.

— Ed Sperling contributed to this report.

References
[1] “A Machine-Learning-Resistant 3D PUF with 8-layer Stacking Vertical ReRAM and 0.014% Bit Error Rate Using In-Cell Stabilization Scheme for IoT Security Applications,” Yang et al., Zhejiang Lab, Institute of Microelectronics of the Chinese Academy of Sciences, The Number 5 Electronics Research Institute of the Ministry of Industry and Information Technology, Fudan University, IEDM 2020.
[2] “A Novel PUF Using Stochastic Short-Term Memory Time of Oxide-Based ReRAM for Embedded Applications,” Yang et al., Zhejiang Lab, Institute of Microelectronics of the Chinese Academy of Sciences, The Number 5 Electronics Research Institute of the Ministry of Industry and Information Technology, Fudan University, IEDM 2020.
[3] “A Novel Complementary Architecture of One-time-programmable Memory and Its Applications as Physical Unclonable Function (PUF) and One-time Password,” Wang et al., National Chiao Tung University, National Central University, UMC, IEDM 2020.



