Testing Chips For Security

Thinking like a hacker is critical as designs become more heterogeneous and domain-specific.


Supply chains and manufacturing processes are becoming increasingly diverse, making it much harder to validate the security in complex chips. To make matters worse, it can be challenging to justify the time and expense to do so, and there’s little agreement on the ideal metrics and processes involved.

Still, this is particularly important as chip architectures evolve from a single chip developed by one vendor to a collection of chips in a package from multiple vendors. The ability to identify security risks early in the design flow can save time, effort, and money on the back end of the flow. And in theory, this should be the same as any other test or debug process. But hardware quality, reliability, and security have very different track records in terms of testing.

“We’ve done tests for quality for the past 50 years,” said Mark Tehranipoor, chair of the Department of Electrical and Computer Engineering (ECE) at the University of Florida. “We’ve done tests for reliability for the past 25 years. And now we’re talking about tests for security.”

One of the key challenges on the security side is clarifying exactly what you’re testing for. While a chip may have been manufactured to detailed specifications, its security has to be assessed through the mindset of a smart and determined attacker rather than through predictable metrics.

“You have to think about the intelligence of an adversary,” he said. “We can model a defect, but it’s extremely hard to model an intent. That’s where security becomes difficult to test for.”

No matter how many best practices are applied during the design phase, the real world often presents security challenges that are hard to anticipate. “Once you’re out in the field, all bets are off,” said Adam Cron, distinguished architect at Synopsys. “You’re at the whims of any hacker around the world and what his best practices are, and what the new thing is coming down the line.”

Tehranipoor and Cron are two of the authors of a recent paper examining these issues, “Quantifiable Assurance: From IPs to Platforms.” The paper lists more than 20 different metrics for different aspects of security, which points to the complexity of the challenge. “Generally, measuring security is still at a nascent place,” Cron said. “All companies are just getting off the ground from a measurement standpoint for security in particular.”

Finding consensus around those metrics won’t be easy. “Over the past several years, the [hardware security] community has been talking about developing metrics, but we’re not quite there yet, and we’re not going to be there any time soon,” Tehranipoor said. “Why? Because by the time we figure out a good metric for some attacks, a new attack comes in, and we’re behind again.”

Agreeing on metrics
Just about any security metric could be rendered irrelevant by the right attack. A hypothetical device certified for high security could be breached by a highly intelligent adversary who happened to spot something others hadn’t. “If we all missed looking at it from the angle that this particular attacker looked at it – we thought it was a really good, secure device, but this guy showed up and he looked at it in a certain way that we all missed – then suddenly, there’s an easy attack,” Tehranipoor said.

Texplained CTO and founder Olivier Thomas said any testing of IC security has to consider three classes of attacks – non-invasive, semi-invasive, and fully invasive. But testing for the latter often falls short. “Testing the first two classes is usually done pretty well, as this does not require too much equipment, resources, and time. But when it comes to the fully invasive class, the evaluations, if conducted, are really far from what pirates or organized groups are capable of,” he said.

Moreover, the wide range of ways any chip can be attacked presents a fundamental challenge. “I have to think about whether this chip is going to leak information through the pin, through the software, through the firmware, through the JTAG, or can it give it to me through power, or EM, or timing, or optical, or laser,” said Tehranipoor. “There are so many ways, and there is no single metric for all of them, because laser is fundamentally different than power, power is fundamentally different than EM. So how many different metrics are we going to have?”

Still, Cron said, there is a real push for a ratings system for consumer products as well as high-security solutions. This is akin to a UL listing for consumer appliances, likely with a date stamp to indicate how up to date a chip’s security is. “You’ll know that, at the time it was checked, you achieved a certain level, and there won’t be infinite levels,” he said. “But if you buy that same product and it’s been sitting on the shelf for two years, you have to ask yourself, ‘Is it still good?’”

A range of approaches
In the meantime, there are several ways to get a sense of a chip’s security. One involves Joint Interpretation Library (JIL) scoring. “JIL scoring tells you how ‘expensive’ it is to initially figure out an attack (identification), and how expensive it is to subsequently do the attack (exploitation),” said Maarten Bron, managing director at Riscure. “This method was initially developed for expressing the security of smart cards (bank cards, public transit cards, SIM cards), and has recently gained wider traction in the domain of MCUs and SoCs.”
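The JIL approach works by assigning points to cost factors such as elapsed time, expertise, knowledge of the target, access, and equipment, separately for the identification and exploitation phases Bron describes, then mapping the sum to a resistance rating. The sketch below illustrates that flow; the point values and rating bands here are illustrative placeholders, not the official JIL tables.

```python
# Illustrative sketch of JIL-style attack-potential scoring. The per-factor
# point values and rating thresholds below are hypothetical; the real tables
# are defined in the JIL attack-potential documents and differ per factor.

# Points are assessed separately for the identification phase (figuring the
# attack out) and the exploitation phase (repeating it), then summed.
FACTORS = ("elapsed_time", "expertise", "knowledge", "access", "equipment")

def jil_total(identification: dict, exploitation: dict) -> int:
    """Sum per-factor points across both phases."""
    return (sum(identification[f] for f in FACTORS) +
            sum(exploitation[f] for f in FACTORS))

def jil_rating(total: int) -> str:
    """Map a point total to a resistance rating (illustrative bands)."""
    if total < 16:
        return "No rating"
    if total < 21:
        return "Basic"
    if total < 25:
        return "Enhanced-Basic"
    if total < 31:
        return "Moderate"
    return "High"

# Example: an attack that is expensive to identify but cheap to exploit.
ident = {"elapsed_time": 5, "expertise": 6, "knowledge": 4,
         "access": 4, "equipment": 7}
exploit = {"elapsed_time": 2, "expertise": 2, "knowledge": 0,
           "access": 2, "equipment": 3}
total = jil_total(ident, exploit)
print(total, jil_rating(total))  # 35 High
```

The split between identification and exploitation captures Bron’s point: an attack that is costly to discover but cheap to repeat at scale scores very differently from one that is expensive every time.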

Cron noted that NIST has standards for certifying encryption or cryptographic IP, and Synopsys’ RTL Architect can look at differential power analysis. “But there’s still no metric per se,” he said. “Those tools are giving you areas where you should look. But whether or not you do look, or whether or not, while you look, you detect the thing that the hacker is going to be looking at, as well, who’s to say?”
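The differential power analysis Cron mentions works by correlating measured power traces against predicted leakage for every possible key guess; the correct guess correlates best. The toy sketch below demonstrates the principle on synthetic traces, using a random substitution box rather than a real cipher and a Hamming-weight leakage model.

```python
# Minimal correlation power analysis (CPA) sketch on synthetic traces: the
# simulated device "leaks" the Hamming weight of SBOX[plaintext ^ key] plus
# noise, and the key byte is recovered by correlating key hypotheses against
# the measurements. The S-box is a toy permutation, not a real cipher's.
import random

random.seed(1)
SBOX = list(range(256))
random.shuffle(SBOX)              # toy substitution box

def hw(x: int) -> int:            # Hamming-weight leakage model
    return bin(x).count("1")

def pearson(xs, ys) -> float:
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

SECRET_KEY = 0x3C
plaintexts = [random.randrange(256) for _ in range(500)]
# One leakage sample per trace: HW of the S-box output plus Gaussian noise.
traces = [hw(SBOX[p ^ SECRET_KEY]) + random.gauss(0, 0.5) for p in plaintexts]

# For every key guess, correlate predicted leakage with the traces; the
# correct guess produces by far the strongest correlation.
best = max(range(256),
           key=lambda k: abs(pearson([hw(SBOX[p ^ k]) for p in plaintexts],
                                     traces)))
print(hex(best))  # 0x3c
```

Pre-silicon tools run essentially this correlation against simulated power numbers instead of measured ones, which is why leakage can be flagged before tape-out.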

Scott Best, senior director of security products at Rambus, said that while each individual chip manufacturer approaches this with the best of intentions, they all do things differently. “There is no one standard adopted for industrywide practice commercially,” he said. “In U.S. Defense, there are some early-stage guidelines coming together for Microelectronic Quantified Assurance (MQA), for example as part of the RAMP program.”

The diverse range of evaluation methods in use today, according to Texplained’s Thomas, includes analysis by researchers who publish at conferences and online, independent analysis requested by OEMs and integrators seeking more information than that offered by vendors, and Common Criteria security evaluations that only focus on some types of attacks “and therefore are not fully exhaustive.”

But that hasn’t diminished the need for this kind of standardization. “There are two broad key business drivers here,” said Jason Oberg, CTO at Cycuity. “One is obviously standardization. It’s very clear that if you can check a box, someone is more likely to buy. If you sell in a certain market, and if you have to do it, that’s what standardization can help drive. The other component is really driven by customer demand for, ‘I want a secure product.’ Or maybe they’ve had that crisis where it actually happened to them. And if you think about defining the systematic process when you have security requirements that are defined up front, part of those security requirements are actually driven by standards.”

Shifting left
This is why there is increasing focus on security earlier in the design flow. As with any other portion of the chip design process, testing as early as possible improves efficiency and minimizes cost.

“Catch the security problem earlier and it’s going to cost you 10 times less,” Tehranipoor said. “Go left. Don’t do it post-silicon if you can do it at layout level. Don’t do it at the layout level if you can do it at gate level. Don’t do it at gate level if you can do it at the RTL level.”

Performing security validation pre-silicon allows for far faster remediation of any issues that may come up, and is increasingly becoming an expected part of the process. “At some point, having a simulation-driven pre-silicon security signoff process will become table stakes for makers of security chips,” said Riscure’s Bron. “Put differently, I can see this becoming a competitive disadvantage for those companies that don’t.”

Lang Lin, Ansys principal product manager, noted that testing virtually has other benefits, as well. “In simulation, you don’t have the noisy environment faced by the post-silicon chip,” he said. “You’re living in a digital world, a virtual world, so you can clearly see where the leakage path is from simulation, which might not be that clear in silicon.”

However, it’s crucial to keep in mind that from a security point of view, a design and its implementation are two very different things. “A cryptographic algorithm can be secure on paper (the ‘design’), yet the implementation of it can give rise to side-channel leakage that renders the overall product insecure,” Bron said. “What I like about the notion of pre-silicon security is that it allows developers to design security into their design, design vulnerabilities out of their design, and to see how this security carries over into the implementation of the design.”
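A classic illustration of Bron’s design-versus-implementation gap is a comparison routine that is logically correct but exits early on the first mismatch: the algorithm is sound on paper, yet response timing leaks the secret byte by byte. The sketch below (with a made-up 4-byte secret, and loop iterations standing in for wall-clock time) shows the leak and the constant-time fix.

```python
# Sketch of a "secure on paper" design that leaks in implementation: an
# early-exit comparison runs longer the more prefix bytes match, so timing
# reveals the secret one byte at a time. Iteration count models timing.

SECRET = b"\x91\x42\x07\xaa"      # hypothetical secret MAC value

def leaky_compare(guess: bytes):
    """Early-exit comparison; returns (match, iterations). The iteration
    count is the timing side channel."""
    steps = 0
    for g, s in zip(guess, SECRET):
        steps += 1
        if g != s:
            return False, steps
    return len(guess) == len(SECRET), steps

def constant_time_compare(guess: bytes) -> bool:
    """Fix: examine every byte regardless of where mismatches occur."""
    if len(guess) != len(SECRET):
        return False
    diff = 0
    for g, s in zip(guess, SECRET):
        diff |= g ^ s
    return diff == 0

# Attacker recovers the secret byte by byte: a longer observed "time" (or an
# outright success) means the candidate byte is correct.
recovered = b""
for pos in range(len(SECRET)):
    def observe(b):
        guess = recovered + bytes([b]) + b"\x00" * (len(SECRET) - pos - 1)
        ok, steps = leaky_compare(guess)
        return (ok, steps)        # success beats any timing; else longer wins
    recovered += bytes([max(range(256), key=observe)])
print(recovered == SECRET)  # True
```

In production code the constant-time variant (or a library equivalent such as Python’s `hmac.compare_digest`) removes the timing signal; the point of pre-silicon and pre-release analysis is to catch exactly this class of implementation leak before it ships.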

Anticipating complex environments
Complexity always has been a big challenge for security experts, and it becomes even more difficult to safeguard a chip if it’s being included in a heterogeneous design with multiple components that are not developed by the same company. That makes testing all the more important, and how that testing is done can make a big difference.

Chips ideally should be tested in a worst-case scenario, with all countermeasures disabled — “without redundancy, without security measures and so on, so the chip is operated in the worst condition for security and we see the pure hardware security features,” said Peter Laackmann, distinguished engineer for the Connected Secure Systems (CSS) Division at Infineon. “This means if you bring the chip into another environment, then the situation should not get worse.”

Still, complex environments can introduce vulnerabilities in other ways. For example, consider a crypto wallet that’s breached despite the presence of a security chip, because that security chip happens to be controlled by a standard microcontroller. “With electrical glitches on the standard microcontroller, hardware wallets were successfully broken, although they have security chips certified according to Common Criteria inside which were not harmed at all,” Laackmann said.

Robert Ruiz, director of product marketing at Synopsys, said utilizing PCIe or USB ports to test for defects can introduce vulnerabilities as well. “That technique itself kind of opens up the chip, if not the whole system, to hack, because you’re basically giving hackers entry points into the system through a standard plug-in port… so these new techniques, they’re improving efficiency on design and manufacturing, yet they may actually open up the door a bit,” he said.

Ongoing validation
Testing chips, both on their own and as part of a package, is essential. “A die should always be tested in isolation first, and chip makers are doing this,” said Bron. “The testing of all components ‘together’ is what evaluation methods like Common Criteria tackle very well, and we see chipmakers that understand these evaluation processes well enough to be able to derive benefit from this during chip design/package design.”

At the same time, nothing done pre-silicon eliminates the need for validation after the fact. “You would not just build a vehicle without computing the needed functions and the needed safety, and just testing it afterwards,” Laackmann said. “So testing is always mandatory for hardware, and for software, and also in combination. But you can spare some time, and make your results more reliable if you have pre-silicon tests in advance.”

Testing engineering samples or final commercial samples can offer significant benefits, even if it’s too late to fix some potential issues. “Some security vulnerabilities that are discovered this way can be (partly) mitigated in firmware still,” Bron said. “Others cannot, and these are typically learning opportunities to make the next generation of chips more secure.”

Increasing demand for security
Interest in security is trending upward, driven by customers with greater security concerns for everything from smart cards to automotive applications. “The priority is based on the application,” Ansys’ Lin said. “For applications with secret data, with confidential data, of course, security is prioritized higher than the other metrics.”

In the future, it will be possible to test for specific security concerns that are most important for a specific application or user. “You’ll say, ‘My application is this,’” Tehranipoor said. “And the tool automatically will be intelligent enough to say, ‘Okay, got it – I’m going to choose x, y, and z, I’m going to optimize it for you for that – and I’m going to give you a report based on that optimization.’”

That kind of specificity is essential for security, which can’t be pinned down to a straightforward, universal metric. “And we’re going to get there,” he said. “We’re not there yet. But we will get there.”

