Chip Backdoors: Assessing the Threat

Steps are being taken to minimize problems, but they will take years to implement.


In 2018, Bloomberg Businessweek made an explosive claim: Chinese spies had implanted backdoors in motherboards used by some high-profile customers, including the U.S. Department of Defense. All of those customers issued strongly worded denials.

Most reports of hardware backdoors have ended up in exchanges like these. There are allegations and counter-allegations about specifics. But as hardware is increasingly used in safety-critical and mission-critical applications, and as the attack surface widens from software to the underlying hardware and its global supply chain, the threat of backdoors is being taken much more seriously than in the past.

Surveying the landscape
So how serious is this issue? John Hallman, product manager for trust and security at Siemens EDA, said the fact that hardware backdoors aren’t being discussed publicly on a regular basis doesn’t mean they don’t exist.

“The hardware community is much more tight-lipped than the software community, so in an open forum, there really are no documented cases where somebody is maliciously implanting a backdoor that could be exploited. That’s the party line, and that’s where we are today in hardware,” Hallman said. “With what has been done in the research community, though, it is very possible.”

One example is A2: Analog Malicious Hardware [PDF], a 2016 paper by researchers at the University of Michigan that describes a particularly stealthy hardware attack leveraging as little as one gate. Matthew Hicks, one of the report’s authors, is now an associate professor at Virginia Tech. He said the hardware community’s reluctance to discuss these issues is understandable, because flaws like these can be both unpatchable and powerful.

“You can have otherwise perfect software,” said Hicks. “But if the hardware is vulnerable, the software now has a vulnerability in it.”

To some degree, the difference between a backdoor and a design flaw comes down to semantics. There’s always the question of whether it’s a backdoor, a bug, or a design choice. As a result, major vulnerabilities like Spectre and Meltdown could be perceived in different ways, because they take advantage of speculative execution and branch prediction, which until recently were considered good techniques for improving performance.

“One could argue these were unforeseen consequences of design choices made at some point in the past,” said Maarten Bron, managing director of Riscure. “Others may insist these are in fact backdoors.”

Waves of insecurity
Vulnerabilities like Spectre and Meltdown are just the first wave in a logical progression of hardware insecurity, with the design phase as the ultimate target.

“The design stage is very akin to software,” said Hicks. “It’s a lot easier to insert a hardware Trojan in an IP block or some kind of hardware description code than it is at the foundry level. So that’s where I would think you’d see the next wave of attack. And that would eventually go to the untrusted foundry problem, where it’s harder to insert a stealthy and controllable Trojan.”

This is especially worrisome with the advent of a commercial chiplet marketplace, because it will be harder to trace the origin of all chiplets used in a heterogeneous design. Moreover, as one layer in a design is increasingly secured, attackers logically will go to the next layer down.

“If you want to beat a security system, you go to the fundamentals that that security system is based upon, and you violate one of those fundamentals,” Hicks said.

That process naturally leads from software to hardware – and even to the analog aspects of otherwise digital hardware, the subject of some unsettling experiments in Hicks’ lab. “We have hardware Trojans in my lab that are different for different systems, because different systems have their own analog footprints,” Hicks said. “We harness that variability in analog fingerprints to create a different set of events that need to happen to trigger this Trojan. So I could say, ‘I think I have a hardware Trojan on my system. Can you set it off and verify it?’ And when you try to verify it in your system, because your system’s different in the analog domain, it won’t trigger the Trojan on your system even though you have it. That’s a dimension of stealth that has never been seen before.”

That degree of stealth can make hardware threats particularly attractive to hackers. “These are the types of things that you don’t really see in software,” he said. “There’s a lot of value from an attacker’s perspective in getting a hardware Trojan out there.”

Leveraging debug mode
Still, most backdoors at this point are likely far more straightforward. A chip’s operating modes can introduce backdoors on their own. “There’s a manufacturing mode where everything is open, and there’s a production mode where everything’s locked down,” said Cycuity co-founder and CTO Jason Oberg. “Sometimes this is configurable with software, sometimes it’s configurable with blowing eFuses, and so on. If an adversary — whether it’s a nation state or not — is aware of that capability and they can re-enable that, it can introduce a backdoor.”
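
The pattern Oberg describes can be illustrated with a small firmware-style sketch. Every register name, value, and encoding below is hypothetical rather than taken from any real chip; the point is simply that a single lifecycle setting often separates “everything open” from “everything locked down,” and becomes a backdoor if an adversary can flip it back after provisioning.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Stand-ins for a lifecycle register and a debug-control register. On real
 * silicon these would be eFuse values or memory-mapped registers; every name
 * and encoding here is hypothetical. */
static uint32_t LIFECYCLE_REG = 0x5A5A5A5Au;  /* set once at provisioning */
static uint32_t DEBUG_CTRL_REG = 0;

#define LC_MANUFACTURING 0xA5A5A5A5u  /* test mode: everything open */
#define LC_PRODUCTION    0x5A5A5A5Au  /* production mode: locked down */
#define DEBUG_ENABLE     0x1u

/* Called early at boot, before any debug or test interface is brought up. */
static bool debug_allowed(void)
{
    /* This check becomes a backdoor if an adversary can flip the lifecycle
     * state back after provisioning, e.g. if it lives in rewritable flash
     * instead of a blown eFuse, or if a hidden command restores test mode. */
    return LIFECYCLE_REG == LC_MANUFACTURING;
}

int main(void)
{
    if (debug_allowed()) {
        DEBUG_CTRL_REG |= DEBUG_ENABLE;   /* factory test: open JTAG/scan */
    } else {
        DEBUG_CTRL_REG &= ~DEBUG_ENABLE;  /* fielded parts: keep it closed */
    }
    printf("debug %s\n", (DEBUG_CTRL_REG & DEBUG_ENABLE) ? "enabled" : "disabled");
    return 0;
}
```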

Scott Best, director of anti-tamper security technology at Rambus, pointed to Christopher Tarnovsky’s 2010 demonstration of a smartcard security chip being glitched into an insecure state as an example of this kind of exploit, bypassing a chip’s security mechanisms by transitioning it from its secure mission mode to an insecure mode. “Once in this state, some of the provisioned data that was supposed to remain secret (e.g., key material) can be recovered by an adversary,” he said.
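
One common way to harden against the kind of mode-transition glitch Best describes is to avoid single-point checks. The sketch below, again with hypothetical names and encodings, shows the usual pattern: security states get encodings that differ in many bit positions, comparisons are performed redundantly, and any unexpected value is treated as tampering rather than as a default-open state.

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Stand-in for a security-state register; the encodings are hypothetical.
 * The two valid values differ in many bit positions, so a single flipped
 * bit cannot silently turn "secure" into "open". */
static volatile uint32_t SECURE_STATE_REG = 0x3CA5C35Au;

#define STATE_MISSION_SECURE 0x3CA5C35Au
#define STATE_OPEN_DEBUG     0xC35A3CA5u

static void fault_detected(void)
{
    /* In real firmware: wipe key material, log the event, reset or halt. */
    fprintf(stderr, "tamper suspected\n");
    exit(EXIT_FAILURE);
}

static void check_mode(void)
{
    uint32_t state = SECURE_STATE_REG;

    if (state == STATE_MISSION_SECURE) {
        /* Redundant re-read and re-compare: a glitch that corrupted the
         * first read or skipped the first branch must also defeat this one. */
        if (SECURE_STATE_REG != STATE_MISSION_SECURE) {
            fault_detected();
        }
        printf("mission mode: secrets stay locked\n");
        return;
    }

    if (state == STATE_OPEN_DEBUG) {
        printf("debug mode: test access open\n");  /* only via the exact encoding */
        return;
    }

    /* Any other value is treated as tampering, never as "default open". */
    fault_detected();
}

int main(void)
{
    check_mode();
    return 0;
}
```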

And with chips and chiplets constantly increasing in complexity, Best said, these issues are only becoming more challenging. “The more complicated a chip is, the more difficult it is to get everything working correctly without a substantial number of different debug modes,” he said. “As for chiplets, a system-in-package that relies on the heterogeneous integration of multiple chiplets from different vendors is, to a large degree, going to have the overall security of the most insecure chiplet.”

Securing the supply chain
Concerns like those helped drive passage of the CHIPS Act of 2022, which aims to bring semiconductor manufacturing onshore. Still, Hicks said, it’s going to take a while to change the landscape. “There’s still going to be a half decade at least of lag time between when the funds get appropriated and we actually start producing chips out the front door,” he said. “So we will be weak, or subject to the untrusted foundry problem, for the foreseeable future.”

What’s more, keeping manufacturing within the United States isn’t the panacea it might appear to be. “There are spies, and there are ways to leverage people,” Hicks said. “We’ve had all these problems in the past — and currently, with actors being sponsored by China — so it would be unrealistic to assume that that wouldn’t apply to an onshore foundry.”

Regardless of where a chip is manufactured, third-party IP also can present a challenge. “Within the semiconductor ecosystem, they’re licensing IP from third parties, integrating it in – and as it stands today, there’s not a lot of transparency about the security of that IP,” Oberg said, noting that Accellera’s Security Annotation for Electronic Design Integration (SA-EDI) standard [PDF] aims to address that concern by enabling IP vendors to communicate the security of their IP to chip integrators.

Audits and analysis
The Common Weakness Enumeration (CWE) framework’s addition of support for hardware vulnerabilities in 2020 has helped manufacturers anticipate some of these threats. Several CWEs now address the scenario described above, in which an attacker changes a chip’s operating mode.

“You can only do as much as you can, and if an adversary is really, really good, and they know the design, and they know what weaknesses are being verified, they can try to navigate around that,” Oberg said. “But that’s a pretty methodical and measurable approach to try to combat something like this.”

Various methods of comparison also can help to combat these threats. “We look at, in the counterfeit world, comparing chips from the same lot, making sure that functions are within some tolerance of each other,” Hallman said. “We can do that at a logic level and look at behaviors of chips. Are they behaving the same? Or are there some anomalies that we might be able to detect within those types of comparisons?”
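
A toy version of that same-lot comparison is sketched below. The measurement values, lot size, and 5% tolerance are all illustrative assumptions, not anything Hallman’s team specified; the idea is simply to flag a device whose measured behavior falls outside the envelope established by its lot mates.

```c
#include <math.h>
#include <stdio.h>

#define NUM_TESTS 4   /* measured behaviors per chip (illustrative) */
#define LOT_SIZE  3   /* chips from the same lot used as the reference */

/* Returns 1 if any measurement deviates from the lot average by more than
 * `tolerance` (a relative fraction), 0 otherwise. */
static int is_anomalous(double lot[LOT_SIZE][NUM_TESTS],
                        double chip[NUM_TESTS], double tolerance)
{
    for (int t = 0; t < NUM_TESTS; t++) {
        double ref = 0.0;
        for (int c = 0; c < LOT_SIZE; c++) {
            ref += lot[c][t];
        }
        ref /= LOT_SIZE;

        if (fabs(chip[t] - ref) > tolerance * fabs(ref)) {
            return 1;  /* behavior outside the expected envelope */
        }
    }
    return 0;
}

int main(void)
{
    /* Illustrative numbers only, e.g. current draw, timing, and power per test. */
    double lot[LOT_SIZE][NUM_TESTS] = {
        { 12.1, 3.4, 0.92, 105.0 },
        { 11.9, 3.5, 0.91, 104.2 },
        { 12.0, 3.4, 0.93, 105.5 },
    };
    double suspect[NUM_TESTS] = { 12.0, 3.4, 1.40, 104.8 };  /* one outlier */

    printf("suspect chip anomalous: %s\n",
           is_anomalous(lot, suspect, 0.05) ? "yes" : "no");
    return 0;
}
```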

Still, Hallman said it’s challenging to scale that kind of analysis for the complexity of today’s SoCs. “But do you really need that whole SoC analyzed, or is there just a portion that you can analyze? That’s where I think you can start breaking down that problem from the top level, pushing down those functions in the chip, and isolating. Where is it that this is really a threat to me? And what analysis methods can I apply at those lower levels?”

It also means putting in mechanisms to authenticate that analysis. “Having audit trails is good. Having the documentation of data that supports the results of that audit is better,” Hallman said. “And then, having access to the data beyond that — not to violate IP rights of that data, but to be able to give the end user the confidence that what was produced with that data is indeed what they were expecting — that’s a process that’s not yet in place as far as electronics goes. But I believe it’s a direction in which this industry is headed.”

Working toward zero trust
The holy grail for this type of problem is to be able to provide formal proof. “If I want to know that a specific register is not accessible, and I can run a proof that logically shows there’s no access – no physical way or no logical way to interrupt that register operation – that’s the confidence that I believe a lot of our customers are looking for,” Hallman said.
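
In formal verification flows, that kind of guarantee is typically written as a property over the design’s signals and handed to a proof engine, which either shows the property holds in every reachable state or produces a counterexample trace. As a purely illustrative example with hypothetical signal names, the requirement “the key register can never be read over the debug interface once the part is in production mode” might be stated as:

\[
\mathbf{G}\,\bigl(\mathit{lifecycle} = \mathrm{PRODUCTION} \;\rightarrow\; \neg\,\mathit{dbg\_read}(\mathit{key\_reg})\bigr)
\]

where \(\mathbf{G}\) means “in every clock cycle.”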

Hallman was one of the authors of a 2021 paper [PDF] for the National Defense Industrial Association (NDIA) on applying zero-trust principles to hardware. “Right now, though, it’s just too much work to implement that type of thoroughness at the hardware,” he said. “We really need to focus the verification efforts at where the threats are.”


Fig. 1: Microelectronics threat landscape. Source: NDIA

In the interim, that means clarifying priorities and focusing on what’s most vulnerable. “There has to be some type of filtering, at a system level, of, what are my critical components, what are the threats I need to be worried about, what efforts do I need to mitigate against those particular threats? We’re not able, at least at this point, to do that thorough, zero trust type model – but there are at least visions of how that could be implemented in an IC build, from RTL on up through fabrication of a die and even into packaging,” Hallman said.

A growing awareness
Ultimately, Hicks said, it’s encouraging to see how the industry as a whole is finally paying attention to these issues. When he first went out into the job market and shared his work on chip security, manufacturers simply weren’t interested. “I basically got laughed out of the room, saying, ‘Why would anyone do this?’ And now these big companies are actually taking this problem seriously, and are interested in partnering with my lab and other labs around the country to solve these problems.”

The past few years have seen a significant shift in both awareness and action. “In 2016, the government really started to panic about this,” he said. “They started to realize the threat is real, and that has only continued to build momentum. And that’s why we see this current push towards onshoring. The community of people that care about hardware security is growing rapidly.”

Oberg said semiconductor companies have historically been focused on things other than security – the next generation of chip, time to market, revenue – but that’s changing fast. “If you don’t make security a first-class concern across your process, what’s going to happen is you’re going to build things quickly, and you’re going to get hit way down the line – and it’s going to be really hard to recover from it,” he said. “It’s hard to fix these things later.”
