Can The Hardware Supply Chain Remain Secure?

The growing number of threats is cause for concern, but is it really possible to slip malicious code into a chip?


Malware in computers has been a reality since the 1990s, but lately the focus has shifted to hardware. So far, the semiconductor industry has been lucky because well-publicized threats were either limited or unproven. But sooner or later, luck runs out.

Last year saw two significant incidents that shook people’s faith in the integrity of hardware security. The first was the disclosure of the Meltdown and Spectre flaws, which affected x86 and Arm processors to varying degrees. Intel was hit hardest because of its heavy use of speculative execution; Arm less so, with different Arm SoCs showing varying levels of exposure.

Software fixes were issued, at the cost of performance. And with its recently introduced Xeon Scalable processors, Intel has fixed the problem in silicon.

The second was an October story by Bloomberg alleging that motherboards sold by U.S. hardware maker Super Micro, and manufactured in Taiwan and China, had tiny chips embedded in them to steal information from the computers they were in. Three months later, a third-party investigator said it found no evidence of spy chips on the motherboards, and Super Micro seemed to weather the storm well. But the story touched on a fear that is often alluded to but rarely discussed at length—high-tech Chinese spying.

It even got to the point where OEMs feared spying through power cables, and one Taiwanese company went so far as to build a factory in Taiwan to make power components, including cables. That reflects how much Taiwan is trusted, and how much China is not.

But is it all for real? Are there legitimate reasons to be concerned about malicious code or spyware being slipped into not just motherboards, but actual semiconductors? As chips go into packages and licensed IP from multiple sources is assembled into those packages, heterogeneous integration can mean you don’t always know where the IP came from. With so much cross-licensing and system-on-chip (SoC) design, no one can claim a totally in-house design.

So would it be possible to slip spyware code into a chip and not have anyone notice? The collective opinion, at least today, is that it’s not very likely.

“The government has been very concerned with hardware Trojans that can eavesdrop on data passing through the chip. It’s a worst-case scenario for defense apps,” said Ben Levine, product manager for hardware security at Rambus. “To my knowledge, there has not been one hardware Trojan found in the field.”

He’s not alone in that assessment. “I haven’t heard of anybody trying to sneak code into a design,” said Ranjit Adhikary, vice president of marketing for ClioSoft, which specializes in tools for collaborative SoC design. “I’ve definitely heard of people trying to copy the code, but with adequate security we know who’s copied it. If you don’t have access to the IP and keep trying to download it, we will know that.”

Likely Exposed During Test
In building an SoC, the primary vendor typically is in charge of the module, even though it may include memory from Samsung and some kind of I/O from Qualcomm or Broadcom. The primary vendor still does the design, the testing and everything else.

Inserting malicious code is inherently difficult, particularly in hardware. Designs are tested rigorously in simulation before tape-out, so even if someone managed to sneak in rogue circuitry, it would show up in the silicon test stage when the chips come back from the fab.

“Before you even go to manufacturing, you’ve got your test plan and simulations. To interject silicon at that point and not be visible during the testing process would be kind of surprising,” said Jim McGregor, president of Tirias Research.

He noted that chips are tested thoroughly in simulation long before they go to manufacturing, because correcting errors with new mask sets is so expensive. “You’re talking tens or hundreds of millions of dollars if you screw something up,” said McGregor.
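To make that concrete, here is a minimal sketch, in Python rather than an actual verification flow, of the kind of regression pre-silicon simulation performs: a trusted golden model is compared against the implementation under test across large numbers of stimulus vectors, so any functional change, including a deliberately hidden one, surfaces as a mismatch. The models, trigger values, and trial count below are all hypothetical.

```python
import random

def golden_alu(a: int, b: int, op: int) -> int:
    """Trusted reference model: 8-bit add (op=0) or XOR (op=1)."""
    return (a + b) & 0xFF if op == 0 else (a ^ b) & 0xFF

def suspect_alu(a: int, b: int, op: int) -> int:
    """Implementation under test, with a hidden Trojan trigger inserted."""
    if a == 0xDE and b == 0xAD:          # rare two-input trigger condition
        return 0x42                      # malicious payload output
    return (a + b) & 0xFF if op == 0 else (a ^ b) & 0xFF

def run_regression(trials: int = 1_000_000) -> list:
    """Drive random vectors through both models and collect mismatches."""
    mismatches = []
    for _ in range(trials):
        a = random.randrange(256)
        b = random.randrange(256)
        op = random.randrange(2)
        if golden_alu(a, b, op) != suspect_alu(a, b, op):
            mismatches.append((a, b, op))
    return mismatches

print(f"mismatches found: {len(run_regression())}")
```

Even this crude random regression would hit the one-in-65,536 trigger above roughly 15 times in a million trials. A far rarer trigger could still slip past purely random testing, which is why directed tests and formal equivalence checks matter (a point also raised in the comments below).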

Moreover, there are so many people involved in the semiconductor process along the way that it’s nearly impossible for one rogue employee to get away with inserting malicious code. “You’d have to have a very collusive group to get past design, test, and system design to get it to work. It would have to be a high state of collusion to get it through the security process,” he said.

So while fears of back doors are well publicized, it’s extremely difficult to add extra circuitry to a design. “There is a lot of quality control in place to make sure things are not inadvertently changed,” said Rambus’ Levine. “So if you are trying to insert a Trojan, you would have to subvert the QA process as well.”

Finding a way to insert a Trojan into a device would require an in-depth understanding of the supply chain. “We’re constantly monitoring and stress testing the chip [throughout development],” said Brent Wilson, senior vice president of the global supply chain for ON Semiconductor. “We would see malicious code in test if there was a resistivity or logic change. If something gets inserted, it will impact parameters. Fairly small changes in a processor can make substantial changes in a chip.”
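A hypothetical illustration of that parametric screening, not ON Semiconductor’s actual flow: extra inserted circuitry tends to draw additional quiescent current (IDDQ) or shift timing, so per-die measurements that sit far outside the lot’s spread get flagged for inspection. The readings and threshold below are invented for the example, and a median/MAD statistic is used so an outlier can’t mask itself by inflating the standard deviation.

```python
import statistics

def flag_outliers(iddq_ua: list, limit: float = 5.0) -> list:
    """Return indices of dies whose quiescent current deviates strongly from
    the lot median (median/MAD is robust to the outliers themselves)."""
    med = statistics.median(iddq_ua)
    mad = statistics.median(abs(x - med) for x in iddq_ua)
    return [i for i, x in enumerate(iddq_ua) if abs(x - med) > limit * mad]

# Example lot (microamps): one die draws noticeably more quiescent current.
lot = [102.1, 99.8, 101.5, 100.2, 98.9, 100.7, 99.5, 101.0, 100.4, 137.6]
print(flag_outliers(lot))   # -> [9]
```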

McGregor noted that if a company does everything in-house, there’s no threat of IP exposure until the design is handed off to manufacturing—treasonous employees notwithstanding. The potential exposure comes when a third-party design house like eSilicon is used, because the IP is handed off to an outside party.

But he noted that those firms also have strict security and tracking measures, because something like malicious code sneaked into a customer’s chip would be the end of the company.

Security Measures Matter
But even though malicious code would likely show up in testing, that doesn’t mean chip designers can be lackadaisical about chip security—and sometimes they are. Experts say chip design demands the same security discipline as software development.

“Security must be a priority and never an afterthought at every level of the system design and build,” said Peter Greenhalgh, vice president of technology and fellow at Arm. “Prevention also requires users to take more responsibility for their own security. Users must remain vigilant and keep their devices secure by carrying out best practice security actions, such as immediately installing any software updates as they become available from their respective device maker.”

The software development world has a variety of version-control systems, such as Git, and development environments built around them, such as Visual Studio, that provide full tracking and logging, check-in/check-out records, and other measures to keep an eye on code as it moves through the development process.

ClioSoft tracks code and manages permissions. Adhikary said a lot of hardware engineers don’t think they need these capabilities until the wrong code gets into a design. “The basic notion is essentially the same as source code management. The differences lie in the fact that we can handle huge file sizes, and we can handle binaries and look at schemas and layout views,” he said. “What we are hearing a lot more is the need for traceability, because people are not sure where code is coming from.”
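A minimal sketch of that kind of traceability, in the spirit of what Adhikary describes rather than ClioSoft’s actual implementation: record a cryptographic hash of every design file at check-in, so any later change, accidental or malicious, is detectable and attributable. The log location and record fields are hypothetical.

```python
import hashlib
import json
import time
from pathlib import Path

AUDIT_LOG = Path("design_audit.jsonl")   # hypothetical append-only audit log

def check_in(path: str, user: str) -> str:
    """Hash a design file and append an audit record; return the digest."""
    digest = hashlib.sha256(Path(path).read_bytes()).hexdigest()
    record = {"file": path, "sha256": digest, "user": user, "ts": time.time()}
    with AUDIT_LOG.open("a") as f:
        f.write(json.dumps(record) + "\n")
    return digest

def verify(path: str, expected_digest: str) -> bool:
    """Re-hash the file and confirm it still matches the checked-in version."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest() == expected_digest
```

Real design-management systems layer permissions, check-out locking, and support for huge binaries and layout views on top of this, but the core detection mechanism is the same: a tamper-evident record of who changed what, and when.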

ON Semiconductor’s solution is IP audits with its foundry partners. “We found that to be very informative and helpful to know what is on the end of IP and the various foundries,” said Wilson. “We have a dedicated team that goes over different scenarios, what security protocols do they have so a visitor from another company doesn’t have access to your IP, and so on.”

Well-run companies have rigid methodologies and do a lot of verification, Adhikary noted. There’s an audit trail for all files of what was changed. But he has found that smaller chip companies are a little more lackadaisical about monitoring flows and change-management methodologies, which can come back to bite them. He said a South Korean chip company had to do a respin because an engineer copied the wrong files into the chip source database, at a cost of millions of dollars. The company introduced more rigid database controls after that.

Wilson noted that manufacturing is the lifeblood of a foundry, so they take security very seriously. “I did an IP audit on one of our foundries and was quite impressed with systems and processes they had in place. They carefully segment customers to make sure no designs cross over or there is leakage of IP design from one customer to another. They do encryptions so the file can only be opened inside their plant, so if you took it out of the plant it would not engage,” he said.
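One way to get the behavior Wilson describes, sketched here purely as an illustration rather than the foundry’s actual scheme, is to derive the decryption key from a secret provisioned only on machines inside the plant, so a copied file is useless elsewhere. This example uses the third-party Python cryptography package; the secret and plaintext are placeholders.

```python
import base64
import hashlib
from cryptography.fernet import Fernet

def plant_key(plant_secret: bytes) -> bytes:
    """Derive a Fernet key from a secret that exists only on in-plant hosts."""
    return base64.urlsafe_b64encode(hashlib.sha256(plant_secret).digest())

secret = b"provisioned-on-plant-hosts-only"   # hypothetical plant secret
f = Fernet(plant_key(secret))
token = f.encrypt(b"customer design data")    # the form that gets stored
print(f.decrypt(token))                       # succeeds only where the secret exists
```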

Even though most foundries are outside the U.S., Wilson encourages going over the security procedures of any potential partner. “There are different levels of companies in the foundry space, at different levels of maturity in security. If we have a design we see is important, we only allow top-tier foundries with very high security ratings to bid on that program,” he said.

Unintended Vulnerabilities
Everyone interviewed by Semiconductor Engineering agreed that an unintentional vulnerability like Meltdown or Spectre, where a feature added for a legitimate purpose can also be used for illicit activities, is far more likely than malicious transistors being inserted into a chip.

Consider the nature of Meltdown and Spectre. They aren’t malicious code, but legitimate functions within CPU designs that, under the right conditions, can be misused. The real problem was that Intel, AMD, Arm, and IBM (with its Power processors) did not realize that the way memory was mapped could potentially be exploited. At the time, it seemed far-fetched that anyone would target speculative execution or branch prediction, techniques processors use to speed up performance by executing instructions before they are known to be needed, particularly with all of the other security measures already in place on those chips.
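The mechanics can be modeled in a few lines. What follows is a toy Python model of the cache side channel behind Spectre-style attacks, not a working exploit: a mispredicted branch speculatively performs a secret-dependent load, the architectural result is thrown away, but the touched cache line stays warm, and the attacker recovers the secret by observing which line is now fast to access. All names, and the “cache” itself, are stand-ins.

```python
SECRET = 7                  # value the attacker should never read directly
cached_lines = set()        # toy model of which cache lines are warm

def victim(index_in_bounds: bool):
    """Mispredicted branch: a secret-dependent load runs speculatively.
    Its result is discarded, but the cache line it touched stays warm."""
    if not index_in_bounds:      # attacker trained the predictor to take this path
        cached_lines.add(SECRET)

def attacker(num_lines: int = 16):
    """'Time' an access to each line; the warm one reveals the secret."""
    for line in range(num_lines):
        if line in cached_lines:    # a real attack sees this as a faster load
            return line
    return None

victim(index_in_bounds=False)
print(attacker())               # -> 7, leaked without ever reading SECRET
```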

“One of the hottest jobs in the security field is working as a hacker would to find exploits like Meltdown, to point out the potential for exploits when companies don’t even realize their products can be used in a negative way,” said McGregor. “That’s how Meltdown was found. Google thought like a hacker and went into chips to find a negative side of the design.”

While multiple research teams seemed to hit on it at roughly the same time, one of the first was Google’s Project Zero, a white-hat hacking group within the search giant that does nothing but look for vulnerabilities.

“Side-channel attacks will continue to be avoided through microarchitecture design that prevents information of previous execution being derived, rigorous use of software tools provided by the architecture such as barriers, memory permissions, and best practice security measures in the OS and applications,” said Greenhalgh.

But Adhikary said it’s unlikely that chip designers will engage in Project Zero-like thinking, because the industry has never done that before. “Most companies do not have the bandwidth to do that except for big companies,” he said. “Because of the way the industry is structured, small IP companies are just trying to survive and grow. They don’t think beyond the application standpoint and just focus on how things are used.”

Related Stories
Creating A Roadmap For Hardware Security
Government and private organizations developing blueprints for semiconductor industry as threat level rises.
Using AI Data For Security
Pushing data processing to the edge has opened up new security risks, and lots of new opportunities.
Next Wave Of Security For IIoT
New technology, approaches will provide some protection, but gaps still remain.
Building Security Into RISC-V Systems
Emphasis shifting to firmware, system-level architectures, and collaboration between industry, academia and government.
Blockchain May Be Overkill For Most IIoT Security
Without an efficient blockchain template for IoT, other options are better.



2 comments

Clarisse GINET - CEO @Texplained says:

When it comes to unintended vulnerabilities or backdoors in the semiconductor world, patching products is a financial disaster for chip makers, and also a critical issue for our data privacy at the personal, industrial and governmental levels.
That is why hardware security testing has to be taken seriously.
On one hand, to prevent unintended vulnerabilities, chip makers need to partner with independent experts in IC security evaluation who have a hacker mindset and will challenge their designs against real-world threats.
On the other hand, this expertise in IC exploration is also the best fit for IC supply chain verification when chips are manufactured by a third party. Chips in critical applications, as well as in consumer products and industrial systems, cannot simply be assumed to be free of backdoors.

John Hallman says:

First, security by obscurity is not security at all. Obfuscation may delay an adversary, but if you’ve got enough gnomes in the basement, it’s just a matter of time.

Second, the claim that testing will discover Trojans: maybe, maybe not. Are you actually looking for Trojans specifically, or are you testing to your requirements? I’ll admit some mission-assurance verification flows are pretty thorough, but they are still a function of time; at some point the manager says verification and test are done. I believe, though, that there is an opportunity for more specific testing that focuses on Trojans and vulnerabilities, call it a “layered” verification approach. While I don’t think you will ever find “all” Trojans, you will find more if you are looking for them.
