Longer Chip Lifecycles Increase Security Threat

Updates can change everything, whether it’s a system or something connected to that system.


The longer chips and electronic systems remain in use, the more they will need to be refreshed with software and firmware updates. That creates a whole new level of security risks, ranging from over-the-air intercepts to compromised supply chains.

These problems have been escalating as more devices are connected to the Internet and to each other, but it’s particularly worrisome when it involves cars, robots, avionics, and industrial and commercial equipment. For those applications, chips, systems, and systems of systems are expected to function for 15 years or more. But in many cases these systems also are extremely complex. Some are developed using leading-edge node technology and/or unique architectures, as system architects try to squeeze every possible computation per watt out of these devices.

“One of the big problems with today’s systems is that you can analyze what’s going on in that system, but there is nothing to compare it against,” said Helena Handschuh, a security technologies fellow at Rambus. “With hardware Trojans, you need to know how it compares to the original and to other versions of that hardware, but unless you have the original version that’s very hard. Most of this comes through the software supply chain, so it’s very hard to know what’s original. The only way you can figure that out sometimes is if the behavior of the software is weird.”

Longer lifetimes add the dimension of time to the attack surface. Breaches that were difficult to carry out when systems were created can be significantly more successful as vulnerabilities are identified and as attackers and their tools become more sophisticated. The longer these devices are in use, the longer attackers have to upgrade their skills, and the greater access they have to play around with these devices.

“You can design for known threats at that time, and you can be creative in imagining what hackers might come up with,” said Frank Schirrmeister, senior group director for solution marketing at Cadence. “But then it’s a back-and-forth, because the hackers will always try something new. So for the next project, you need to take these items into consideration. Some of this may be upgradeable. In a hardware-software context, it needs to be designed into the hardware.”

To make matters worse, the supply chain for all of these systems is global. It includes startups, not all of which will still be around at the end of these elongated lifetimes. Some startups will be acquired by other companies, while others will cease to exist entirely. In addition, geopolitical shifts could turn companies from allies into potential foes, which has happened with both Russia and China over the past couple of decades.

Risk in updates
The 2020 Sunburst and Supernova malware, downloaded as part of two SolarWinds updates, provide a glimpse into just how difficult security problems are to solve. What made this particular breach so concerning was that the updates were certified by the software company, which held contracts with some of the most security-conscious departments in the U.S. government. It led to a massive breach of those systems, providing both control through a shell and back-door access, and it is believed to have been the work of Russian hackers.

“That attack was really scary because the way you address over-the-air updates is to make sure it’s authentic, and you have the infrastructure to validate that it’s an authentic update,” said Jason Oberg, CTO of Tortuga Logic. “It was encrypted, so that if someone were to intercept that update they can’t steal it, and it’s really important to use that kind of foundational stuff. With the SolarWinds attack, they put the malicious code into the update and it was authentic. The company didn’t know there were issues. That’s tough to prevent, because it’s exploiting the issue at the source. The whole infrastructure authenticated it, and it seemed to work.”

That doesn’t bode well for the supply chain. “We’re going to see more of these types of problems, and we’re going to see people messing up the way they authenticate and decrypt the updates at the end point. We are already seeing that with misconfigurations of hardware roots of trust,” Oberg said. “The whole point is to have a secure part of your chip where your keys are stored so it can authenticate the image. You validate, and if everything looks good, you decrypt and then actually load it. But there’s a lot of potential issues that can happen at the hardware level. If you can get access to the private key that signed it, maybe you can spoof updates. Hardware will be at the center of that validation process. But it’s still really hard to prevent an employee from inserting malicious code.”
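The verify-then-decrypt flow Oberg describes can be sketched in simplified form. This is a toy illustration, not a real secure-boot implementation: the keys, the XOR keystream "cipher," and the function names are all hypothetical, and a real device would hold the keys in a hardware root of trust and use an asymmetric signature (e.g., ECDSA) rather than a shared-key HMAC.

```python
import hashlib
import hmac

# Hypothetical keys -- on a real device these live in a hardware root of trust.
AUTH_KEY = b"device-auth-key"     # authenticates the update image
DEC_KEY = b"device-decrypt-key"   # decrypts the update image

def keystream(key: bytes, n: int) -> bytes:
    """Toy hash-based keystream, for illustration only -- not a real cipher."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def encrypt_update(plaintext: bytes):
    """What the vendor's signing infrastructure would do: encrypt, then tag."""
    ct = bytes(a ^ b for a, b in zip(plaintext, keystream(DEC_KEY, len(plaintext))))
    tag = hmac.new(AUTH_KEY, ct, hashlib.sha256).digest()
    return ct, tag

def load_update(ciphertext: bytes, tag: bytes) -> bytes:
    """What the endpoint does: validate first, decrypt only if authentic."""
    expected = hmac.new(AUTH_KEY, ciphertext, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, tag):
        raise ValueError("update rejected: authentication failed")
    # Only after the tag checks out is the image decrypted and handed to the loader.
    return bytes(a ^ b for a, b in zip(ciphertext, keystream(DEC_KEY, len(ciphertext))))

ct, tag = encrypt_update(b"firmware image v2")
assert load_update(ct, tag) == b"firmware image v2"
```

Note that this check only proves the image came from the legitimate signer. As with SolarWinds, malicious code inserted before signing passes the validation untouched.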

That risk has persisted since the early days of computing. There are numerous instances of breaches by an employee paid to insert a logger into a keyboard or to upload malicious software into a system, and that risk has fueled a debate about centralizing versus decentralizing computers that has continued since the introduction of the PC. Centralization offered more control, but the downside was that companies often lagged behind the latest versions of software and firmware because of potential incompatibilities across the organization.

Today, there are so many applications and updates that it’s difficult to keep track of them all. Some are centralized, some are on local devices, and nearly everyone has multiple devices that require regular updates. Alongside this, however, there are well-established methods for tracking those updates.

“A common practice is to use a trusted computing concept called attestation,” said Jason Moore, senior director of engineering at Xilinx. “The hardware and software measurement updates are taken and extended to Platform Configuration Registers (PCRs). Using public-key cryptography, these measurements are securely sent to a host to verify what has been loaded on the device.”
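The extend operation Moore refers to is deliberately one-way: each new measurement is hashed together with the running register value, so the final digest depends on every component loaded, in order. A minimal sketch of that chaining (the component names are hypothetical, and a real TPM would perform this in hardware):

```python
import hashlib

def extend_pcr(pcr: bytes, measurement: bytes) -> bytes:
    """TPM-style extend: new PCR = SHA-256(old PCR || measurement)."""
    return hashlib.sha256(pcr + measurement).digest()

# Start from the reset value (all zeros for a SHA-256 PCR bank).
pcr = bytes(32)
for component in [b"bootloader-v2.1", b"kernel-5.10", b"app-fw-1.4"]:
    measurement = hashlib.sha256(component).digest()
    pcr = extend_pcr(pcr, measurement)

# A verifier replaying the same measurements reproduces the same PCR value,
# so any substituted or reordered component changes the final digest.
print(pcr.hex())
```

Because the chain is order-dependent and irreversible, a host that knows the expected component measurements can detect any swap in what was actually loaded.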

Whether that is sufficient remains to be seen. “Is my IT department looking out for me enough to monitor what these updates are doing?” asked John Hallman, product manager for trust and security at OneSpin Solutions. “Looking at our network traffic, is there something abnormal happening now that I’ve gone and installed the new update? And have I allowed the new update to be pushed without any real checking on our systems?”

New strategies and techniques
It’s important in developing these devices to understand the various risk factors and tradeoffs around security. Newness can help or hurt, depending upon the design and the supply chain.

At least for now, AI/ML chips may have an advantage, because they are basically opaque to the outside world. What’s going on inside the package is generally proprietary.

“Our chip is programmed with our software, and it’s done in a totally different way than what people are used to,” said Geoff Tate, CEO of Flex Logix. “Trying to corrupt the code that runs in our chip would be extremely difficult because the low-level programming information is totally undocumented. Only we know that. We just take high-level neural network models. There’s no publicly available architectural description below that.”

The downside of this opaqueness is that if something goes wrong, it’s more difficult to trace back to the source. But if it’s harder to crack up front, and if the most concerted effort yields only a single, uniquely programmed device, then the value of the attack generally can be assumed to be low.

Still, one chip doesn’t make a system. Increasing levels of heterogeneity require a higher level of abstraction, combined with activity monitors or sensors inside a chip, a package, or at the interfaces between the processing elements, memories, and I/Os. The first step is to establish a baseline of electrical and thermal activity. From there, any aberrant activity can be recorded through thermal sensors or by looking at the amount of data being stored or transferred. When a system is asleep, there should be no activity. When it is awake, that activity should fall within an acceptable range.
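A baseline-and-threshold monitor of this kind can be sketched in a few lines. This is a simplified illustration, not any vendor's monitoring IP; the sample values and the three-sigma threshold are arbitrary assumptions:

```python
import statistics

def build_baseline(samples):
    """Characterize normal readings from a sensor (e.g., die temperature or bus traffic)."""
    return statistics.mean(samples), statistics.pstdev(samples)

def is_aberrant(reading, mean, stdev, asleep=False, k=3.0):
    """Flag activity outside the acceptable range for the current power state."""
    if asleep:
        # When the system is asleep, any activity at all is suspicious.
        return reading > 0
    # Awake: flag readings more than k standard deviations from the baseline.
    return abs(reading - mean) > k * stdev

# Hypothetical thermal samples (degrees C) recorded during known-good operation.
mean, stdev = build_baseline([48.0, 50.1, 49.5, 51.2, 50.4])

print(is_aberrant(72.0, mean, stdev))               # far outside normal range -> True
print(is_aberrant(50.0, mean, stdev))               # within range -> False
print(is_aberrant(0.4, mean, stdev, asleep=True))   # activity while asleep -> True
```

In-chip monitors operate on the same principle, though the baseline and thresholds would be characterized per design and per operating mode rather than from a handful of samples.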

“There continues to be research in this area,” said Xilinx’s Moore. “Companies are programming FPGAs with the capabilities to track unusual activity on a device.”

A number of companies are developing such systems. Moortec (now part of Synopsys) and Ansys have developed sensors that can monitor thermal changes in a chip or system. UltraSoC (now part of Siemens), has developed IP that monitors any electrical activity inside a chip. PFP Cybersecurity, meanwhile, has developed technology that remotely monitors emissions from a system and then determines whether malicious code is running on it. And Dover Microsystems has developed IP to monitor software activity on-chip.

“It’s all about secure design,” said Steve Pateras, senior director of marketing for test products at Synopsys. “That includes a secure ecosystem, how to add structures to the design to monitor and ensure security, where to place the monitoring technology, and how to analyze the results of those monitors. We’re looking at secure access to the chip, and secure downloading and uploading of data.”

Synopsys also is looking at the software itself to ensure that communication happens as expected. The challenge is getting companies to buy into security, a problem that has persisted for the past couple of decades. In some cases, that doesn’t even require spending money on technology, but it does require building up expertise to be able to design it into a device in the first place.

“With an SoC, you can use the firewalls in the NoC to quietly tag data and tell you if something is not right,” said Kurt Shuler, vice president of marketing at Arteris IP. “But not everyone has the capability to do this. Some companies are more sophisticated about security than others. They look at the security tools for data at rest, such as keyed encryption. But today there is not much there for security around data movement.”
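The kind of firewall check Shuler describes amounts to a permission table keyed by initiator and address range. A hypothetical sketch follows; the initiators, address windows, and logging behavior are all invented for illustration, and real NoC firewalls enforce this in hardware at wire speed:

```python
# Hypothetical NoC firewall table: which initiators may reach which address ranges.
RULES = {
    "cpu": [(0x0000_0000, 0x7FFF_FFFF)],   # CPU may access all of DRAM
    "dma": [(0x4000_0000, 0x4FFF_FFFF)],   # DMA engine limited to one buffer window
}

def allowed(initiator: str, addr: int) -> bool:
    """Return True if the transaction falls inside a permitted window."""
    return any(lo <= addr <= hi for lo, hi in RULES.get(initiator, []))

def check(initiator: str, addr: int) -> bool:
    """Pass or block a transaction; a blocked one gets tagged and reported."""
    if not allowed(initiator, addr):
        # In hardware this would tag the offending packet and raise an interrupt.
        print(f"firewall: blocked {initiator} access to {addr:#010x}")
        return False
    return True

check("dma", 0x4000_1000)   # inside the DMA window: passes
check("dma", 0x0000_1000)   # DMA probing CPU memory: blocked and logged
```

The value of doing this in the interconnect is that every transaction between initiators and targets already passes through it, so misbehaving data movement can be flagged without touching the endpoints.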

Research also is underway around the globe that uses AI to identify malicious code. The problem is that AI itself changes as it optimizes, making it more difficult to establish a consistent baseline from which to build a security model.

“The scary part is the results of these AI algorithms lack the transparency to determine whether the algorithm is making the correct choices,” said Hallman. “Or maybe those choices are being manipulated by somebody on your particular system. So I’m not really anxious to implement too many AI algorithms at this point. Very few people understand what’s going on with these algorithms. People are happy to use some of these big AI platforms to get things done, but how do you really trust what you’re getting back?”

Despite all of these issues and others, security is being taken more seriously across the chip industry. “In addition to design for test, there is design for security,” said Jean-Marie Brunet, senior marketing director for the Emulation Division at Siemens EDA. “We’re seeing two trends here. First, the verification challenge is increasing, because you need to use different considerations for your device under test than in the past. They can be deeper in terms of the depth of vectors, but it’s most likely global. And second, this requires a tremendous amount of additional capacity for customers. That provides a capacity challenge for a verification provider. But I don’t think there’s a clear winner yet on what methodology works best for security.”

Support falloff
Some of the security issues have nothing to do with the design. Extending the lifetimes of chips and systems means these devices almost certainly will outlive some of the companies that developed key components inside of them. Support may fall off, and the constant stream of security and functional updates that most people grapple with in their cars and with their smartphones and PCs may cease to happen. Or their IP licenses may expire and be renewed by an entirely different company.

“Semiconductor IP providers have had this concern for 10 years, but it’s growing and escalating,” said Simon Rance, head of marketing at ClioSoft. “This started with multiple uses of IP, particularly around legal agreements. Especially for the bigger IP companies, the high-end IP costs a lot. A lot of companies buy a per-use license. The problem is that it can’t be policed by the IP provider. It’s legally bound, but they don’t know if it’s getting used on more than one design.”

That creates a maintenance problem, and ultimately a potential security risk when IP is no longer supported. “We’re seeing a lot of IPs held on file servers,” said Rance. “They’re not locked down. There’s a management issue that’s missing, which is what we’ve been addressing.”

Conclusion
While most system design teams look at partitioning by function or physical effects, increasingly that also needs to include security. Just packing all security measures into a single device has been shown repeatedly to be insufficient. Spread out over a longer lifetime, that approach becomes a liability.

“There is a lot of interest in splitting things up into different layers and isolating different functions,” said Rambus’ Handschuh. “So you can find CWEs (common weakness enumerations) that are publicly known and you can scan against 80% or 90% of the known weaknesses. But most of those only address bugs. You also need to segment, as you get closer to the heart of the system, who has access to the source code. That provides a little more assurance around your updates.”

Still, when it comes to security, nothing lasts forever. Holes get plugged, and new ones crop up. The key is to stay vigilant and to have plans in place to minimize damage when breaches do occur.



