Dealing with cyberthreats is becoming an integral part of chip and system design, and a far more expensive and complex one.
Security is shifting both left and right in the design flow as chipmakers wrestle with how to build devices that are both secure by design and resilient enough to remain secure throughout their lifetimes.
As increasingly complex devices are connected to the internet and to each other, IP vendors, chipmakers, and systems companies are racing to address existing and potential threats across a wider attack surface. In many cases, security has escalated from an endless series of software patches to an integral part of the hardware/software design process, with an increasingly lucrative competitive advantage for companies that get this right and a growing threat of regulatory action for those that get it wrong.
“Every second 127 devices are hooked onto the internet for the first time,” said Thomas Rosteck, president of Infineon’s Connected Secure Systems Division, in a recent presentation. “This will lead to a stunning 43 billion devices connected to the internet and to each other in 2027.”
It also will create a huge security challenge. “With this comes increased concern from businesses and consumers about what happens to their data as connectivity increases and digital services scale out,” noted David Maidment, senior director of market development at Arm. “Over the last five years, regulation from governments worldwide has matured, and the onus is on manufacturers to meet a growing list of security criteria to ensure these services are trusted, secure, and managed properly.”
Still, a reasoned and measured approach to security includes many considerations. “First, the threat profile of the specific application must be understood, as well as the specific assets — the data, information, or systems that need to be protected,” said Dana Neustadter, director of security IP solutions at Synopsys. “Are there specific laws, regulations, and/or types of requirements that will influence that solution? In other words, you have to do some homework first. You have to be able to answer some questions, such as whether the product is only concerned about network-based threats, or whether there is a potential for attacks that need physical access. Is there direct network access to that particular product, or is it going to be protected by other parts of the system that can act like a firewall, for example, to provide some level of protection? What are the regulatory or standards compliance or possible certification requirements for security? What is the value of the assets?”
In addition, devices need to be secure under all conditions and operating modes. “You need to protect it when the system is offline because, for example, bad actors can replace external memory or they can steal IP code,” Neustadter said. “They can re-flash the device. You also need to protect it during power up, and you need to protect it at runtime while the device operates. You need to make sure that it’s still operating as intended. Then, when you communicate externally, you also need to protect it. There are many variables that typically influence a security solution, including the particular application. Ultimately, there needs to be a balance in the overall security solution. ‘Balanced for the security’ includes security functions, protocols, certifications, and so on, as well as the cost, power, performance, and area tradeoffs because, for example, you cannot afford to put in the highest-grade security for a battery-powered device. It doesn’t make sense because it’s a lower-cost device. You really have to look for the balance, and all of this will factor into the appropriate security architecture for a chip.”
Others agree. “The first step we take when we introduce security to a device is to evaluate the security assets of the device with regard to its role in the overall system,” said Nir Tasher, technology executive at Winbond. “As we map these assets, we are also rating the attack potential of the assets. Not all assets are immediately identified. Features like debug and test ports should also be considered as assets, as they may have a role in system overall security. Once mapping and rating are complete, we evaluate potential ways to compromise each of the assets, and the complexity involved. The next step is to find ways to protect from these attacks or, at the least, detect them. The last step is obviously testing the final product to ensure whatever protection we have included is functioning properly.”
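One hedged way to picture that mapping-and-rating step is as a simple table of assets, each scored for impact and exposure and paired with a planned mitigation. The asset names, scoring scale, and mitigations in the sketch below are illustrative assumptions, not Winbond's methodology.

```c
/* Illustrative asset map: each asset is rated for impact and exposure,
 * and the product of the two gives a rough risk score that drives the
 * choice of countermeasure. Names and scales are hypothetical. */
#include <stdio.h>

typedef struct {
    const char *name;       /* what is being protected          */
    int impact;             /* 1 (local) .. 5 (device-wide)     */
    int exposure;           /* 1 (hard to reach) .. 5 (trivial) */
    const char *mitigation; /* planned protection or detection  */
} asset_t;

int main(void) {
    const asset_t assets[] = {
        { "Boot ROM / first-stage loader", 5, 2, "immutable ROM + signature check" },
        { "Device unique key (OTP)",       5, 3, "key never leaves root of trust" },
        { "External flash contents",       4, 4, "authenticated, encrypted images" },
        { "Debug/test port",               4, 3, "locked after provisioning, authenticated unlock" },
        { "Runtime firmware",              3, 3, "runtime integrity monitoring" },
    };

    for (size_t i = 0; i < sizeof assets / sizeof assets[0]; i++) {
        int risk = assets[i].impact * assets[i].exposure;
        printf("%-34s risk=%2d  -> %s\n",
               assets[i].name, risk, assets[i].mitigation);
    }
    return 0;
}
```

Note that the debug and test ports appear in the table alongside keys and firmware, reflecting Tasher's point that they are assets in their own right.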
Secure by design
One of the big changes for hardware and system security is a recognition that it’s no longer someone else’s problem. What used to be an afterthought is now a competitive edge that needs to be baked into a design at the architectural level.
“This fundamental principle emphasizes integrating security into the chip design development process, ensuring security goals, requirements and specifications are identified from the beginning,” said Adiel Bahrouch, director of business development for security IP at Rambus. “This approach demands having a proper threat model, identifying the tangible and non-tangible assets that have value and need protection, proactively quantifying the associated risk based on a suitable risk management framework, and correctly implementing security measures and controls to mitigate the risks to an acceptable level.”
In addition to proper threat assessment, Bahrouch said it is vital to consider additional security principles for a holistic defense-in-depth strategy. That includes a chain of trust, where each layer provides security foundations that the next layer can leverage, as well as domain separation with different security levels for different users, data types, and operations, allowing optimized performance and security tradeoffs for each use case. In modern systems this includes threat modeling throughout the product lifecycle, and a principle of least-privilege that segments access rights and minimizes shared resources.
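To make the chain-of-trust idea concrete, the sketch below shows one generic pattern in which each boot stage authenticates the next image before handing over control. It is a minimal illustration under stated assumptions, with a placeholder verify_signature() routine standing in for a real cryptographic check, and is not any particular vendor's boot flow.

```c
/* Generic chain-of-trust sketch: each stage authenticates the next
 * image against a key it already trusts before jumping to it. */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

typedef struct {
    const uint8_t *image;      /* next-stage code in flash         */
    size_t         image_len;
    const uint8_t *signature;  /* signature over the image         */
    const uint8_t *pubkey;     /* key trusted by the current stage */
} boot_stage_t;

/* Placeholder stub: a real device would perform an ECDSA or RSA
 * verification rooted in a key held in ROM or OTP. Failing by
 * default keeps the sketch safe. */
static bool verify_signature(const uint8_t *img, size_t len,
                             const uint8_t *sig, const uint8_t *key)
{
    (void)img; (void)len; (void)sig; (void)key;
    return false;
}

static void enter_recovery(void)
{
    for (;;) { /* park in a safe state until a trusted reset */ }
}

void boot_next(const boot_stage_t *next, void (*entry)(void))
{
    if (!verify_signature(next->image, next->image_len,
                          next->signature, next->pubkey)) {
        /* A broken link in the chain: never run unauthenticated code. */
        enter_recovery();
    }
    entry();   /* hand off only after authentication succeeds */
}
```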
“It’s critical to define the security architecture of the SoC up front,” said George Wall, group director of product marketing for Tensilica Xtensa processor IP at Cadence. “The time to define security architecture is when the designer is working out the necessary functionality, feeds and speeds, etc., of the SoC. It is always much easier to do this early than to try to ‘add security’ later on, be it a week prior to tape-out or two years after production ship.”
A holistic defense extends well beyond just the hardware. “If you’re trying to secure something, even at high levels of abstraction for software, you might do that perfectly in Python or whatever your programming language of choice is,” said Dan Walters, principal embedded security engineer and lead for microelectronics solutions at MITRE. “But if it’s undermined by a compromise at the hardware level, then it doesn’t matter. You can completely compromise your entire system even if you have perfect software security.”
In most cases, attackers take the path of least resistance. “With security, it’s all or nothing,” said Walters. “The attacker only has to find one flaw, and they don’t really care where it is. It’s not as if they are saying, ‘I want to defeat the system by just undermining the hardware or I want to do it by finding a software flaw.’ They’re going to look for the easiest thing.”
Best practices
In response to the widening threat landscape, chipmakers are punching up their list of best practices. In the past, security was almost entirely confined to the perimeter of a CPU. But as designs become more complex, more connected, with longer lifespans, security needs to be thought out much more extensively. This includes a number of key elements, according to Lee Harrison, director of product marketing for the Tessent division of Siemens EDA:
Fig. 1: Key elements of secure hardware. Source: Siemens EDA
Which of those best practices is deployed can vary greatly from one application to the next. Security in an automotive application, for example, is far more of a concern than in a smart wearable device.
“Autonomy needs connectivity, which drives higher security, which enables automation, and semiconductors are really at the foundation here,” said Tony Alvarez, executive vice president of Infineon Memory Solutions, in a recent presentation. “Sensing, interpreting the data, making decisions on it, that’s above the surface. That’s what you see. The system complexity is going up, but you don’t see what’s below the surface, which is all the pieces required to make it happen — the entire system solution.”
That system includes communication with the cloud, other automobiles, and infrastructure, and it all has to be instantly accessible and secure, Alvarez said.
The security needs may be very different for other applications. But the process of determining what is needed is similar. “First, analyze what the assets are,” said Winbond’s Tasher. “Sometimes there are none. Sometimes, every part of the device is, or contains, an asset. In the case of the latter, it might be advisable to start from scratch, but these are rare cases. Once assets have been mapped, the architect needs to analyze attack vectors for these assets. This is a tedious stage and external consulting is highly recommended. Another set of eyes is always a good thing. The last stage from an architectural point of view is coming up with protection mechanisms for these attacks. And yes, deep system knowledge is essential in all these stages.”
Arm is a strong proponent of including a root of trust (RoT) in all connected devices, and deploying security-by-design best practices as a baseline for any viable product. “The RoT can provide essential trusted functions such as trusted boot, cryptography, attestation and secure storage. One of the most basic uses of a RoT is to keep private crypto keys with encrypted data confidential, protected by hardware mechanisms and away from the system software that is easier to hack. The RoT’s secure storage and crypto functions should be able to handle the keys and trusted processing necessary for authentication of the device, verifying claims and encrypting or decrypting data,” Maidment said.
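In practice, those RoT functions are usually exposed to the rest of the system as a narrow interface in which keys are referenced by opaque handles and never exported. The prototypes below are a hypothetical sketch of such an interface, not Arm's or any particular vendor's API.

```c
/* Hypothetical root-of-trust interface: callers refer to keys by
 * opaque handles, and boot verification, attestation, signing, and
 * sealed storage all happen inside the RoT boundary. */
#include <stddef.h>
#include <stdint.h>

typedef uint32_t rot_key_handle_t;   /* opaque reference, never key bits */
typedef enum { ROT_OK = 0, ROT_ERR_AUTH, ROT_ERR_INTERNAL } rot_status_t;

/* Measure and authenticate the next boot image before it runs. */
rot_status_t rot_secure_boot_verify(const uint8_t *image, size_t len);

/* Produce a signed attestation report over device identity and
 * current firmware measurements for a remote verifier. */
rot_status_t rot_attest(const uint8_t *challenge, size_t challenge_len,
                        uint8_t *report, size_t *report_len);

/* Sign with a device key identified by handle; the private key
 * material never crosses this interface. */
rot_status_t rot_sign(rot_key_handle_t key, const uint8_t *msg, size_t len,
                      uint8_t *sig, size_t *sig_len);

/* Seal and unseal small secrets so they are only usable on this
 * device in a known-good firmware state. */
rot_status_t rot_seal(const uint8_t *data, size_t len,
                      uint8_t *blob, size_t *blob_len);
rot_status_t rot_unseal(const uint8_t *blob, size_t len,
                        uint8_t *data, size_t *data_len);
```

The point of the handle-based design is that application software can request signatures or attestation without ever being able to read, and therefore leak, the private keys behind them.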
Given that very few chip designs start from a blank sheet of paper, a chip should be architected to take this into account. “Every device and every use-case will be unique, and it’s imperative that we consider the needs of all systems through threat modeling,” Maidment explained. “Some use cases will need evidence of best practice. Others need to protect from software vulnerabilities, and still others will need protection from physical attacks.”
Arm’s work co-founding PSA Certified has demonstrated that different silicon vendors are finding their own unique selling points, and deciding which level of protection they want to provide. “It’s difficult to have a one-size-fits-all, but an agreed-upon common set of principles is an important tool for reducing fragmentation and achieving appropriate security robustness, and that is proven in the 179 PSA Certified certificates issued to products today,” Maidment noted.
This is very different from years ago, when security could be added into a chip later in the design cycle. “It’s not like in the old days where you could just add security. You’d rev a chip, and think that without making any significant changes you could address security as required for your application,” Synopsys’ Neustadter said. “It’s important to design a security solution based on the premises mentioned above. Then you can have a more streamlined process for updating security in future design revisions, such as creating a secure environment with a root of trust to protect sensitive data operations and communications. There are ways that you can create a security solution that is scalable, extendable, and even update it post-silicon, for example, with software upgrades. So there are ways to build this into a design and then upgrade security from one revision to another in a more streamlined fashion.”
Security fixes
In almost all cases, it’s better to prevent attacks than to provide patches to fix them. But how this is done can vary greatly.
For example, Geoff Tate, CEO of Flex Logix, said multiple companies are using eFPGAs for security, with others evaluating them. “As we understand it, there are several reasons, and different companies have different security concerns,” Tate said. “Some customers want to use eFPGAs to obfuscate their critical algorithms from the manufacturing process. This is especially true for defense customers. Security algorithms, i.e., encryption/decryption, are implemented in multiple places in one SoC. The performance requirements vary. For very high-performance security, it needs to be in hardware. And because security algorithms need to be update-able to handle changing challenges, the hardware needs to be reconfigurable. Processors and software can be hacked, but it’s much harder to hack hardware, so having some of the critical portions of the security function in programmable hardware is desirable.”
Without that built-in programmability, fixing security issues after an attack is much more difficult. Typically that involves some type of patch, which is usually a costly and sub-optimal solution.
“In line with the domain separation principle, one could consider adding a stand-alone ‘secure island’ to the existing chip in a modular approach, allowing the chip to leverage all security capabilities provided by the secure island with minimum changes to the existing chip,” said Rambus’ Bahrouch. “While this is not the most efficient solution, the security IP can be customized to meet the security requirements and the overall security objective. Despite the challenges, an embedded security module doesn’t necessarily need to be leveraged for each function immediately. Architects can start with fundamentals protecting the chips and users’ integrity, and gradually introduce hardware-based security with side-channel protection to additional, less crucial functions.”
Siemens’ Harrison noted that adding security into an existing design is a common problem today. “If designers are not careful, adding security as an afterthought can easily lead to a scenario where the main entry point is not secured. However, EDA can be extremely helpful here, as embedded analytics technology can easily be integrated into the lower levels of a design that already exists. As opposed to just targeting peripheral risks, IP monitors can be added to monitor many of the internal interfaces or nodes within the design.”
Bare minimum hardware security
With so many options, and so much advice, what are the absolute must-haves for a secure chip from its foundation?
Cadence’s Wall said that, at a minimum, there needs to be a security island that establishes a root of trust for the SoC. “There also needs to be authentication functions available to properly authenticate boot code and OTA firmware updates. Ideally, the SoC has selected resources that are available only to firmware that is known to be trusted, and there is a hardware partition that prevents unknown or untrusted firmware from accessing those resources maliciously. But ultimately, the ‘must-haves’ are driven by the application and use case. For instance, an audio playback device is going to have different requirements than a device that processes payments.”
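The OTA side of that flow often looks something like the hedged sketch below, where a downloaded image is authenticated and checked against an anti-rollback counter before it is ever marked bootable. The helper functions and field names are assumptions declared for illustration only, not a real update framework.

```c
/* Illustrative OTA update check: authenticate the image and enforce
 * anti-rollback before marking it bootable. The rot_* helpers are
 * placeholders for RoT-backed services and are left unimplemented. */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

typedef struct {
    uint32_t       version;    /* monotonically increasing          */
    const uint8_t *payload;
    size_t         payload_len;
    const uint8_t *signature;  /* signature over version + payload  */
} fw_image_t;

bool     rot_verify_image(const fw_image_t *img);   /* signature check  */
uint32_t rot_read_rollback_counter(void);           /* e.g. OTP counter */
void     rot_mark_image_bootable(const fw_image_t *img);

bool apply_ota_update(const fw_image_t *img)
{
    /* 1. Reject anything not signed by a trusted update key. */
    if (!rot_verify_image(img))
        return false;

    /* 2. Reject downgrades to older, possibly vulnerable firmware. */
    if (img->version <= rot_read_rollback_counter())
        return false;

    /* 3. Only now is the image allowed into the boot path. */
    rot_mark_image_bootable(img);
    return true;
}
```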
Additionally, based on the threat assessment model and the assets that need to be protected, a typical secure chip will aim to achieve security objectives that can be grouped into the following classes — integrity, authenticity, confidentiality, and availability.
“These objectives are typically covered by cryptography, combined with additional capabilities such as secure boot, secure storage, secure debug, secure update, secure key management,” Rambus’ Bahrouch said. “From an SoC architectural perspective, this generally starts with protection of the OTP and/or hardware unique keys and identities, followed by protection of security relevant features and functions during the product lifecycles, which includes but is not limited to firmware updates and secure debug. A hardware root of trust is a good basis for these fundamental functions, and a must-have in modern SoCs, independent of their target market.”
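Part of protecting those hardware unique keys in practice is never using them directly. Instead, purpose-specific keys are derived from them inside the root of trust. The sketch below illustrates that pattern with a generic, hypothetical rot_kdf() call; it is not tied to any particular product or standard.

```c
/* Illustrative key-derivation pattern: purpose-bound keys are derived
 * from the hardware unique key (HUK) inside the RoT, so the HUK itself
 * is never handed to firmware. rot_kdf() is a placeholder for a real
 * derivation function such as an HKDF- or CMAC-based KDF. */
#include <stddef.h>
#include <stdint.h>

#define KEY_LEN 32

/* Placeholder: derives out_len bytes from the HUK and a label,
 * executed entirely inside the root of trust. */
int rot_kdf(const char *label, const uint8_t *context, size_t context_len,
            uint8_t *out, size_t out_len);

int derive_storage_key(uint8_t key[KEY_LEN])
{
    /* Binding the label to the purpose means a leak of one derived
     * key does not expose keys used elsewhere. */
    static const char label[] = "secure-storage-v1";
    return rot_kdf(label, NULL, 0, key, KEY_LEN);
}

int derive_debug_unlock_key(uint8_t key[KEY_LEN], const uint8_t device_id[16])
{
    /* Secure debug: the unlock secret is unique per device, so one
     * leaked token cannot open every part in the field. */
    static const char label[] = "debug-unlock-v1";
    return rot_kdf(label, device_id, 16, key, KEY_LEN);
}
```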
Regardless of the application, there are two critical elements that should be enabled on all devices used in a secure application, Siemens’ Harrison said. “First, secure boot [as described above] is required, as any other security mechanisms implemented are a potential attack surface until the device has been successfully and securely booted. For example, before the device is booted, it could be possible to override and reconfigure the signature register in a root of trust IP, essentially spoofing the device’s identity. Second, a secure identity is required. For instance, a root of trust, while not commonly used, can give the device a unique identity and enable many other functions to be secured to this particular device. These are the bare minimum, and do not protect against any malicious communications or the manipulation of any external interfaces.”
Trying to come up with a list of must-haves here is difficult, as these vary with chip functions, technology, assets in the device, and the end application. “However, as a rule of thumb, one would find three major functions — protection, detection, and recovery,” said Winbond’s Tasher. “It’s protection in the sense that the device needs to protect against data breaches, unlawful modifications, external attacks, and manipulation attempts. Detection, since security mechanisms should be able to detect attacks or unlawful modifications to internal functions and states. The detection may trigger a simple response in some cases, or go to the extent of completely eliminating the device functionalities and erasing all internal secrets. And recovery in the sense that with some security functions, it is essential that the system be in a known state at all times. Such a state may even be complete shutdown, as long as it’s a safe and steady state.”
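Those three functions often meet in a tamper-response path along the lines of the sketch below, in which detection escalates from logging to zeroizing secrets and parking the device in a safe state. The escalation levels and function names are illustrative assumptions, not any vendor's policy.

```c
/* Illustrative tamper-response policy: detection escalates from a
 * logged warning to erasing internal secrets and parking the device
 * in a safe, known state. Sensor and zeroize calls are placeholders. */

typedef enum {
    TAMPER_NONE = 0,
    TAMPER_GLITCH_SUSPECTED,   /* e.g. voltage or clock anomaly        */
    TAMPER_CONFIRMED           /* e.g. mesh broken, repeated faults    */
} tamper_level_t;

/* Placeholders for hardware-backed services. */
tamper_level_t tamper_sensors_poll(void);
void           log_security_event(tamper_level_t level);
void           zeroize_keys_and_secrets(void);

static void enter_safe_state(void)
{
    /* Recovery requirement: a known, steady state. Here that is a
     * halt loop that keeps outputs inert until a trusted reset. */
    for (;;) { }
}

void tamper_monitor_step(void)
{
    tamper_level_t level = tamper_sensors_poll();

    if (level == TAMPER_NONE)
        return;

    log_security_event(level);          /* detection */

    if (level == TAMPER_CONFIRMED) {
        zeroize_keys_and_secrets();     /* eliminate internal secrets */
        enter_safe_state();             /* recovery to a known state  */
    }
}
```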
Conclusion
Finally, it’s critical that engineers become much better versed in how and where to add security into their designs. This starts with engineering schools, which are just beginning to incorporate security into their curricula.
MITRE, for example, runs a “Capture The Flag” contest each year, open to high school and college students. “In 2022, we had as part of the competition the concept that the underlying hardware could be compromised, and we asked students to design their system to try to be resilient to a potentially malicious hardware component built right into their system,” said Walters. “We got a really interesting response. A lot of students asked, ‘What are you talking about? How is this even possible?’ Our response was, ‘Yeah, it’s hard, and it does feel almost like an impossible ask to deal with that. But that’s what is going on in the real world. So, you can bury your head in the sand, or you can change your mindset for how to design a system that is resilient, because when you get out into the workforce that’s what your employer is going to be asking you to think about.'”
Security always begins with understanding the unique threat profile for each application, as well as a clear view of what is important to protect, what level of protection is needed, and increasingly what to do if a chip or system is compromised. All of that then needs to be weighed against what is actually at risk.
Synopsys’ Neustadter observed that in IoT, for instance, there is a huge spectrum of costs, complexity, and sensitivity of data. “IoT endpoints, at a minimum, need to be secure and trustworthy,” she said. “At a minimum, developers should test the integrity and authenticity of their firmware and software, but also take a reasonable response if there is a failure of those particular tests.”
In the automotive segment, successful attacks can have very serious consequences. “You have the complexity of the electronics, you have the complexity of the connectivity,” Neustadter said. “It’s old news that a car was able to be controlled remotely with a driver in it while the car was on the highway. Hence, security in cars is critical. There, you need to implement higher-grade security, and usually there you also need higher performance. It’s a different kind of approach to security than IoT, and not that it’s more important, it’s just different.”
And this is very different, still, from security in the cloud. “Unauthorized access to data is one of the biggest threats there,” Neustadter said. “There could be many other threats including data leaks. A lot of our financial data is in the cloud, and there you have another level of security approach and resilience against physical attacks, fault injection, etc. I would like people to pay more attention to, what are the threats? What do you want to protect? Then, with all the other things in the picture, you define your security architecture. That’s something that people don’t look at from the beginning. It can help them get off to a better start, and avoid having to go back and redesign a security solution.”
— Susan Rambo and Ed Sperling contributed to this report.
Related Reading
Bug, Flaw, Or Cyberattack?
Tracking the cause of aberrant behavior is becoming a much bigger challenge.
What’s Required To Secure Chips
There is no single solution, and the most comprehensive security may be too expensive.