Why and how the semiconductor ecosystem needs to come together on security.
Semiconductor Engineering sat down to discuss chip and system security with Mike Borza, fellow and scientist on the security IP team at Synopsys; Lee Harrison, automotive IC test solutions manager at Siemens Digital Industries Software; Jason Oberg, founder and CTO of Cycuity (formerly Tortuga Logic); Nicole Fern, senior security analyst at Riscure; Norman Chang, fellow and CTO of the electronics, semiconductor and optics business unit at Ansys; Frank Schirrmeister, senior group director for solutions and ecosystem at Cadence; and Jamil Mazzawi, CEO of Optima Design Automation. What follows are excerpts of that conversation.
(Top, left to right) Mike Borza, Synopsys; Lee Harrison, Siemens Digital Industries Software; Jason Oberg, Cycuity; (bottom, left to right) Nicole Fern, Riscure; Norman Chang, Ansys; Frank Schirrmeister, Cadence; and Jamil Mazzawi, Optima Design Automation.
SE: Security is a hot topic. Where does the industry stand on chip and system security today?
Fern: The biggest thing about approaching security is that there’s always going to be a tradeoff. There’s always intense time-to-market pressure. Most teams have a fixed budget they can spend on security. It’s just what it is. They have to work with what they’ve got, and the direction that companies should be taking with respect to security strategies is figuring out how to spend that budget in the wisest way possible. That begins with the process of threat modeling. Ultimately, there’s no silver bullet or single solution or single tool or single methodology that will magically solve security for everyone all the time. You know your design the best, you know your market the best, and you just have to put in the effort of doing threat modeling and figuring out where you are with respect to security, where you want to be, what resources you have to work with, and how to best use those.
Oberg: Security is a lot less about technology than the process. There’s obviously a people component to it, but it’s really about going through the process of threat modeling and using that process to define what your security requirements are for the market that you’re serving. Build a verification plan that can validate those requirements, and then use quantifiable metrics to help sign off security. So whether it’s coverage of requirements, whether it’s supporting data from the verification plan, or things like the Common Weakness Enumeration that MITRE maintains, technical solutions are critical because we need automation. We don’t want to manually do stuff, but it’s really the process that’s truly important to build a comprehensive security methodology.
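To make the sign-off metric Oberg describes concrete, here is a minimal sketch, in Python, of how a team might compute coverage of CWE-tagged security requirements from a verification plan. The requirement names, descriptions, and CWE mappings are hypothetical, not drawn from any real program:

```python
# Minimal sketch: coverage of CWE-tagged security requirements.
# Requirement IDs, CWE mappings, and statuses below are illustrative only.
requirements = [
    {"id": "SR-01", "cwe": "CWE-1191", "desc": "Debug access is locked down",   "verified": True},
    {"id": "SR-02", "cwe": "CWE-1244", "desc": "Debug assets are not exposed",  "verified": True},
    {"id": "SR-03", "cwe": "CWE-1300", "desc": "Keys are side-channel hardened", "verified": False},
]

verified = [r for r in requirements if r["verified"]]
coverage = 100.0 * len(verified) / len(requirements)

print(f"Security requirement coverage: {coverage:.0f}%")
for r in requirements:
    status = "PASS" if r["verified"] else "OPEN"
    print(f'{r["id"]} ({r["cwe"]}): {r["desc"]} [{status}]')
```

The point is not the arithmetic but the discipline: each requirement traces to a publicly defined weakness class and to verification evidence, which is what makes the sign-off quantifiable.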
Mazzawi: Today, security verification is performed post-silicon, mostly because mechanisms that can effectively verify security vulnerabilities and execute fault attack simulation at the RTL level do not exist or are extremely slow. What is needed is a specialized layer on top of high-performance fault simulation that allows the modeling of any type of security attack, along with built-in models for laser and electromagnetic attacks.
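As a rough illustration of the kind of pre-silicon fault campaign Mazzawi describes, the Python sketch below models single random bit flips (a stand-in for laser- or EM-induced faults) against a toy lock register, comparing a naive encoding with a fault-hardened one. The register model and encodings are hypothetical, not any real design:

```python
import random

# Toy register-level fault campaign: flip one random bit in a lock register
# and count how often the protected read is wrongly granted.

UNLOCK_NAIVE = 1                    # naive: a single '1' bit means unlocked
LOCK_ENC, UNLOCK_ENC = 0x5A, 0xA5   # hardened: states differ in all 8 bits

def campaign(locked_value, unlock_value, width=8, trials=10_000):
    escapes = 0
    for _ in range(trials):
        reg = locked_value
        reg ^= 1 << random.randrange(width)   # model one laser/EM bit flip
        if reg == unlock_value:               # did the fault grant access?
            escapes += 1
    return escapes

print("naive lock escapes:   ", campaign(0, UNLOCK_NAIVE))
print("hardened lock escapes:", campaign(LOCK_ENC, UNLOCK_ENC))
```

Run at RTL scale across every flop and cycle, this is the kind of campaign that is impractically slow without the specialized layer he is calling for.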
Borza: I agree security is a hot topic. I'd say it's about time. It's always been the year of security every year for the past 25 years, but we're starting to see action that matches the rhetoric. I don't think we're there yet. There's still a huge amount of the industry that takes the approach of putting its head in the sand, hoping to not get caught. I take it as a good sign that we're starting to see a lot more adoption, and real effort going into designing security. I've said for a long time that security needs to be architected in right at the base level. If you don't get it into the silicon, and really deal with it at the silicon level, you've missed your opportunity, and you're left trying to secure your product with a few things that will stave off some of the script-kiddie attacks, some of the simpler attacks. Ultimately it comes back to the silicon, and building in that hard base right off the bat gives you a fighting chance to be able to defend against and recover from attacks. Increasingly, that's important.
Harrison: I’ve been banging the security drum for a number of years, and it’s been in the background. But it’s never really gotten to be the main focus. Now, there seems to be a very much increased level of excitement about security. I’ve worked a lot with test products, and customers are now asking, by default, how security is built into the testing. Then at a different level, I’ve also looked at the automotive vertical market, and that’s a real challenge. When I talk to automotive end customers, a lot of them are still working out themselves what they really need. Again, it’s something that varies across regions, across different countries — things like security type approval for different countries. But we’re getting there, and we’re seeing these requirements emerging. There are things like embedded analytics technology, which is being adopted from a security perspective, and it’s about putting technology in at the low level, in the hardware.
Schirrmeister: Security has a bit of the tenor of something that 'has been, is, and always will be' a hot topic of the future. Then, if I look at the verticals we are serving (consumer, hyperscale, mobile, networking, aerospace/defense, automotive, industrial, and health) set against 12 technology horizontals, security is one of those horizontals, right next to safety, low power, and higher-level elements like AI/ML across digital twins, for example. The interesting situation I see from a solutions perspective is that each of the different domains has a very different focus and priority on security. In aerospace/defense it's a huge topic, but it means something different than in automotive, for example. And it's life-altering if you don't have it in medical/healthcare. So it's a very important topic, very much biased by the application domain it's in. Then from an ecosystem perspective, it's one of those places where you really need a family of ecosystem partners to work together, and there is a large ecosystem to be dealt with. Further, as I was reminded recently at Embedded World in Germany, German actually has only one word for safety and security. That may be indicative of how closely the two are related.
Chang: At Ansys, we started a security initiative about four or five years ago, and as we've been talking to customers, we can see there is very strong interest in pre-silicon security solutions. Once you finish the design and send it to a security lab, such as Riscure, they will give you a report that says pass or no pass. It often doesn't tell you where to fix a problem. By then it's too late, and it will delay the product's introduction to the market. So if we can bring the security solution, especially pre-silicon security sign-off, earlier in the design cycle, all the way to the RTL and system level, that would be perfect. That's the strong request from the customers we talk to, and we are seeing it starting this year. There are some tools coming out, I'm sure. Cycuity/Tortuga has had tools for three or four years already. We just announced a tool for power side-channel and electromagnetic side-channel attacks based on the layout, and a power side-channel solution based on the RTL and gate level. We can provide some insight into possible security weaknesses, based on the content of the designs, that designers need to take a look at. As EDA vendors, we need to provide more tools for pre-silicon security verification and sign-off, and that will come in the next one to three years, because the concept of sign-off in security is still very, very new. Traditional sign-off includes dynamic voltage drop and electromigration. For electromagnetic simulation, which is very common, everyone knows what sign-off means. But if you talk to a security designer and mention pre-silicon sign-off, they have zero idea what that is. So there's a lot of education we need to do, with all the vendors together, to get the message out.
SE: What are the standards for chip security today? How do we work within the ecosystem so no one's reinventing the wheel? Is it possible for methodologies to be developed and implemented today?
Borza: First off, for real hard-nugget security there are FIPS 140-3, Common Criteria, and others. Those have been around for a while, but they're very specialized, very focused on the security of cryptographic modules, which is, as they say, a hard nugget in security. I would say ISO 21434 is the first real industry-focused standard that crosses a lot of domains. And while there are criticisms, it's a good starting point. We're going to see a lot more of that. SAE 232 is moving in the right direction for system-level security. That's going to be very important. MITRE has done a great job and has expanded the role of CWE, the Common Weakness Enumeration, to take in more hardware security, encapsulating and communicating the knowledge that we can gain as an industry so that everybody starts with a common base of knowledge.
Schirrmeister: CWE definitely is one. But while standardization is definitely important, we also need to be very careful. It's really the process we're standardizing, with CWE and MITRE and so forth. Then, the ecosystem needs to work together as providers of tools and IP. The IP supplies it as needed, and the tools are used to prove it. It is very important from an ecosystem perspective that we work together, because nobody can do it all alone. To the standardization point, if you do things like HSMs (hardware security modules) in automotive design, I'm actually not sure about standardization, because everybody wants to have their own secure and proprietary elements. So we see the HSM as being very hard to standardize, with the exception of some of the transactional components, such as how to store the keys and so forth. But people really want that to be as proprietary as possible. It's really a question of what to standardize.
Oberg: There are two broad business drivers here. One is obviously standardization. It's very clear that if you can check a box, someone is more likely to buy. If you sell in a certain market and you have to check the box, you have to do it. That's what standardization can help drive. The other component is really more driven by customer demand for, 'I want a secure product,' or maybe they've actually had that crisis where an attack has actually happened to them. And if you think about defining the systematic process, when you have security requirements that are defined up front, part of those security requirements is actually driven by standards like 144, which is emerging. We have some customers that have used our product to help provide evidence to support some of the data that you provide to those standards. That's a key driver, but it's not the solution, because there are a lot of classes of vulnerabilities hitting others, whether outside of the automotive and IoT markets or what have you, that will compromise the organization's intellectual property but don't necessarily cause any safety violations. And often standardization and safety get closely aligned, so it's important to go through that process. That's where CWE can help a lot, because the companies we talk to have internal requirements and processes that they don't want to disclose to their customers, because it amounts to, 'Here are all the specifics of what we're doing for our intellectual property to prove security.' But CWE is a very transparent way of communicating publicly, because you can say, 'Here are the weaknesses I care about. We have not addressed these because they're out of scope for our market. We've addressed these, and this is how we've done it.' You can have a much more open dialogue using that framework. Standards aren't going to fix the process, but they're a key business driver for certain markets.
Schirrmeister: So that’s the organizing element for the ecosystem.
Fern: With standards there is a balance. They have to be generic enough to capture a wide range of applications, but also specific enough to be actionable. I've seen a lot of phrases in standards that are basically, 'Do not allow untrusted software to run on the platform,' but then they don't provide any guidance as to how to implement secure boot. Standards have to be actionable in addition to being a bit general. There's also a tradeoff with a standard that's really specific; if it's 1,000 pages long, it's also not that useful. There needs to be a balance. I like how CWE is structured, in that it provides some structure that you can navigate. It's not a 10,000-page specification capturing everything, but it's not quite a standard. There's no entity forcing adherence to how many CWEs you have to cover, so there either have to be laws to enforce adherence to the standards, or some other market drivers to make people implement a standard.
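For readers wondering what the missing guidance Fern mentions might look like at its simplest, here is a minimal sketch of one link in a secure-boot chain, assuming an Ed25519 vendor signing key whose public half is provisioned into boot ROM. It uses the third-party Python 'cryptography' package, and all names and payloads are illustrative:

```python
# Minimal sketch of one secure-boot stage: verify the next-stage image's
# signature against a public key assumed to be burned into ROM/OTP.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Vendor side (signing server, not the device): sign the next-stage image.
signing_key = Ed25519PrivateKey.generate()
rom_public_key = signing_key.public_key()            # provisioned into ROM
firmware_image = b"\x7fELF...next-stage bootloader"  # placeholder payload
signature = signing_key.sign(firmware_image)

# Device side (boot ROM): refuse to run anything whose signature fails.
def verify_and_boot(image: bytes, sig: bytes) -> bool:
    try:
        rom_public_key.verify(sig, image)
    except InvalidSignature:
        return False      # stay in recovery; never execute untrusted code
    # hand off execution to the verified image here
    return True

assert verify_and_boot(firmware_image, signature)
assert not verify_and_boot(firmware_image + b"\x00", signature)  # tampered
```

A real implementation adds key revocation, anti-rollback counters, and a chain of such checks from ROM outward, which is exactly the level of detail the standards tend to leave unspecified.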
Borza: For side-channel security sign-off, for example, particular industries may mandate a certain level. TVLA (test vector leakage assessment) is probably the best known, but there are other emerging metrics that do a better job than TVLA at characterizing weaknesses. And there aren't any real standards around that now. The other thing that's different about security compared with a lot of safety certifications, for example, is that security is always a sliding scale. You start at a high level, and then forever you're sliding down a hill where you're becoming less secure, because attack technology evolves and the attackers get cleverer. So we can define a point in time, but you can't define it going forward without recognizing that you're going to become less secure over time. You'd like to be able to refresh that security, and that implies things about how you should build a system, and what you need to do to make it updatable, so it can evolve with the attack technology. Most of the standards are fairly vague about how you implement something versus what you need to do. As Nicole said, when you get the message to implement secure boot, they don't supply guidance about what secure boot means, or even what a trusted bootstrap means. But that's where there's still space for innovation and for distinction among competing solutions. How you actually implement the requirements of the standard is different from the fact that you achieve that standard, so even within the standards, the quality of solutions can still differ.
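The fixed-versus-random Welch's t-test at the core of TVLA is straightforward to sketch. The example below runs it on synthetic power traces with one injected leak, using the commonly cited |t| > 4.5 threshold; the traces and setup are illustrative, not a calibrated campaign:

```python
import numpy as np

# Minimal fixed-vs-random TVLA sketch on synthetic traces.
rng = np.random.default_rng(0)
n_traces, n_samples = 2000, 500

# Noise everywhere, plus a small data-dependent leak at sample 123 in the
# 'fixed' set to stand in for a leaking cryptographic intermediate.
fixed = rng.normal(0.0, 1.0, (n_traces, n_samples))
random_set = rng.normal(0.0, 1.0, (n_traces, n_samples))
fixed[:, 123] += 0.3   # injected leakage, for illustration only

def welch_t(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    # Per-sample Welch's t-statistic between the two trace sets.
    ma, mb = a.mean(axis=0), b.mean(axis=0)
    va, vb = a.var(axis=0, ddof=1), b.var(axis=0, ddof=1)
    return (ma - mb) / np.sqrt(va / len(a) + vb / len(b))

t = welch_t(fixed, random_set)
leaky = np.flatnonzero(np.abs(t) > 4.5)   # TVLA's usual pass/fail bound
print("samples exceeding |t| > 4.5:", leaky)   # expect sample 123 to appear
```

Borza's caveat applies here too: a design that clears this threshold today only establishes a point in time, since better measurement and analysis techniques keep lowering the bar for attackers.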
Mazzawi: Standards are driven by companies, which pull their revenues from explaining the standard after that. So it has to be written in a way that, if you don't understand something, you don't have to pay someone to explain it to you.
Harrison: We did a lot of work with ISO 21434. We base a lot of our technology around security hardening, and it's really not clear in ISO 21434 how to implement hardware security. So it comes back to what you're trying to achieve, but there's no guidance as to what we're aiming to do. We saw a lot of this with ISO 26262. The first version was really vague, and over the years it's gotten a little bit better. People are now starting to understand how to interpret it. I'm hoping ISO 21434 will do the same thing, but at the moment it seems to be more about confusion than guidance.