Keeping up with attackers is proving to be a major challenge with no easy answers; trained security experts are few and far between.
Experts At The Table: Hardware security has evolved considerably in recent years, but getting products to market is a challenge in an environment where threats are always evolving and rarely predictable. That’s especially true given the sheer volume and variety of products being introduced. Semiconductor Engineering sat down with a panel of experts at the Design Automation Conference in San Francisco to discuss how to keep up with attackers, which included Andreas Kuehlmann, CEO of Cycuity; Serge Leef, head of secure microelectronics at Microsoft; Lee Harrison, director of Tessent automotive IC solutions at Siemens EDA; Pavani Jella, vice president of hardware security EDA solutions at Silicon Assurance (on behalf of IEEE P3164); Warren Savage, researcher at the University of Maryland’s Applied Research Lab for Intelligence and Security and currently the principal investigator of the Independent Verification and Validation (IV&V) team for the Defense Advanced Research Projects Agency (DARPA) AISS program; Maarten Bron, managing director at Riscure/Keysight; Marc Witteman, CEO at Riscure/Keysight; Mike Borza, scientist at Synopsys; Farimah Farahmandi, Wally Rhines Endowed Professor in Hardware Security and Assistant Professor at the University of Florida, and co-founder of Caspia Technologies; and Mark Tehranipoor, chair of the ECE department at the University of Florida and founder of Caspia Technologies. What follows are excerpts of that discussion. To view part one of this discussion, click here. Part two is here.
L-R: University of Florida’s Tehranipoor, Silicon Assurance’s Jella, Cycuity’s Kuehlmann, Microsoft’s Leef, Synopsys’ Borza, Riscure/Keysight’s Witteman, DARPA’s Savage, Riscure/Keysight’s Bron, University of Florida’s Farahmandi, and Siemens’ Harrison.
SE: Identifying threats is one thing. How do you translate all this into products in a timely and cost-effective manner?
Tehranipoor: Solutions are easier when the threat is well known, and this is why 75% of researchers do attack and 25% do defense. As a person who digs for months to find a vulnerability, I’m in a position to tell you how to fix it, too, because I know exactly how and when to cause that vulnerability, and I’m the one who can tell you where you can make the patch. That’s really important. I watched Riscure grow over the past almost 25 years, and the evolution of the product is a good example of what it takes to generate the next product and the next product, because you can see the attacks are evolving. When you think about solutions, I go back to the same thing — be careful with the standards, be careful with solutions. Nothing is set in stone when it comes to security. Attackers are evolving, so solutions have to evolve. In my opinion, the biggest problem you have to deal with when it comes to security is that companies cannot afford to give you more than 10% of their costs for verification. That’s actually a lot. If I’m your boss and I tell you, ‘You’re a great designer, you’re a great security engineer, but here’s the thing, I’m only going to give you 5% of my cost, give me the most security that I can get,’ what would you do? A cost-risk analysis tradeoff, right? Then what you need to do is get as many of the low-hanging fruits, the ones with the highest-risk results, out of the way. The solutions have to deliver that. You can’t just leave that to an engineer to give to you. It’s just not going to happen, for a variety of reasons. One, none of those engineers actually understands security. Security has to be given to the security experts so they can actually deliver it to you. Two, it reduces the cost when you do that, because the cost of an engineer and the cost of a license that can be used many, many times are almost identical these days. It’s $200k for a license and $200k for an engineer.
The last one is that they are time-insensitive, which means a solution for a particular attack today, six months down the road, five years down the road, is still catching that attack. Yes, there will be new attack vectors that you have to provide solutions for, but what you provide as a solution now is always going to catch that particular vulnerability. But the size of the database of all the vulnerabilities just keeps growing. For example, at Caspia, we have the largest database of vulnerabilities that have been identified. We have about 127 vulnerabilities that we’re able to check for today, and the target is 200 because we have identified that many. We took a top-down approach. We took all the CWEs and CVEs and trust vulnerabilities and brought them down to what we call security rules. Also, I want everybody to be very careful when it comes to solutions. Not every security weakness is a vulnerability. Not every vulnerability is an exploitable vulnerability. There may be a vulnerability at the chip level, but the system doesn’t allow you to extract it. There may be a vulnerability in the system, but the system of systems doesn’t allow you to get at it. So this is why understanding both the top-down approach and the bottom-up approach is important.
Harrison: I’ve been more focused on automotive, and in that world the systems are really complex. There are lots of different component parts. They’ve already adopted the ISO 21434 standard, which sounds like it’s doing exactly the same thing as the IEEE standard. It doesn’t prescribe a solution. It prescribes an approach as to how to document this, because there’s no point in having different ways to document things and clarify things at different levels of the automotive system. You need that to be consistent as you build up your automotive system from the semiconductor to the ECU and to the vehicle. That needs to be consistent all the way through. So with ISO 21434, I hear people saying all the time that they expect it to give the answer. If we comply with 21434, that’s our security solution. No, it describes how to do the analysis. It gives a consistent way to address all of the modeling and the vulnerabilities so that you can bundle them with the particular component and then pass them up the chain as the vehicle is developed. It makes a lot of sense. We don’t want to build a standard that gives you a solution, because as soon as you do that, it’s going to be wrong and it’s going to be out of date. These standards take so long to put in place. By the time you’ve done that there is no point. Having these kinds of open-ended, more guidance-based standards is one step in the right direction. It really helps the EDA companies to automate a bunch of this stuff, because we need to be able to produce this collateral in this way and better feed it into the ecosystem. You’ll start to see that in the same way as functional safety has evolved. Five or six years ago, there were no real functional safety tools doing that kind of thing. Now, functional safety tools are doing the analysis that generates the reports, and then driving the tools further to actually do the insertion of safety mechanisms and fix the safety problems.
Hopefully in security we’re going to head in the same direction, but we’re maybe five or six years behind where functional safety is today, so we have a little bit of catching up to do.
Borza: One of the best things about 21434 is it imposes a requirement to produce a TARA, a documented threat analysis and risk assessment that says, ‘Okay, for this particular system, or system of systems, or whatever it is, this is the security posture of the thing by design.’ Just producing that immediately shines light on what is addressed, what’s not addressed, and how the designers of the system thought about the security problem in that context.
Harrison: It tries to get everybody in that ecosystem thinking in the same way. Otherwise, you end up with one group of people with one idea and another group of people with a completely different approach. So at least 21434 is trying to bring everybody onto the same page. From an EDA vendor perspective, it really helps us speed up the development tools and technology.
Bron: This kind of common thinking is really needed. I had a smile on my face when we started talking about solutions and products. Mark knows better than I do, but side-channel attacks first came to light in 1996, and then everybody talked about DES and why DES implementations were vulnerable. Then we got triple DES and we didn’t learn. Triple DES implementations continued to be vulnerable. Then we got AES, and it was the same thing. Each is the next big thing, but it really isn’t, because we haven’t learned. Recently, we had our first Dilithium implementation, military-grade security. It’s the next big thing, it’s quantum-resistant, but if you don’t learn, the implementations often are vulnerable. It ties back to what the demand driver is for having security.
Borza: I was doing the same thing that you were in 2003 to 2004, and even up to today you still hear people who say, ‘Well, I don’t think side-channel attacks are going to be a problem for my system.’ The number of people who are still trying to hide their heads in the sand about it is stunning.
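The timing leaks the panelists describe can hide in code as innocuous as an equality check. As a hypothetical illustration (not drawn from the discussion), the sketch below contrasts a byte comparison that returns early, leaking through its runtime how many leading bytes of a guess were correct, with a constant-time comparison from Python’s standard library:

```python
import hmac

def leaky_equal(secret: bytes, guess: bytes) -> bool:
    # Returns on the first mismatched byte, so the runtime reveals
    # how many leading bytes of the guess matched -- a classic timing
    # side channel an attacker can probe one byte at a time.
    if len(secret) != len(guess):
        return False
    for s, g in zip(secret, guess):
        if s != g:
            return False
    return True

def constant_time_equal(secret: bytes, guess: bytes) -> bool:
    # hmac.compare_digest inspects every byte regardless of where a
    # mismatch occurs, so timing does not depend on the secret's value.
    return hmac.compare_digest(secret, guess)
```

Both functions compute the same answer; the difference an attacker cares about is only visible in execution time, which is exactly why these bugs survive functional verification.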
Bron: We’ve come from devices, to smart devices, to smart connected devices over the last 50 years. When smart connected devices first saw the light, security was still optional because there was a cost of security and a cost of compliance. Today, security is no longer optional. At least from the conversations that I have with customers, they want to be able to show the market that effort has been put into compliance. The U.S. government is talking about the Cyber Trust Mark, which is a voluntary program. But at least it gives manufacturers a way to show their customers, ‘Hey, we haven’t completely siloed security to death. This may be something that is better.’ The state of the ecosystem, where security is no longer optional, is still very, very young for that reason, and I’m excited to see where it will go in terms of regulation.
Borza: The number of IoT companies that are selling you stuff to put in your house are still playing the game of, ‘Sure, we have security.’ That’s still the biggest risk going. Those are the things that get harvested into botnets.
Tehranipoor: We actually worked for a year with NIST, and we broke into the three candidates they have for post-quantum crypto using side channels in six months. We got back to them, they gave us permission to publish, and all three of the finalists were susceptible. Everybody knows this thing called an HSM, a hardware security module. Look at the market size. No matter how much we criticize security, it has to take its course. Let’s not forget, when you guys started, security wasn’t even such a thing. Where we are today is much better than we’ve ever been. Five to 10 years down the road, the market size is going to be $5 billion.
Jella: The chasm for me is bringing the cost and time-to-market down. That is the barrier. Once you cross that chasm as an industry, the adoption barrier breaks, and what will trigger the adoption rate to go beyond that is definitely AI. But the IP vendors and the designers should be protected. The companies that are designing should be protected under the AI regulations so there are no unlawful things happening. If that is protected, then AI can help solve this problem. And that’s where I see the chasm. It’s the regulatory work going on behind the scenes and bringing the cost and time-to-market barriers down. It will happen through AI, but AI has to be monitored to protect the vendors in the marketplace. Until then it will be more customization-driven. So the defense market will come and say, ‘We need this security in this particular chip.’ Automotive actually is guided pretty much by the tools. For any automotive customer the first question is, ‘Do you have this compliance?’ It’s driven by the need at this moment, but until we cross that chasm with these three things in mind, adoption cannot actually scale.
Farahmandi: I agree with the trend in academia that a lot of time should be spent on finding these vulnerabilities. When those vulnerabilities are catalogued, and we know when and how they are going to be introduced into the design, that is going to be very helpful for enforcing the kind of regulations and standards we are developing. And then you can categorize these vulnerabilities, and for each of them you can create some products. What we need to consider for security products in EDA is very different from other kinds of products we develop. You might check for some vulnerabilities at high levels of abstraction, but then the design goes through a lot more modifications or customization. So the same vulnerability that you checked for earlier might reappear in later stages. That means you have to have the same products for different levels of abstraction, because new features are going to be added and the security of the design changes when the design changes. So then you have to have a product for each of these levels. You have to understand the new features and new transformations of the design, and the security requirements that come with them. Unless you do the extensive research on what the vulnerabilities are, and how these vulnerabilities can happen, you cannot come up with these requirements and products.
Witteman: Many people feel that security should be perfect, but we all know that perfection doesn’t exist. It’s the enemy of good. Ultimately, a product needs to be so secure that breaking it will cost more than an attacker is willing to spend. That means the designer of a product should understand the attacker. You should really be looking at the attacker’s business case. When you understand the attacker, and understand how the attack can be successful, you also will know how much to spend in preventing it. This is where threat modeling comes into play.
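The attacker’s business case Witteman describes can be made concrete with a toy cost model. The sketch below is purely illustrative, with made-up numbers; it just encodes the rule of thumb that an attack is worth mounting only when its expected payoff exceeds its cost:

```python
def attack_is_economical(attack_cost: float,
                         success_probability: float,
                         payoff: float) -> bool:
    # Expected attacker profit = p * payoff - cost. A defender only
    # needs to push this below zero, not make the attack impossible.
    return success_probability * payoff - attack_cost > 0

# Illustrative numbers: a $50k lab attack with a 20% success rate is
# worth mounting against a $1M payoff, but not against a $100k payoff.
print(attack_is_economical(50_000, 0.2, 1_000_000))  # True  (+$150k expected)
print(attack_is_economical(50_000, 0.2, 100_000))    # False (-$30k expected)
```

Raising the attack cost (countermeasures) or lowering the payoff (limiting what one broken device yields) are two independent levers for flipping that inequality.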
Savage: It seems to me that the general education level of the typical engineer is super deficient. You can’t design for security unless you have some semblance of what could possibly happen to your work down the road. There should be some type of courseware incorporated at the undergraduate level — just one lecture on security to sensitize people. You’re going into this career and these are the types of things you’re going to be worried about down the road.
Borza: Today a designer is still educated on power, performance, and area optimization. Security doesn’t even factor into that.
Leef: In the software world, clearly it’s different. Both my sons are computer scientists, and both of them, at interviews with totally different Big Tech employers, were asked to write Diffie-Hellman key exchange code in the course of the interview. That just does not happen on the hardware side.
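For readers unfamiliar with the interview exercise Leef mentions, a minimal Diffie-Hellman sketch looks like the following. This is a toy illustration with demo parameters, not production code; a real implementation would use a vetted library and standardized parameters (for example, RFC 3526 groups or an elliptic-curve exchange such as X25519):

```python
import secrets

# Demo parameters only: a Mersenne prime modulus and a small generator.
p = 2**127 - 1
g = 3

# Each party picks a private exponent and publishes g^x mod p.
alice_priv = secrets.randbelow(p - 2) + 1
bob_priv = secrets.randbelow(p - 2) + 1
alice_pub = pow(g, alice_priv, p)
bob_pub = pow(g, bob_priv, p)

# Each side combines its own private key with the other's public value;
# both arrive at the same shared secret g^(ab) mod p, while an
# eavesdropper sees only p, g, and the two public values.
alice_shared = pow(bob_pub, alice_priv, p)
bob_shared = pow(alice_pub, bob_priv, p)
assert alice_shared == bob_shared
```

The exchange itself is a few lines of modular arithmetic; the hard part in practice is parameter choice and side-channel-safe implementation, which is exactly the gap the panel is pointing at.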
Tehranipoor: The point that you made is valuable. We actually teach these courses. At UF, between ECE and CSE, we have 18 cybersecurity courses, but a lot of it is computer security, encryption security, network security, x, y, and z. We’re the only institution that actually has five or six hardware security courses. I have a company because I don’t want any of my students to go anywhere. They take hardware security courses for five years. Why should I lose them to you guys or somebody else when I trained them for five years?
Savage: Can you imagine the impact of a new grad coming into an existing team and asking security questions? What’s the security policy in the room? Blank look. ‘Well, we haven’t thought about security.’ It certainly could percolate up from the bottom.
Tehranipoor: It’s not just a matter of the question, but also the answer itself. You want people who are trained to be able to help you with finding those answers. I have a theory for that. Until we’re there, look for the companies that can give you the right EDA solutions, because at the end of the day, you can never make all engineers understand security. Security is a complex thing. You have to be very careful. You’re going to have to give it to the people who know security, who are the best at it. Asking the right question is important. Getting the right answer is even more important, because you could be very wrong about it. So you’d better have security experts to handle it.
SE: There is still a significant gap in just the knowledge about hardware security. How do you even pick an EDA tool if you don’t even know the right questions to ask? How does the industry fix this?
Leef: Like everything else, it boils down to economics. People here on this panel have said it would take some kind of catastrophic breach. Occasional jolts to the system do it. There’s also the fact that Mark and others like him are now turning out hardware security experts. Those are impossible to find. Let’s say there’s an automotive merchant semiconductor company, for example, that should be all over secure hardware. Their excuse is, ‘We can’t find people. We can’t find engineers who know anything about this.’ In the absence of people speaking up on this topic, championing hardware security, they’re just going to say this is a tradeoff. What’s the likelihood that something is going to happen?
SE: We’ve talked about modeling, we’ve talked about products, we’ve talked about challenges, and we’ve also said nothing is 100% secure. Given that, what is the role of resilience?
Jella: You can’t stop issues at every level 100% of the time, but creating more checkpoints will be extremely difficult in the supply chain. For instance, you have a number of tools at the RTL stage, gate level, GDS, and prototyping. From there it goes into actual manufacturing. And when the chip is out in the market, you have to strengthen the checkpoints with a lot of options. EDA vendors and tool providers bring those options to the table.
Borza: I want to address this from the product-centric point of view. Part of high-quality security design for a product is the notion that it’s going to be breached at some time, and the ability to recover from that needs to be built into the protocols and the support for that product. That’s just a recognition that we are imperfect in our security designs and not everything is going to be covered. So the idea of being able to recover a device when it’s breached has got to be built into the product when it ships.
Leef: Anybody who asserts 100% assurance is likely a charlatan. Mike is a pragmatist. I was actually looking forward to your answer on this. My thesis has been that you have to trade off economics and the magnitude of the threat, and that is the best we can do. In other words, there will be lawn sprinkler people who will put very little security into this, and there are going to be people who make guided missiles who put a lot in, but they’re still not going to get 100% assurance.
Harrison: I just come back to safety again. It’s the same analogy. With safety, you can put everything into the device, but at some point, there’s going to be a fault. First, you need that resilience to be able to detect when something is going in the wrong direction. And second, you need to make sure you can recover from that without too much impact on the overall operation.
Witteman: In software, resilience is easier because you can patch software. You can’t patch hardware, so that makes resilience harder. There’s a bright spot here that security is a combination of hardware and software, and sometimes they can correct for each other. Sometimes you can fix a hardware problem or mitigate a hardware problem in software. As long as we keep software patchable, we may be able to mitigate some of the hardware weaknesses.
Read parts one and two of the discussion:
Defining Chip Threat Models To Identify Security Risks
Not every device has the same requirements, and even the best security needs to adapt.
Hardware Security Set To Grow Quickly
As vulnerabilities become more widespread and better known, industry standards will have to keep up.