Chips Getting More Secure, But Not Quickly Enough

Awareness of potential vulnerabilities is increasing, but so is complexity, driven by heterogeneous integration and the growing reliance on semiconductors in critical applications.


Experts at the Table: Semiconductor Engineering sat down to talk about the impact of heterogeneous integration, more advanced RISC-V designs, and a growing awareness of security threats, with Mike Borza, Synopsys scientist; John Hallman, product manager for trust and security at Siemens EDA; Pete Hardee, group director for product management at Cadence; Paul Karazuba, vice president of marketing at Expedera; and Dave Kelf, CEO of Breker Verification. What follows are excerpts of that discussion. To view part one, click here. Part two is here.

SE: From a security standpoint, are we better off with unique architectures and unique components, or with the same components that are designed to resist attacks?

Borza: Standard chiplets that can be connected to perform certain functions in a predictable way are representative of a return to standard ICs in some ways. Part of the motivation is to be able to exploit different technologies in optimal ways that are not necessarily compatible with some of the other things you would package together in the same system. And it’s also a very high-density kind of packaging technology. But part of it also is about trying to get some standardized functions so you can have well-tested and well-proven solutions. That’s admirable, because it means that you can now have specialists who do some things very well providing parts of the solution that are common to a lot of applications.

Hallman: Chiplets give us a better playing field to put up better defenses against the many people who are trying to break into these systems. There are a lot more people playing offense, trying to break into the systems, than there are people defending them. By getting back to some standardized security building blocks — and you have some crowd-sourcing ability, because you have multiple organizations working together on these different chiplets — you’re providing a better base to build on. The goal is to start with a higher bar, and have security monitored down the development path. We’re heading in a good direction. Is this something that can help us level the playing field quicker down the road? That’s the question.

Hardee: We’re seeing a lot of RISC-V architectures these days. Right now, they tend to be at the simpler end: single-issue designs. RISC-V has been a great choice for root-of-trust controllers, secure boot for systems, and stuff like that. OpenTitan was an example of that. But as RISC-V gets applied to more and more sophisticated applications, performance becomes much more of an issue, so the complexity of the RISC-V architectures that people are implementing is increasing. If you look at Arm or x86, they’re not open ISAs. Even the architecture licensees knew they needed to put a lot of effort into verifying the details of the processor implementation. Some of the RISC-V architectures are going to be a bit more Wild West, and you will need to do more sophisticated things for performance, like multi-issue out-of-order execution. We’re going to see huge adoption of formal technology to verify against these security vulnerabilities by anyone who’s implementing less-trivial RISC-V architectures, and there are going to be all kinds of vulnerabilities as these new RISC-V architectures emerge. We really are at the tip of the iceberg in terms of what’s happened with RISC-V so far.

Borza: It feels like ‘Back to the Future’ for RISC-V, because it’s like a replay of what happened with the original RISC engines, where they were all single-issue. The theory was simple instructions, load-store-execute operations. It’s like we’re replaying history from 25 or 30 years ago.

SE: With formal, can you write the assertions as easily now as you could in the past? Are they narrow enough? Or do they have to be wide enough to include a lot of different variables?

Hardee: There definitely are variations in the assertions that will change with every kind of side-channel attack, but a lot of the side-channel attacks relating to processor architectures have some commonalities. If you’re used to designing processor architectures, you’re very sensitive to the vulnerabilities that can exist as you start to do out-of-order execution, branch prediction, and prefetching. There are a lot of processor architects who know very well how to verify that and make it solid. But there also are a lot of people who will be asked to implement RISC-V architectures who are not as familiar with that. So we are going to see some vulnerability to existing known issues, and we’re also going to see all kinds of new vulnerabilities emerge. That’s a safe prediction. The people who are adopting assertion-based and formal methods, who know where the vulnerabilities are in their architectures and how to verify for those — that will continue to be a great growth area, and formal can definitely make a huge contribution. It already has with the existing processor vendors. As RISC-V takes off and the architectures get more complex, that’s just going to grow and grow.
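As a toy illustration of the kind of property such assertions encode (an invented example, not any vendor's flow or any real processor): a secret-dependent early exit creates a timing difference that an exhaustive check over all inputs can flag, much as an assertion over timing behavior would.

```python
# Illustrative sketch only: a secret-dependent branch creates a timing side
# channel, which a check over every secret value can detect.
from itertools import product

def compare_insecure(secret, guess):
    """Early-exit comparison: operation count depends on the secret."""
    ops = 0
    for s, g in zip(secret, guess):
        ops += 1
        if s != g:
            return False, ops   # leaks the position of the first mismatch
    return True, ops

def compare_constant_time(secret, guess):
    """Accumulate differences; operation count is independent of the secret."""
    diff, ops = 0, 0
    for s, g in zip(secret, guess):
        ops += 1
        diff |= s ^ g
    return diff == 0, ops

def leaks_timing(cmp_fn, n=4):
    """Does the operation count vary with the secret for a fixed guess?"""
    guess = (0,) * n
    counts = {cmp_fn(secret, guess)[1] for secret in product((0, 1), repeat=n)}
    return len(counts) > 1   # more than one timing => observable side channel

print(leaks_timing(compare_insecure))       # True  - timing depends on secret
print(leaks_timing(compare_constant_time))  # False - constant-time version
```

Real side-channel assertions are written over microarchitectural state rather than a Python model, but the property shape is the same: an observable output must not vary with a secret.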

Kelf: When we were designing an SoC in the past, you’d take an Arm processor for which Arm already had done a lot of security work — TrustZone, plus all the mechanisms we’re talking about. A lot of those were figured out inside the Arm processors. So now we’ve got these RISC-V processors coming in all kinds of interesting, elaborate architectures. The burden for checking security shifts from the people building the processors to the people using them. So now they have to be aware of all these challenges, and they’ve got to worry about them at the SoC level. And if you bring in DSPs and other devices to work with those RISC-V processors, that super-compounds the problems.

SE: Can the state space even be controlled, given that every use case and every application is going to be different? Even if you think some technology is going to be used one way, it may be used very differently by different people.

Hardee: The state space is potentially vast, so you’ve got to use the right method at the right design scope to cope with that. Formal has a lot to offer with security verification, but it has to be used at the right scope to be able to comprehensively verify for lack of vulnerabilities at the processor subsystem level. To verify these complex processors and multi-processor systems, emulation is absolutely essential. For anyone who is implementing these processor architectures as they get more and more complex, if you’re not using formal and you’re not using emulation, you’re really not a serious player in this game. Emulation is essential to handle the state space at the full processor and multi-processor level.

Kelf: The question with emulation is how are you going to drive it? How are you going to create a representation of the state space at the system level? Formal tools are great, but they obviously can’t extend out to this very large level. And the whole point behind PSS is trying to create a model of the scenarios and the specification of the design at the system level, and then take that and synthesize it into a representation of the state space. The formal tool actually figures out the state space. But what we’re trying to do is figure out the state space based on the specification, and if you can get the specification reasonably accurate and complete, then you can do a pretty good job.

Hardee: It comes back to what we were saying about negative testing. You have to work hard to make sure enough negative testing is there. Being able to do that with the PSS-based test specification, and having that run on emulation, is definitely going to be a big and growing area. Formal is naturally negative testing, and unless you constrain it otherwise, it’s exercising every combination there is of primary inputs. That’s the beauty of formal. It’s naturally negative testing, so it’s naturally good for security verification. But the issue is the scope at which you can apply it. Having multiple tools, and being able to bring that negative testing to the party at multiple levels, is key to success going forward.

Borza: This is going to be another great area for AI to make a contribution, because you can use it to explore parts of the state space and constrain it, eliminating parts that you think are naturally not going to be the source of your problems. You always have to be careful with that, because you’re almost inevitably going to find that you eliminated something you should have investigated more thoroughly. But at least it gives you a chance to start doing some testing out around the fringes that is better than pure random fuzz testing. It’s a kind of AI-driven fuzz testing approach to verification, probing around the edges, which is where a lot of the vulnerabilities can be introduced.
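This idea can be sketched in miniature. The loop below is plain coverage-guided fuzzing with random mutation; the AI-driven variant described above would replace the random choices with a learned model that steers mutations toward unexplored fringes. The `target` parser and all names are invented for illustration.

```python
# Minimal coverage-guided fuzzing loop: keep any input that reaches a new
# branch, and mutate corpus members to probe further around the edges.
import random

def target(data):
    """Toy parser with a bug hidden behind nested branches."""
    branches = set()
    if len(data) > 0 and data[0] == ord('S'):
        branches.add("magic")
        if len(data) > 1 and data[1] == ord('E'):
            branches.add("version")
            if len(data) > 2 and data[2] == 0xFF:
                branches.add("crash")   # the vulnerability
    return branches

def fuzz(seed=b"AAAA", rounds=50000, rng=random.Random(0)):
    corpus, seen = [bytearray(seed)], set()
    for _ in range(rounds):
        inp = bytearray(rng.choice(corpus))          # copy a corpus entry
        inp[rng.randrange(len(inp))] = rng.randrange(256)   # mutate one byte
        cov = target(bytes(inp))
        if "crash" in cov:
            return bytes(inp)                        # reproducer found
        if cov - seen:                               # new branch: keep input
            seen |= cov
            corpus.append(inp)
    return None

crash = fuzz()
print(crash)   # a 4-byte input beginning with b'SE\xff'
```

Pure random testing would need on the order of 256^3 tries to hit the nested branch; keeping coverage-increasing inputs reaches it in a few thousand, and a model that predicts which mutations are promising would shrink that further.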

SE: What you’re doing with AI is utilizing probabilities that are good enough for whatever you’re trying to apply it to. But how do you make sure that what you’re doing is within the parameters that you’ve set? You’re moving weights around, but an attack potentially can shift those weights, right?

Karazuba: The challenge is how to verify the output is actually what you intended to have. A lot of it is real-world testing and results — actual deployment of devices, whether it’s in a lab or in the field. It’s trial and error. You can synthesize what your results may be, but you really need to test it and see what happens, then maybe re-do your training models or your weights and test it again, and you just keep going until you have something that gets to what is an acceptable result.

SE: Putting all of this together, are you feeling better about security today than a couple years ago, or worse?

Borza: There’s more security, but a bigger threat space.

Kelf: Yes, more security and more of an awareness that we need security. Unfortunately, a couple more autonomous vehicles have crashed, and people are getting much more paranoid, which we need to be.

Borza: We’re seeing more willingness to adopt security from a much broader swath of the industry. People who formerly said, ‘It’s not my problem,’ are not saying that as much anymore. And some of the ones who are saying it are whispering it.

Hallman: We’re still looking for a way to catch up with the attackers. Today, you’re still seeing reports on a daily basis of all the different attacks. How are we able to elevate the defenses in that same exponential manner? We’re still in a ‘fall-behind mode,’ still looking for that revolution to catch up. I still feel like we’re losing ground.

Karazuba: I’m comforted by the fact that security is now an above-the-line problem for most people. Most of the people we talk to in chip development don’t see security as someone else’s problem, no matter what they’re doing. It can be the smallest IoT controller up to an ADAS system in a car. They recognize that security is important and are taking steps toward it. We all agree those steps are not enough. Those steps need to be larger and more comprehensive. In the United States, there’s more of a recognition of the need for security that comes out of the CHIPS Act, among other things. But I am optimistic that we’re on the right path. The big questions now are how fast the attackers are growing, and how many attackers there are versus how well we can defend. Is that gap closing or widening? It’s probably closing in some cases and widening in others.

Hardee: This will continue to grow hugely. What keeps me awake at night is that this is an above-the-line problem for people who know they are designing systems that, in some way, need to be high-integrity. There are all kinds of systems owned by people who don’t necessarily think they need to be high integrity. They haven’t seen attacks that threaten national security, personal security, or financial security. There are so many other application domains we haven’t seen being attacked so far, but this is going to be pervasive.
