New tools and techniques are being developed that can help keep the verification process secure, alongside a firm foundation of good design verification practices.
As designs grow in complexity and size, the attack surface for potential hackers to infiltrate a chip at any point in the design or verification flow expands commensurately. Long considered a “safe” aspect of the design process, verification now must be a security focus for chip developers.
This also means the importance of trust has never been higher, and the trust and assurance flow never more critical. Organizations such as MITRE, which created and maintains the Common Weakness Enumeration (CWE), play an important role in highlighting vulnerabilities so that companies across the industry can stay informed and respond accordingly.
“We work with a number of customers, especially related to military-type activities, that are very concerned about the possibility of a nefarious agent of some kind potentially infiltrating either the design or the verification, or both, and either inserting something, intentionally not finding something, or somehow placing something in there,” explained David Landoll, product manager for formal and static products at Siemens EDA. “There are hiring practices and other kinds of things you have to have in place to make sure you trust your people, but the problem is that today’s technology is getting to be so large, so complicated, that it becomes financially intractable to try to make these designs from scratch. That means you have to pull in third-party designs, third-party IP, and that invites other people who are outside of your purview or control. How do you then scrub those designs? How can you try to make sure those things are free of Trojans? How can you make sure these designs are free of nefarious intent?”
Security vulnerabilities in the verification flow are notoriously difficult for human engineers to detect due to the complexity and scale of modern hardware designs, said William Wang, CEO of ChipAgents. But new technologies can help.
“AI agents, when properly guided with secure coding practices and careful instructions, can significantly enhance security assurance,” Wang said. “By combining static analysis with dynamic compilation and simulation, AI agents can automatically scan for patterns that may introduce side-channel leaks, privilege escalation risks, or design backdoors, surfacing issues early in the RTL stage before they propagate downstream. This agentic approach augments human verification with scalable, intelligent coverage, offering a promising layer of defense in an increasingly adversarial landscape.”
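As a concrete illustration of that first static pass, the minimal sketch below shows a rule-based scan over RTL source of the kind an agent might run before compilation and simulation. The patterns and the `rtl/` source tree are hypothetical, invented for this example, and not any particular tool’s rule set.

```python
# Minimal sketch of a rule-based RTL scan of the kind an AI agent might
# run as a first static pass. Patterns and file layout are hypothetical.
import re
from pathlib import Path

# Each rule pairs a regex over Verilog source with the concern it flags.
SUSPICIOUS_PATTERNS = [
    (re.compile(r"==\s*(?:32|64)'h[0-9a-fA-F]{8,}"),
     "wide hardcoded compare: possible magic unlock value or backdoor"),
    (re.compile(r"\bdebug_(?:en|mode|unlock)\b", re.IGNORECASE),
     "debug control signal: confirm it is tied off for production"),
    (re.compile(r"\bif\s*\(\s*1\b"),
     "constant-true branch: a security check may have been disabled"),
]

def scan_rtl(root):
    """Yield (file, line_no, source_line, concern) for every rule hit."""
    for path in Path(root).rglob("*.v"):
        for no, line in enumerate(path.read_text().splitlines(), start=1):
            for pattern, concern in SUSPICIOUS_PATTERNS:
                if pattern.search(line):
                    yield str(path), no, line.strip(), concern

if __name__ == "__main__":
    for f, no, line, concern in scan_rtl("rtl/"):  # "rtl/" is a placeholder
        print(f"{f}:{no}: {line}\n    -> {concern}")
```

A real flow would feed hits like these into compilation and simulation for confirmation rather than reporting them directly.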
However, there is widespread agreement that boundaries need to be set for agentic AI. “What needs to be built into chip verification from a security perspective starts with a set of functional requirements for a chip or chip function, which is the list of intended behaviors, and occasionally you’ll enumerate parts of the design space or potential design space that you specifically disallow,” said Mike Borza, principal security technologist and scientist at Synopsys. “That means you’re not going to consider those, or they don’t drive your thinking about setting requirements and objectives, and usually that’s where chip designers stop. But in the case of security, you have this whole other space of people trying to essentially break the functionality in an intentional way, so you need to be more thorough in defining that negative space. Can somebody get this chip operating in a mode in which it was never intended? Can I get it to transition to a safe state or to recover its security posture and essentially come back into the intended functional space of the original design intent?”
What’s missing
However, putting boundaries around AI is easier said than done, and the numbers substantiate that. “The complexity of today’s designs has made verification very, very important, and if you look at the numbers that are reported, the verification market has grown to more than $2.2 billion as of last year and continues to grow at a rapid pace,” said Mark Tehranipoor, chair of the Electrical and Computer Engineering Department at the University of Florida and co-founder of Caspia Technologies. “The focus when it comes to logical verification is on functional correctness, power, performance, and area optimization. But what’s missing in that verification aspect is the notion of security. The complex designs that we have today, and the interaction between different IPs in the design, have made security and its verification extremely difficult and challenging. There are all sorts of vulnerabilities, with more coming in every day, which has made security such an important problem to address.”
Adam Sherer, account technical director at Cadence, noted that users require both secure verification and security verification. “The former is focused on assuring that the verification process itself is secure, including the data we need to analyze to show the design is comprehensively verified. Of course, that includes making sure that only authorized individuals have access, but that alone doesn’t speak to the risk. Plans show where the verification effort is deployed, and coverage/metrics reports show where it’s been executed. The gaps can indicate bugs and/or under-verified areas that can lead to potential attack surfaces.”
At the same time, the security verification discussion is enmeshed with the secure verification challenge.
“When customers discuss security verification, they are careful to discuss general approaches to keep their verification process secure,” Sherer explained. “As a result, we often use the MITRE Common Weakness Enumeration to initiate the discussion, and that leads to discussions of formal methods, rules-based methods, and negative testing methods. Cadence’s Jasper can formally verify that the design is free from some weaknesses, and Cycuity’s RADIX rules-based methods run on simulators and emulators, like Xcelium and Palladium, to address a larger state space. Generating negative stimulus using Perspec can uncover security concerns such as unplanned features remaining in the design. Each of these yields security verification metrics, but those metrics must be integrated with functional verification metrics, and often with safety verification metrics.”
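The negative-testing idea can be pictured with a small sketch: drive accesses the spec explicitly disallows and treat any success as a finding. The register map, privilege levels, and `dut_allows` stand-in below are invented for illustration; a tool like Perspec generates this kind of disallowed stimulus at system scale against a real design.

```python
# Hedged sketch of negative testing against an invented register model.
import random

# Hypothetical register map: address -> privilege levels allowed by spec.
ACCESS_POLICY = {
    0x0000: {"user", "supervisor"},   # status register
    0x0004: {"supervisor"},           # key-load register
    0x0008: set(),                    # factory-test register, never accessible
}

def dut_allows(addr, privilege):
    """Stand-in for the design under test; wire this to a real DUT."""
    return privilege in ACCESS_POLICY.get(addr, set())

def negative_test(trials=1000):
    """Any access the spec forbids but the DUT accepts is a finding."""
    findings = []
    for _ in range(trials):
        addr = random.choice(list(ACCESS_POLICY))
        priv = random.choice(["user", "supervisor", "debug"])
        if priv not in ACCESS_POLICY[addr] and dut_allows(addr, priv):
            findings.append((hex(addr), priv))   # unplanned access path
    return findings

print("security findings:", negative_test())   # empty for this clean model
```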
Security efforts have been ongoing for some time, even if not publicly discussed. A number of years ago, DARPA published outlines for how to verify systems, recalled Tim Schneider, director of application engineering at Arteris. “At that level, it was [indicating] separate teams, whereby the verification guys cannot be the same guys that implemented it, and it was basically about having multiple sets of eyes on the design: systems guys, RTL guys, back-end guys. Now, bringing that forward, security issues like Spectre and Meltdown with Intel CPUs, and row hammer attacks against memory, aren’t really inside attacks. Those are attacks from outside, figuring out exploits. To find something inside, you would use a formal tool. In our software, if you’re using this kind of an environment, any changes are tracked, so it would be really challenging for a bad actor to slip something in. You also would have to slip something into the verification side to hide the fact that you’d built in an extra state machine, for example. Right now, most of these vulnerabilities are because of different use scenarios or cases that people haven’t come up with.”
So can a bad actor somehow infiltrate the verification process and cause bad outcomes? “This is a valid concern, especially when you outsource verification and even some of the design,” observed Nandan Nayampally, chief commercial officer at Baya Systems. “There are two ways to solve that problem. One, you own everything in-house, which is not very scalable. Or two, security is a root of trust, so you’re going with a trusted supplier. Is their process trusted? Things like ISO 9001 are starting points. They set the baseline saying, ‘Good,’ but it doesn’t necessarily preclude bad activity. So we certainly think security in the design and verification process is critical, and it has to be owned at some point by you. Otherwise, it doesn’t happen. That also means the trust goes down the line, even with suppliers.”
Potential attack vectors
Mimicking single-event upsets creates another potential vector. “In the safety-critical space you consider cases where the system may be put into an unintended mode of operation due to accidental upsets, but those tend to be fairly limited to single- or low-integer numbers of upsets that cause the thing to transition to that state,” Borza said. “And because those are accidental situations, if you have something caused by a stray alpha particle, flipping a bit somewhere is a very low probability event. The chances of two of those things happening at the same time in a way that affects the same functional behaviors are very low probability squared, so people tend not to consider that very often. People will consider multi-event upsets, but in low, low quantity. In security, by contrast, somebody may be trying to overwhelm the system, or they may not have sufficient control of their fault stimuli to focus on just one thing. So you might upset a whole region of a chip, and all of those things can relate to the same functionality. You can have this thing go into a massive upset state that you need to plan for, and you need to develop ways to test whether the recovery of that chip or that functionality from those states is feasible. Can you detect it? Can you respond to it? And what can you do? You need layers of defenses for those things, and all of those defenses need to be tested themselves.”
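A toy version of such a campaign is sketched below: exhaustively flipping one, two, or three state bits of a small one-hot state machine and checking that the recovery logic always lands back in a legal state. The FSM, its encoding, and the recovery rule are invented for illustration; real campaigns inject faults into RTL or gate-level state.

```python
# Minimal fault-injection sketch, assuming an invented one-hot FSM with
# a fall-back-to-SAFE recovery rule. Real campaigns target RTL/netlists.
import itertools

SAFE, ARMED, LOCKED = 0b001, 0b010, 0b100     # one-hot state encoding
LEGAL = (SAFE, ARMED, LOCKED)

def next_state(state):
    """The defense under test: any non-one-hot state falls back to SAFE."""
    return state if state in LEGAL else SAFE

def inject(state, bits):
    """Flip a set of bit positions at once, modeling a regional upset."""
    for b in bits:
        state ^= 1 << b
    return state

# Exhaustively try every 1-, 2-, and 3-bit upset of every legal state.
for start in LEGAL:
    for n in (1, 2, 3):
        for bits in itertools.combinations(range(3), n):
            recovered = next_state(inject(start, bits))
            # Note: a flip landing in a *different* legal state is a
            # silent transition, which a real analysis would also flag.
            assert recovered in LEGAL, (start, bits)
print("all injected multi-bit upsets recover to a legal state")
```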
There are a number of technologies that will go through and scrub a design to try to look for some signature that would “indicate the presence of something that would be unexpected in comparison to the surrounding behavior or logic, or it exhibits a signature that appears to be consistent with a nefarious agent,” Landoll said. “If the nefarious agent was the one running that tool, then all hope is lost. No matter what you do, you’re going to have a weakness.”
Part of this is also a recognition that security is a problem. By all accounts, adoption of security techniques and tools in the commercial space lags behind military applications, usually due to cost pressures.
“This is all about mitigating your risk, like buying insurance,” Landoll said. “At some point you could buy insurance for your house, for all manner of things, but it’s like you’re guarding against something like an alien attack, and you have to ask, ‘Why am I doing this? The chance of this happening is minuscule.’ And that’s what we’re observing. The commercial companies are much more worried about the presence of a vulnerability in the design, which is either a bug that they fail to detect or some exposure that somebody could pick up through a side channel attack or something else. What we’re seeing in the commercial space is a significant interest in those types of security measures, but as far as trust, we tend to see that as more of a program requirement, such that whenever a customer demands that you have a trusted and assured supply line, that’s when that technology tends to get deployed. We are getting interest. We are getting more customers that are saying, ‘Tell us about this.’ But that’s still not the primary market.”
Testing is key
Testing of the design verification defenses is critical. One approach is to inject a fault stimulus into the verification infrastructure.
“You can try to develop theoretical means to detect and respond or detect and recover from those things, in which case you’re often in the realm of formal verification,” Borza continued. “It’s those kinds of techniques that you’re using with a significant amount of imagination to try to make sure that you’ve covered the space adequately.”
The worst way to find out that these methods haven’t worked is a successful attack propagating through the population of products. “That’s a really bad way to find out that you didn’t do a good job,” he said. “One thing that’s quite common now is Red Teams, which are either independent parts of a company charged with essentially devising and executing attacks on that company’s products, or, in some cases, outsourced. There are independent Red Teams who contract their services to companies that are willing to pay for that evaluation, and it’s a very strong and worthwhile exercise. It’s expensive, which is one of the reasons people hesitate to do it, but the companies that are working hard to make sure things are under control often have those groups internally, and they’re given a significant amount of leeway to go after that company’s products in a controlled way, and one in which they’re not embarrassing the company. It’s part of an aggressive program to make sure you’re weeding out weaknesses in the product designs.”
One of the best things that can come out of these kinds of exercises is that knowledge can be fed back to the entire design process, which includes design verification and all of the product planning that goes into it. “This includes what was learned in these kinds of successful attacks, and so you make it more and more difficult over time to attack those products, because you’re taking the learnings from successful attacks and turning them into design principles and policies and techniques that can avoid those things in the future,” Borza added.
Whatever form an attack takes, where it originated typically can be determined with trust and security tools, ultimately formal verification tools, unless the bad actor is within the organization.
“If this nefarious person was actually doing this formal verification portion of it, I don’t know of a way to try to detect that,” Landoll said. “At some point, if you’re using automation to detect these kinds of things, and the person who’s running that automation is the nefarious actor, I don’t know how to get around that, unless you’re talking about a layered approach, where one person is doing it, and somebody else is doing it, and they’re cross-checking each other’s work. But that would double the cost, and that’s not terribly tractable. Formal verification is exhaustive, which is why security is a good application for it. What you’re trying to find is an absolute negative response, but all simulation, emulation, or other techniques can do is say the vectors you threw at it didn’t expose the problem you’re worried about. Is there a way to do that? Formal is the only tool that can give you that answer. Other techniques may be used later in the project, but if they detect something, the obvious question would be, ‘Why wasn’t this caught earlier?’ Then you would go back and start looking at tests, and the whole thing would unravel.”
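That distinction can be shown on a toy lock controller: breadth-first exploration of every reachable state can prove the absolute negative (the unlocked state is unreachable without the correct token), while random vectors only report on the sequences actually tried. The controller, token value, and input set below are invented for illustration; this is not a real formal engine.

```python
# Sketch contrasting exhaustive (formal-style) exploration with random
# simulation on an invented lock controller.
from collections import deque
import random

def step(state, inp):
    """Toy transition relation: unlock requires the loaded token 0xA5."""
    locked, token = state
    if not locked:                        # once unlocked, state is absorbing
        return state
    if inp[0] == "load":
        return (True, inp[1])
    if inp[0] == "unlock" and token == 0xA5:
        return (False, token)
    return state

INPUTS = [("load", v) for v in range(256)] + [("unlock",)]
INIT = (True, 0x00)

def exhaustive_check():
    """BFS over all reachable states proves the negative property."""
    seen, frontier = {INIT}, deque([INIT])
    while frontier:
        s = frontier.popleft()
        for inp in INPUTS:
            t = step(s, inp)
            if t not in seen:
                seen.add(t)
                frontier.append(t)
    # Proven for *every* reachable state: unlocked implies correct token.
    assert all(token == 0xA5 for locked, token in seen if not locked)
    return len(seen)

def random_check(trials=1000):
    """Random vectors only say these particular runs found no violation."""
    for _ in range(trials):
        s = INIT
        for _ in range(10):
            s = step(s, random.choice(INPUTS))
            assert s[0] or s[1] == 0xA5

print("reachable states explored:", exhaustive_check())
random_check()
```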
But with concerns about AI being a more powerful adversary, can formal verification still exhaustively protect?
Landoll says the answer is yes, it can. “There is the obvious limitation with designs approaching billions of gates, because formal doesn’t have the capacity to look at that entire thing all at once,” he said. “We do have some technologies that work in conjunction with the formal analysis that allow you to do structural analysis of really, really big designs to make sure the different pieces are isolated, and they can detect mysterious connections to something like a Root of Trust that shouldn’t exist. We can detect those kinds of things even in the presence of a really, really big design. It’s these kinds of techniques that are advancing, and they’re advancing quickly. Ultimately, I don’t think AI has any bigger exposure than just a super clever hacker with infinite time. The biggest issue is that AI accelerates what a hacker can do. You can take what would have taken three years, and they can now do it in three hours, and that creates a much more dangerous situation.”
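A simplified picture of that structural check: abstract the design into a connectivity graph and flag any block that can reach the root of trust outside the paths the architecture allows. The block names, edges, and allow-list below are invented; production tools work from the real netlist and distinguish initiators from transit fabric.

```python
# Hedged sketch of structural connectivity analysis on an invented
# netlist abstraction: flag unexpected paths into the root of trust.
from collections import deque

# module -> modules it drives (hypothetical connectivity extraction)
CONNECTIVITY = {
    "cpu":           ["bus"],
    "debug_tap":     ["bus"],       # suspicious: debug can reach the bus
    "bus":           ["ddr_ctrl", "root_of_trust"],
    "ddr_ctrl":      [],
    "root_of_trust": [],
}

# Initiators the security architecture permits to reach the root of trust.
ALLOWED_SOURCES = {"cpu", "bus"}    # bus is transit fabric, allowed here

def reaches(src, dst):
    """Breadth-first search over the connectivity graph."""
    seen, frontier = {src}, deque([src])
    while frontier:
        node = frontier.popleft()
        if node == dst:
            return True
        for nxt in CONNECTIVITY.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return False

for block in CONNECTIVITY:
    if block != "root_of_trust" and reaches(block, "root_of_trust"):
        verdict = "expected" if block in ALLOWED_SOURCES else "FLAG"
        print(f"{block} -> root_of_trust: {verdict}")
```

Run on this toy graph, the loop flags `debug_tap`, the unplanned path into the root of trust.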
The biggest exposure is with RF-type side-channel attacks. “This is because previously you had to be able to, as a human, discern patterns, maybe with an oscilloscope, to try to detect them,” Landoll explained. “Now, you can just run that data through an AI and basically say, ‘Here’s the pattern that I put in. Can you detect a pattern here?’ So we’re going to have to be much more clever in how we guard against those kinds of activities. The techniques are going to have to evolve to mitigate these threats.”
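At its simplest, “run that data through an AI” is a correlation test: propose a data-dependent pattern and score it against every point of the captured traces. The sketch below uses synthetic traces with a planted Hamming-weight leak; the leak location, strength, and model are invented, and real analyses work on measured power or RF captures with far more sophisticated models.

```python
# Minimal side-channel pattern-detection sketch on synthetic traces.
import numpy as np

rng = np.random.default_rng(0)
n_traces, n_samples = 1000, 200
leak_point = 137                    # hypothetical leaking sample index

inputs = rng.integers(0, 256, n_traces)
hypothesis = np.array([bin(x).count("1") for x in inputs])  # Hamming weight

traces = rng.normal(0.0, 1.0, (n_traces, n_samples))
traces[:, leak_point] += 0.2 * hypothesis    # small planted leak

# Pearson correlation of the hypothesis against every sample point.
h = (hypothesis - hypothesis.mean()) / hypothesis.std()
t = (traces - traces.mean(axis=0)) / traces.std(axis=0)
corr = (t * h[:, None]).mean(axis=0)

print("strongest correlation at sample", int(np.abs(corr).argmax()))  # ~137
```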
Conclusion
Fundamentally, there’s no substitute for good design practices. “What is the design supposed to do? Then, design it and verify it against those aspects,” Landoll said. “What is it not supposed to do? Think that through and be methodical about it. Invariably, what I’ve seen is that people start off with an inadequate spec, and the people doing the design aren’t really thinking about that spec. They’re thinking more about what they interpret the behavior ought to be. Maybe the spec is incomplete. Then the verification engineers are testing their interpretation of what this vague spec was saying, and everybody is so rushed that people are not doing a good job, and that’s where the vulnerabilities come in. The biggest vulnerability is just a bug. You accidentally left a bug in the design, and a lot of those bugs can be detected with the same verification techniques we’ve had in the industry for 20 years. You just didn’t use them. So you need a solid spec, design, and verification process in place before you even think about trying to add these additional security-type tools. And if you don’t have that underlying process in place, you’re going to have problems.”
But when it comes to security, nothing is ever perfect. “Even if you have that solid design verification methodology in place, you still need to be thinking about some of these other kinds of vulnerabilities and exposures,” he noted. “And if you’re worried about this, you really ought to be using some of these latest tools and techniques. But if you don’t have that foundation underneath you, how good can you build your house if the foundation is sinking into the ground?”
By all accounts, what’s coming is a slew of new, targeted security tools that address verification specifically, along with other areas of concern in design, most of which leverage AI techniques to accelerate the task. How this shapes up will evolve over the coming years as companies implement necessary security features, from the early planning stages of design all the way through implementation, verification, and beyond.