Formal Verification’s Usefulness Widens

Demand for IC reliability pushes formal into new applications, where complex interactions and security risks are difficult to address with other tools.


Formal verification is being deployed more often and in more places in chip designs as the number of possible interactions grows, and as those chips are used in more critical applications.

In the past, much of formal verification was focused on whether a chip would function properly. But as designs become more complex and heterogeneous, and as use cases change, formal verification is being utilized in everything from assessing the impact of partitioning to tracing the source of silent data corruption. It is even being used to identify possible vectors for cyberattacks, and to trace how power is delivered and used within a device.

“Low-power design has been around for a very long time, but it was an approach mostly embraced for mobile chips,” according to Sean Safarpour, R&D group director in Synopsys’ EDA Group. “Now it’s across the board. Similarly, we’re seeing more AI/ML used in accelerators, and the amount of work you can do per power is a major driver. So now for everybody, whatever they’re doing, low-power design is such a big concern. What this has to do with formal verification is that there are specific ways formal verification can be used to solve some of those problems.”

This is no longer just about clock gating optimization, which involves re-running simulation regressions to make sure everything is correct. Today, power needs to be considered in the context of other components in a system, and sometimes between systems. Without formal, that would take significantly more time, if it could be done at all. Power also needs to be examined in the context of different use cases and workloads, where it must be traced from delivery to utilization.

“If you’re doing clock gating, we can do the analysis with and without the clock gating and tell you definitively whether the functionality is the same, or whether you may have a bug and something broke along the way,” Safarpour said. “Based on user requests, it is now also possible to take power intent into account through a UPF file. Five or six years ago, we didn’t have any such requests. Now they are so prominent, people are saying, ‘I do property verification. I have my assertions. But now I want to check those assertions in the presence of my UPF. I do SoC level connectivity checking,’ which is another specific application. But that is incomplete unless you take the full UPF of the design into account. Formal is able to go into these very specific areas so users can say, ‘Wait a second, we don’t need to redo our entire regression suite. We can take these specific problems and solve them exhaustively.’ That’s the real value of formal. It’s exhaustive in nature, so when you’re looking at your entire test plan, and you’re looking at some of these aspects, whether it’s low power, or security, we can go into all these. You will have specific line items for these in your test plan. Some of those you can go and solve exhaustively with formal verification, which is one of the reasons why it is growing in prominence these days as the problem space gets larger.”
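
A stripped-down way to picture the clock-gating check Safarpour describes is two behavioral models of the same register, one with a recirculating-mux enable and one with a clock gate, compared exhaustively over every input sequence. The Python sketch below is only a conceptual stand-in with hypothetical names and a toy data width; commercial tools run this kind of sequential equivalence check on the actual RTL, with the UPF taken into account.

```python
from itertools import product

WIDTH = 2  # toy data width so exhaustive enumeration stays small

def step_ungated(q, d, en):
    # Always-clocked register with a recirculating mux: capture d when
    # enabled, otherwise feed the old value back.
    return d if en else q

def step_gated(q, d, en):
    # Clock-gated register: the clock-gating cell passes a clock edge only
    # when the enable is high, so the flop holds its value when gated off.
    if en:
        return d
    return q

def equivalent(depth=4):
    """Compare both models over every initial state and every input
    sequence up to `depth` cycles -- exhaustive for this toy."""
    values = range(2 ** WIDTH)
    for q0 in values:
        for seq in product(product(values, (0, 1)), repeat=depth):
            qa = qb = q0
            for d, en in seq:
                qa = step_ungated(qa, d, en)
                qb = step_gated(qb, d, en)
                if qa != qb:
                    return False, (q0, seq)
    return True, None

if __name__ == "__main__":
    ok, cex = equivalent()
    print("models match" if ok else f"counterexample: {cex}")
```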

Put simply, coverage gaps that were acceptable in the past when there were fewer features and lifecycles were shorter are unacceptable in many applications today.

“Everybody’s always talking about ever-increasing complexity, and a lot of people will rightly use that as a reason to do things differently,” said Chris Giles, head of product management for static and formal solutions at Siemens EDA. “When we look at market success, and the general development process, what we see is that despite a multiple-decade-long increase in investment in verification, it’s not keeping up with the increase in complexity. So while we all recognize there’s a constant increase in complexity, how we’ve responded to it as an industry is not as effective as it needs to be. The shift seen in the last few years is in what the world is demanding of silicon, and this is where formal verification is becoming an absolute requirement. What the world is demanding of silicon today is a different verification approach. What we’re demanding isn’t about complexity anymore. It’s about safety. It’s about security. It’s about trust. It’s about assurance. And all of these things are very hard to prove with a linear simulation methodology.”

This, in turn, has created an explosion in demand for formal verification. “This is where formal excels,” Giles said. “Because of its exhaustive nature, formal verification can prove conclusively that a design is secure, safe, or trustworthy. There’s a lot of work that’s being done in the industry to certify these properties, so having a flow that generates that certification is critical. This is where formal is absolutely essential. If you look at the IP industry, IP companies don’t necessarily know what end product their IP is going to be in, so they have to be planning for those types of questions. How do I know that this is a secure design? How do I know that this is a trustworthy design? How do I know that it’s safe to be putting human lives under its control? These are things that even IP teams have to worry about today.”

Pete Hardee, product management director in the System & Verification Group at Cadence, said he has seen phenomenal growth in formal verification usage over the past decade. “No doubt it’s ‘crossed the chasm’ into widespread mainstream use. It’s an indispensable tool in the verification armory for the vast majority of top semiconductor companies.”

Increasing complexity, both for advanced-node SoCs as well as heterogeneous assembly in a package, requires more verification. “The verification need tends to expand exponentially with design complexity,” Hardee said. “The key effects have been twofold. First, with IP-based hierarchical verification, the divide-and-conquer approach has been a major factor in verification success – thoroughly verify the IP block or subsystem, then check for correct integration and that nothing got broken at the next level up. For IP-based verification, formal verification has scaled to achieve formal signoff for many, but not all, kinds of IP. State space can still be an issue, despite major strides in formal scalability. For example, complex serial protocols remain challenging for formal verification – the sequential depth is often too high. Some formal techniques can scale to the chip-level, but only in some limited ways – full formal signoff for large digital SoCs remains too challenging.”

Processor-based designs have always been challenging, but they are particularly difficult to verify when they utilize domain-specific architectures. “There is a huge increase in massively-programmable designs – multi-processor architectures using many homogeneous or heterogeneous arrays of processor cores depending on the vertical application,” he noted. “For application processors, the Arm versus x86 ISA battle has now been blown open by RISC-V. GPUs have been around for a long time, but many other types of math co-processors and accelerators are commonplace, especially now driven by the AI/ML revolution. Everyone’s designing their own variant of application and math cores optimized for their power, performance, and area needs.” Cadence refers to this phenomenon as domain-specific architectures (DSA).

Concerns about reliability are spreading to even non-critical devices, as well. “Every design can have a formal verification solution, including arithmetic-dominated designs,” said Ashish Darbari, CEO of Axiomise. “Arithmetic-dominated designs are benefiting from advanced solutions from the largest formal verification tool providers, all of which have solutions that are catering toward arithmetic verification, especially on the floating point side. In the big scheme of things, this is not necessarily a major volume of work, but it’s significant and important enough for us to not look the other way.”

That has opened the door for formal to be used much earlier in the flow, where it can be used to head off design-related delays, and across a wider swath of the engineering community. But it also has created a talent shortage, because not everyone is well versed in writing assertions across a variety of different applications.

“Everyone with a reasonable name in the silicon business, whether it’s a GPU or a processor company, or a fabric company, is using formal,” said Darbari. “All of the giants who are building hardware are using formal. And some of them want so much formal to be done, but they just haven’t got enough talent.”

Adoption rates vary, as well. “Some are very familiar and comfortable with formal verification, understand what it can do, and challenge it,” noted Siemens’ Giles. “Others are a little more hesitant, because it can feel sometimes like a science project, or an educational effort is required. When someone who is not familiar with formal first hears formal verification is recommended, their first thought is, ‘Now I’ve got to have a PhD on my staff. We’ve got to get our project going. How am I ever going to do this?’ And this leads to varying degrees of adoption.”

In some companies, formal has become just one of many required skills. This is a significant shift. In the early days of formal verification, it was assumed a small group of engineers would gain proficiency in using the tools.

“We realized that designers, verification engineers, engineers of all walks of life can make use of the technology,” said Synopsys’ Safarpour. “And it’s been up to the EDA tool developers to make it more intuitive for that user. We have different personas, different hats. Now, if you’re a designer, you may need a slightly different interface. You’re coming at it from a different angle than your formal black belt, and the bells and whistles that you’re looking for are quite different. As tool developers, we’ve also learned from our users who can get the most out of the tools. From that aspect, also, the bar has been lowered. Today, when I get queries, or talk to customers, there are as many non-experts making enhancement requests or giving us feedback about the tool as there are traditional formal experts.”

Emulation vs. formal vs. simulation
Other stalwarts of a verification flow include simulation and emulation. But for certain designs, not every verification tool may be necessary.

“There are some things that are well-suited for formal verification that aren’t as well suited for simulation, and vice versa,” Siemens’ Giles said. “A good example of this is connectivity checking. Going through the process of simulating that all of the interconnections, bit by bit, are correct is a fairly long serial process. If you can do it exhaustively in one run, then you can take those simulation cycles and use them for what simulation does best. So it’s a productivity play. Another thing that’s clear as the industry moves from a globalization model to a regionalization or nationalization model — and we’re having to replicate the ecosystem of silicon design, development, and production, multiple times over across the world — is that there are labor shortages predicted pretty much everywhere throughout the world. And this doesn’t have anything to do with design complexity. It has to do with the geopolitical and macro-economic realities of the world. This is creating opportunities everywhere. And the reality is, in order to succeed in that environment, we need more productivity out of our people. Using formal for efficient verification, freeing up simulation cycles to do what simulation does best, is a critical part of development now and in the future. So those two things — the productivity piece, as well as answering the demands the world is putting on silicon today in terms of safety and assurance, trust and security — these are reasons why I consider formal verification to be absolutely required for any development.”

In some cases, more than one type of tool is required. Darbari points to an overlap based on coverage requirements, as well as the capabilities of different tools from different vendors. “There is still variation,” he said. “Debug is the most interesting one. Debug is where I can get to the root cause of a failure in the shortest amount of time, and it’s also an area where we spend a lot of time getting to the root cause.”

There is also an interplay between formal verification and simulation, with formal used to reduce the simulation cycles that are most costly to the engineering team.

“You can never do enough simulation, so that demand is boundless,” Safarpour said. “Everybody wants to do simulation, but they don’t have enough compute for it. However, simulation and formal verification technologies are quite complementary in nature, and we see that in multiple areas. In coverage, with simulation, you can very easily get to 80%, 85%, maybe 90% coverage, and then what happens is that you get to a plateau in the curve quickly. Then there’s that other 5%. Depending on what the criteria are for sign-off, there’s another 5% or so, and there’s only so much randomization you can do, so we still can’t hit those. That 5% happens to be the sweet spot for formal, because chances are that 5% is either things that are hard to hit, like corner cases, which formal is good at, or they’re dead code, meaning that no matter how much simulation you do you can’t hit it. This leaves you with a question mark, and then some engineer would have to step in to review that code and say, ‘I think that’s not reachable.’ So, it’s a manual intervention. The way we do it is our simulation and formal tools work off the same database and help each other out.”
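
The unreachable-code case Safarpour mentions comes down to a reachability proof, which is easy to see on a toy example. The sketch below (a hypothetical four-state FSM, not any vendor's engine) exhaustively explores the reachable state space and reports the state that no amount of simulation stimulus could ever cover, which is the kind of result that lets a team waive a coverage hole without a manual code review.

```python
from collections import deque

# Toy FSM: 'LOCKED' is only reachable from itself, so any coverage bin
# tied to it will stay at 0% no matter how much simulation is run.
STATES = ["IDLE", "RUN", "DONE", "LOCKED"]

def next_state(state, inp):
    if state == "IDLE":
        return "RUN" if inp else "IDLE"
    if state == "RUN":
        return "DONE" if inp else "RUN"
    if state == "DONE":
        return "IDLE"
    return "LOCKED"  # dead code from the reset state

def reachable(initial="IDLE"):
    """Breadth-first exploration of the full state space -- the same
    fixed-point idea a formal tool uses to prove a state unreachable."""
    seen = {initial}
    work = deque([initial])
    while work:
        state = work.popleft()
        for inp in (0, 1):
            nxt = next_state(state, inp)
            if nxt not in seen:
                seen.add(nxt)
                work.append(nxt)
    return seen

if __name__ == "__main__":
    live = reachable()
    for s in STATES:
        print(f"{s:6s}: {'reachable' if s in live else 'unreachable (dead code)'}")
```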

The same scenario plays out for functional safety applications. “You can cover as much as you can for functional safety and fault tolerance, then formal steps in. It’s the same story with security, and this approach keeps replaying in different domains. Get the easy stuff done with simulation, get to that plateau, and then have formal do the last mile,” Safarpour explained.

Domain-specific architectures add their own unique issues. “In contrast to non-programmable ASICs, DSAs are different because you can’t predict the myriad ways a programmer will use the processor,” said Cadence’s Hardee. “Every eventuality must be covered, including the completely unforeseen. Formal is the only way. The processor technology leaders like Arm and Intel know this and have long invested in formal verification. Every company doing their own RISC-V implementation has to do the same. And the growth in math co-processors and AI/ML engines is driving newer formal techniques like C/C++ to RTL sequential equivalence checking.”
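
The C/C++-to-RTL flavor of sequential equivalence checking Hardee mentions can be pictured at toy scale: an untimed reference function is compared against a cycle-accurate, pipelined model of the same arithmetic, with the pipeline latency accounted for. The sketch below uses a hypothetical 4-bit multiply-accumulate and brute-force enumeration; real flows prove this symbolically against the actual C++ model and RTL rather than by running every input combination.

```python
from itertools import product

MASK = 0xF  # hypothetical 4-bit datapath, small enough to enumerate fully

def ref_mac(a, b, c):
    """Untimed, C-style reference model: y = a*b + c, truncated to 4 bits."""
    return (a * b + c) & MASK

class PipelinedMac:
    """Cycle-accurate stand-in for the RTL: the product is registered in
    stage 1 and the sum in stage 2, so each result is returned by the
    call after the one that presented its inputs."""
    def __init__(self):
        self.prod = 0
        self.c_d = 0
        self.out = 0

    def step(self, a, b, c):
        self.out = (self.prod + self.c_d) & MASK   # stage 2: add
        self.prod = (a * b) & MASK                 # stage 1: multiply
        self.c_d = c & MASK                        # stage 1: delay c
        return self.out

def check():
    """Drive every (a, b, c) combination through the pipeline and compare
    each output against the reference result for the previous inputs --
    a brute-force stand-in for sequential equivalence checking."""
    dut = PipelinedMac()
    expected = []                    # queue matching the one-call latency
    for a, b, c in product(range(16), repeat=3):
        got = dut.step(a, b, c)
        if expected and got != expected.pop(0):
            return False, (a, b, c, got)
        expected.append(ref_mac(a, b, c))
    # flush the pipeline so the final result is checked as well
    return dut.step(0, 0, 0) == expected.pop(0), None

if __name__ == "__main__":
    ok, info = check()
    print("pipeline matches reference" if ok else f"mismatch: {info}")
```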

Formal’s limits
Like all EDA tools, however, formal does have some limitations. “Formal has the reputation of being something that’s better run on a module or block than a full SoC,” Siemens’ Giles noted. “Formal has challenges when it comes to long, sequential problems. A good example of this would be that it’s challenging to use formal to verify multiply or divide functions in a floating point unit. The best course of action is to let formal do what formal does best, and let simulation do what simulation does best. If you take a floating point design, you can verify everything, but the multiply and divide functionality is best done with simulation so you’re much more efficient in your use of the simulation functions, and you’re verifying the things that have to be done there. Then you don’t have to do a full floating point unit through simulation.”

As with all tools, it’s also an area where a lot of work is underway to overcome those limitations, both through abstractions and new research. “For now, it feeds into the message that there are some things that formal does well and some that simulation does well,” Giles said. “I would never see a scenario where formal completely eliminates simulation. Just like for years, I always wanted to stop doing gate-level simulation, but that never panned out. We’ll see that with formal and simulation, as well.”

Formal verification methods can be exhaustive and thorough, but should they be a requirement across the board?

Even before its 2014 acquisition by Cadence, Jasper pioneered broader adoption of formal through the introduction of formal apps – using model checking, often with automatically generated assertions, to solve common verification problems in a more accessible way that requires a much lower level of formal expertise, Hardee said.

Further, “Connectivity is a great example and one of the easiest to adopt – generate assertions from a connectivity map in a spreadsheet or IP-XACT form. The same can be done with simulation but it’s tedious. This formal method scales easily to chip-level since most of the underlying block functionality can be abstracted. In some cases, these apps can offload simulation workloads, and the value is in finding corner-case bugs since the random tool-generated input stimulus is more exhaustive than simulation testbenches usually are. While these apps are useful to get verification engineers started with formal, maybe acting as a ‘gateway,’ they’re the tip of the iceberg in terms of usage. Currently, we’re finding the bulk of the iceberg, including the majority of regression usage of formal verification, is in verifying the processor-based DSA designs with formal property verification, and sequential equivalence checking, both RTL-to-RTL and C/C++ to RTL,” he added. “As much as I’d like to set formal verification as a requirement across the board, there are still examples where it is necessary, and other examples where simulation still does a better job. But we’re seeing the design types where formal verification is necessary growing at a much faster rate.”
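
Hardee's connectivity example is easy to picture: each row of a connectivity spreadsheet becomes a property stating that a destination pin always follows its source, possibly after a fixed delay. The Python sketch below, with hypothetical column names and signal paths, shows the flavor of that assertion generation; commercial connectivity apps consume richer inputs such as IP-XACT and handle enables, resets, and conditional connections.

```python
import csv
import io

# Hypothetical connectivity map, as it might be exported from a spreadsheet:
# source pin, destination pin, and the number of pipeline stages in between.
CONNECTIVITY_CSV = """source,destination,delay
soc.uart0.tx,soc.pad_ring.uart_tx_pad,0
soc.clk_gen.pll_lock,soc.sysctrl.pll_lock_sync,2
soc.dma.irq,soc.intc.irq_in[3],0
"""

def make_assertion(src, dst, delay, label):
    # Destination must equal the source, delayed by the given number of cycles.
    expr = f"{dst} == {src}" if delay == 0 else f"{dst} == $past({src}, {delay})"
    return f"{label}: assert property (@(posedge clk) {expr});"

def generate_assertions(csv_text):
    """Turn each row of the connectivity table into an SVA-style check."""
    for i, row in enumerate(csv.DictReader(io.StringIO(csv_text))):
        yield make_assertion(row["source"], row["destination"],
                             int(row["delay"]), f"conn_chk_{i}")

if __name__ == "__main__":
    for assertion in generate_assertions(CONNECTIVITY_CSV):
        print(assertion)
```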

As for how all of this evolves, the EDA ecosystem appears to be on the cusp of identifying how formal verification can be used beyond traditional applications.

“There is a set of applications that are fairly common for any vendor selling bundled formal applications for specific end uses,” Giles concluded. “The challenges and opportunities, from both technical and business perspectives, include using formal to perform verification that hasn’t been possible before, such as Trojan horse detection in hardware. Especially in this world of open-source hardware, it’s very nice to be able to prove that nothing is in that design that isn’t supposed to be in that design. That’s not typically a verification problem that’s solved by simulation, because simulation generally presumes you know what you’re looking for, and you’re just trying to make sure it does that. With Trojan horse detection, you’re now looking for things that are effectively out of the known state space. This is something that has a tremendous amount of promise.”
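
A toy version of what Giles describes makes the idea concrete: exhaustively explore everything the implementation can do and flag any transition the specification never allows, rather than checking only the behavior you already expect. The sketch below plants a hypothetical backdoor in a small passcode-lock FSM and finds it by exhaustive exploration; production flows do this with formal property and information-flow analysis on the RTL itself.

```python
# Specification of a toy passcode lock: only the key sequence 1, 0, 1 opens it.
def spec_next(state, key):
    table = {("S0", 1): "S1", ("S1", 0): "S2", ("S2", 1): "OPEN"}
    return table.get((state, key), "S0")

# Implementation: identical, except for a planted (hypothetical) backdoor --
# key value 3 jumps straight to OPEN from any state.
def impl_next(state, key):
    if key == 3:
        return "OPEN"              # the Trojan: nowhere in the specification
    return spec_next(state, key)

def find_unspecified_behavior(keys=range(4), initial="S0"):
    """Explore every reachable (state, input) pair of the implementation and
    flag any transition the specification does not allow."""
    seen, work, violations = {initial}, [initial], []
    while work:
        state = work.pop()
        for key in keys:
            nxt = impl_next(state, key)
            if nxt != spec_next(state, key):
                violations.append((state, key, nxt))
            if nxt not in seen:
                seen.add(nxt)
                work.append(nxt)
    return violations

if __name__ == "__main__":
    for state, key, dest in find_unspecified_behavior():
        print(f"unspecified transition: {state} --key={key}--> {dest}")
```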

Related Reading
RISC-V Micro-Architectural Verification
Verifying a processor is much more than making sure the instructions work, but the industry is building from a limited knowledge base and few dedicated tools.
New Concepts Required For Security Verification
Why it’s so difficult to ensure that hardware works correctly and is capable of detecting vulnerabilities that may show up in the field.


