When Is Verification Complete?

The answer depends on an increasing number of very complicated factors.


Deciding when verification is done is becoming much more difficult, prompting verification teams to rely increasingly on metrics rather than just the tests listed in the verification plan.

This trend has been underway for the past couple of process nodes, but it takes time to spot trends and determine whether they are real or just aberrations. The Wilson Research Group conducts a functional verification study every two years to track how verification teams approach signoff, and the results are consistent.

What is particularly noteworthy is the growing reliance on code and functional coverage to determine when the target has been achieved. In other words, metrics-driven verification continues to grow in importance, according to Harry Foster, chief verification scientist at Mentor, a Siemens Business.

Fig. 1: Signoff criteria for ASIC/IC projects. Source: Mentor Graphics/Wilson Research Group.

The same shift is happening in FPGA verification, only much more slowly, as shown in Fig. 2.

Fig. 2: FPGA signoff criteria. Source: Mentor Graphics/Wilson Research Group.

“The interesting takeaway this year was that more and more people are moving to metrics-driven approaches and they have to close coverage in order to achieve sign off,” Foster said. “Engineers like to think that successful functional verification is purely a science, and the reality is that it’s not. It’s a combination of art, science and project management skills. That’s kind of the human factor aspect. Part of the problem we have in this industry is that a lot of projects struggle with well-defined processes. In fact, there was an industry study that showed projects that focused on tools and technologies first, and then built processes around it, actually end up increasing the cost in the range of 6% to 9%. And they didn’t achieve productivity. Projects that focused on process first, and then put in place appropriate tools, ended up decreasing costs by 20% to 30%. A metrics-driven methodology is one aspect of the test planning process, and it’s something you have to define right up front. If it’s not done, then you are essentially approaching verification in an ad hoc fashion.”

A lot of successful companies have well-defined processes. What makes them successful may be the human factor.

“There are extremely great projects out there that have a clearly defined process, and in that process they include metrics which are fundamental to measure the success of the process,” Foster said. “Metrics-driven verification is only one aspect. There are different metrics you’re going to need anytime you define a process. However, why did some teams evolve more than others who end up being ad hoc? This is something that’s been studied in the software world going back to the late ’80s when the Capability Maturity Model was created to measure the maturity of a software project.”

Those teams succeeded because they had the best people. The problem is that their methodology isn’t well-defined, and therefore not repeatable with other people. “The origin of thinking about processes goes back to the software world in the ’80s when we would send a rocket ship up, it would blow up, and we’d realize it was a software problem because the way the software was created was kind of ad hoc,” Foster explained.

From a high level of abstraction, verification is really about mitigating risk without infinite time and resources. That forces verification teams to weigh options such as what features of the design lend themselves better to simulation or formal verification or emulation or FPGA prototyping.

“This is not one-size-fits-all,” he noted. “Some things lend themselves better to certain approaches, and that comes back to the planning process. Then, in that planning process, I have to define metrics associated with each of those approaches to measure progress. The big win for any company is when they are thinking about what their planning process is going to be, have metrics in place, and have reviews on it. That is so fundamental. ‘What’ am I going to verify? The ‘how’ is easy. The ‘what’ is difficult.”

Too much data?
But what do engineers do with the results from various tools, and how can the data be analyzed to know where verification stands?

“We don’t try to exhaustively verify anything,” said Sonics CTO Drew Wingard. “Instead, we try to sample. We find the most bugs by generating random configurations of our hardware and then trying to automatically verify that. So we get to more corners more quickly by running what to other people would look like relatively shallow analysis on a relatively wide variety of configurations. That’s the approach we’ve taken over the past 20 years. In our work with formal technology, our biggest piece of work has been evaluating how to tune the inputs that go into the formal environment. We know a lot about the configuration that we are generating, how do we tune the inputs that go into the formal environment to take advantage of the knowledge that we have.”
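The sampling approach Wingard describes can be sketched in a few lines. The following is a hedged illustration only; the configuration parameters, the `shallow_check` function, and the nightly count are invented stand-ins for whatever a real flow would elaborate and simulate.

```python
import random

# Hypothetical sketch of sampling-based verification: rather than
# exhaustively checking one configuration, generate many random
# configurations and run a relatively shallow check on each.

def random_config(rng):
    """Draw one random interconnect configuration (parameter names invented)."""
    return {
        "initiators": rng.randint(1, 8),
        "targets": rng.randint(1, 16),
        "data_width": rng.choice([32, 64, 128]),
    }

def shallow_check(cfg):
    """Stand-in for a quick automated check of a single configuration.

    A real flow would elaborate the design for this configuration and
    run simulation or formal analysis on it.
    """
    return cfg["initiators"] <= cfg["targets"] * 4

def nightly_regression(n_configs=1000, seed=0):
    """Generate n_configs random configurations and collect the failures."""
    rng = random.Random(seed)
    return [cfg for cfg in (random_config(rng) for _ in range(n_configs))
            if not shallow_check(cfg)]

failures = nightly_regression()
print(f"{len(failures)} failing configurations out of 1000")
```

The design choice this illustrates is breadth over depth: each individual check is cheap, so the sample can cover far more of the configuration space per night than a deep analysis of one instance would.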

That approach doesn’t work for everyone, though. “A number of years ago it was thought that designers should write formal verification assertions, self-documenting, self-specifying, what is the functional behavior they are trying to do,” said Pete Hardee, product management director in the System & Verification Group at Cadence. “It never happened. And it’s still not really happening. To get designers using more formal, it has to be almost completely automated.”

This automation can happen at a variety of levels, from static verification to automatic formal checks, to codifying what designers are doing and building up property libraries.

Some engineering teams want cookie-cutter libraries, and in the case of one that has a GPU challenge, there’s a set of properties they want to generate. So their formal experts perfect the libraries and start getting the designers to use them, Hardee said. This is the same as codifying common protocols — as in assertion-based verification IP — which the design team can then use.

But when it comes to deciding whether the design is ready to sign off, there must be some criteria. Sonics’ approach is to generate roughly 1,000 random SoCs a night and regress them using its own methods, which are relatively shallow. The company then delivers that same technology used in those regressions to customers, so when they have one configuration, that’s what they want to sign off on. Then they change the knobs on the verification.

Still, in building highly configurable interconnect logic, for example, there are really two coverage problems that have to be solved.

“One is the traditional simulation coverage problem of, ‘For an instance of your logic, have you fully covered it?'” said David Parry, COO of Oski Technology. “The other is, ‘How well have you covered the configuration space?’ which you implement in a way that is beyond the capabilities of any standard tools, because you are not implementing your configurability purely in SystemVerilog using parameterization. You’ve got to do some higher-level software overlay that implements that configurability. And you’ve got to have a framework around evaluating [coverage of the configuration space] as you choose your random configurations or you fold back in customer configurations.”
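Parry's second coverage problem — coverage of the configuration space itself — can be made concrete with a small sketch. Everything here is invented for illustration (the parameters, the bins, and the helper names); the point is simply that configurations are mapped onto bins and the fraction of bins exercised is tracked, the same way a functional coverage model tracks stimulus.

```python
from itertools import product

# Hedged sketch of configuration-space coverage: enumerate bins for each
# configuration parameter, then record which bin combinations have been
# exercised by random or customer-supplied configurations.

PARAM_BINS = {
    "data_width": [32, 64, 128],
    "num_ports": ["1", "2-4", "5+"],
    "qos_enabled": [True, False],
}

def bin_of(cfg):
    """Map one concrete configuration onto a tuple of bins."""
    n = cfg["num_ports"]
    port_bin = "1" if n == 1 else ("2-4" if n <= 4 else "5+")
    return (cfg["data_width"], port_bin, cfg["qos_enabled"])

def config_coverage(configs):
    """Fraction of all bin combinations hit by the given configurations."""
    hit = {bin_of(c) for c in configs}
    total = len(list(product(*PARAM_BINS.values())))  # 3 * 3 * 2 = 18 bins
    return len(hit) / total

configs = [
    {"data_width": 64, "num_ports": 2, "qos_enabled": True},
    {"data_width": 32, "num_ports": 6, "qos_enabled": False},
]
print(f"configuration-space coverage: {config_coverage(configs):.0%}")
```

As Parry notes, this bookkeeping lives in a software overlay above the RTL, because standard coverage tools measure one elaborated instance, not the space of possible instances.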

What’s missing
Complicating matters is the lack of a specification, or a constantly evolving one, noted Sean Safarpour, CAE director for formal solutions at Synopsys. “Being in the formal world is so interesting because we talk about completeness and exhaustiveness and so on. But once there is a problem, you get your property table, you find out these are all proven. The customer just looks at that and says, ‘I want to get 100% proofs,’ and they push us so hard. You do all of the work, take a step back, and you ask where the spec is. We’re spending so much time getting those last five properties to prove, then you find out there’s no spec. The effort is going in the wrong direction because the metric and the data is there but we are all engineers, we focus on that.”

Alongside this, Hardee believes coverage metrics should not be measured until the design reaches a certain maturity, because the design iterates heavily early on.

This isn’t always possible, and not everyone agrees this is necessary. “You don’t have to wait until you get the plan because you will never get the plan because you will never get the specs,” said Ashish Darbari, director of product management at OneSpin Solutions. “The designers’ work is to write some code. ‘I am a directed test person. Ashish came along and told me to write assertions. I love my assertions. Here is an assertion, run coverage. I’m actually covering 30% of this design with this assertion. Okay, another one discovers now I’m at 50%. Oh no, I actually now have a bug because this check I added exposed a bug in my design and now my coverage has gone down.’ Coverage needs to begin the first hour of the design window. If you don’t do that it won’t help.”
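Darbari's scenario — coverage measured from the first assertion onward, rising as each check is added — can be sketched numerically. The design size and the per-assertion hit sets below are invented; the sketch just shows coverage as the union of what each assertion exercises.

```python
# Invented illustration of measuring coverage from the first hour of
# design: each assertion exercises some set of design statements, and
# coverage is the fraction of statements exercised so far.

DESIGN_STATEMENTS = 10  # hypothetical design size, in statements

def coverage(assertion_hits):
    """Fraction of design statements covered by the assertions so far."""
    covered = set().union(*assertion_hits) if assertion_hits else set()
    return len(covered) / DESIGN_STATEMENTS

hits = []
hits.append({0, 1, 2})            # first assertion exercises 3 statements
print(f"{coverage(hits):.0%}")    # 30%, as in Darbari's example
hits.append({2, 3, 4})            # second assertion overlaps on statement 2
print(f"{coverage(hits):.0%}")    # union grows to 5 statements: 50%
```

The point of measuring incrementally is visible here: each assertion's marginal contribution is known immediately, so a check that exposes a bug (and drags coverage down after a fix) is caught in the first hour, not at signoff.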

This has been the whole push toward “shift left,” where many pieces of the design are done in parallel rather than sequentially. In theory this works great, but in practice it isn’t always that clear-cut because some things happen iteratively, particularly when there are problems.

“In functional verification it is very difficult to say, ‘I’m done,’ because there really isn’t a good model to predict that we are done,” said Prakash Narain, CEO of Real Intent. “You can try to make the process more efficient based upon experience, and we can take experience away from it.”

None of that changes the ultimate goal of verification, which is risk mitigation. “Complete verification is a myth,” said Synopsys’ Safarpour. “We’ve had one of these apps where people have been signing off blocks with nothing but formal for the last few years. About 30% of faults don’t get detected, and they said, ‘Hmm, we’re going to sign off on it anyway.’ They have the experience that the block has been implemented for a few years. Even if you show them all the data, they know that to go and uncover that they would need another four months.”

Narain had a similar perspective: “At the end of the day, it’s about resources and schedule. You can only do such a good job in that amount of time with that many resources, and you will have corners, and you’ll take risks — and it’s never perfect.”

And for some applications, it’s about the best confidence that you can get in a given amount of time. But safety-critical applications, in markets such as automotive and medical electronics, are a totally different story.

“You’ve got to have the traceability, you’ve got to have a more complete idea of when you’re done,” said Hardee. “We define verification sign-off as no checkers are failing while reaching a defined level of coverage. And in order to show that statement, it’s never as good as fully proving that statement. It is always confidence in that statement. But it’s then just what level of confidence I need. That’s an application-based decision. And the critical applications, the requirements for those are a lot tougher. If you’re trying to meet ISO 26262, you need the traceability, you need the fault injection, you need to be able to do propagation analysis on those faults. You need to know absolutely if those faults ever propagate to functional outputs, will those faults always be detected by the checkers?”
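Hardee's signoff definition — no checkers failing while a defined level of coverage is reached — reduces to a simple predicate. The sketch below is a hedged illustration; the checker names, coverage value, and 95% target are invented, and a real flow would draw these from regression and coverage databases.

```python
# Minimal sketch of the signoff criterion described above: signoff holds
# only if every checker passed AND coverage meets the defined target.

def verification_signoff(checker_results, coverage, coverage_target=0.95):
    """Return True only when no checker fails and coverage meets the target."""
    no_failures = all(passed for passed in checker_results.values())
    coverage_met = coverage >= coverage_target
    return no_failures and coverage_met

checkers = {"protocol_checker": True, "data_integrity": True}
print(verification_signoff(checkers, coverage=0.97))  # both conditions met
print(verification_signoff(checkers, coverage=0.80))  # coverage falls short
```

As Hardee notes, the predicate itself is easy; the application-specific part is choosing the coverage target and deciding what level of confidence in it is acceptable.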

The good news is that the data suggests the industry as a whole is maturing, recognizing the importance of code coverage, functional coverage, and other metrics for measuring verification progress, Foster said. “The key is that we have to define that so metrics don’t happen ad hoc. You have to plan this. The fact that more companies are using these metrics-driven approaches for sign-off indicates they are giving it a lot of thought right up front for planning.”

On the tools front, things seem to be in good shape for IP functional verification, he noted. “The methodologies are pretty well-defined. We have the UVM standard. We have the ability through UVM to actually go out and acquire external IP and plug it in relatively easily. Prior to having a standard it was extremely difficult. So we’re in pretty good shape there.”

The next challenges are at the system level and the SoC integration level, where traditional IP-level metrics typically don’t work as well. “For example, we use code coverage and functional coverage at the IP level,” Foster said. “Those are less effective at the system level. Part of the reason is that coverage at that level is more statistical in nature, so there are opportunities to develop new solutions focused more on the system and the SoC integration level.”

