How Much Verification Is Necessary?

Sorting out issues about which tool to use when is a necessary first step—and something of an art.

Since the advent of IC design flows that start from RTL descriptions in languages like Verilog or VHDL, project teams have struggled with how much verification can and should be performed by the original RTL developers.

Constrained-random methods based on high-level languages such as e or SystemVerilog further cemented the role of the verification specialist. Assertion- and property-based languages then tried to make verification more accessible to designers, with the promise of self-documenting code that captured and verified design intent. But in the mainstream, these also became the domain of verification specialists.
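To make the constrained-random piece concrete, here is a minimal sketch of the kind of stimulus class a verification specialist writes in SystemVerilog. The transaction fields, constraint values, and names are hypothetical, chosen only to show how constraints describe a legal stimulus space that the randomization engine then explores, rather than individual hand-written test vectors.

```systemverilog
// Hypothetical constrained-random stimulus item (illustrative names only).
class bus_txn;
  rand bit [31:0] addr;
  rand bit [7:0]  len;
  rand bit        is_write;

  // Constraints capture what legal stimulus looks like, not specific vectors.
  constraint c_addr_aligned { addr[1:0] == 2'b00; }             // word-aligned
  constraint c_len_range    { len inside {[1:16]}; }            // bounded bursts
  constraint c_rw_mix       { is_write dist {0 := 7, 1 := 3}; } // ~70% reads
endclass

module tb;
  initial begin
    bus_txn t = new();
    repeat (5) begin
      if (!t.randomize()) $fatal(1, "randomization failed");
      $display("addr=%h len=%0d write=%0b", t.addr, t.len, t.is_write);
    end
  end
endmodule
```

Writing and tuning these constraints, along with the coverage models that go with them, is exactly the skill set that pulled this work toward dedicated verification engineers.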

Complicating matters, designs are getting more complex and expensive. That has fostered increased use of commercial IP, as well as internal IP designed to be reused across multiple chips. All of this has to work together, and as new markets open up for electronics in automotive, industrial and medical applications, it also has to work flawlessly for longer periods of time.

This has led to an ongoing discussion about just how much verification needs to be done, when to do it, and how to do it more effectively. Part of this is methodology- or process-driven, which essentially shifts more of the verification and debug left at just the right time. Part of it is technology-driven, as well. For complex designs, there is almost universal recognition that four different engines are now required—formal, simulation, emulation and FPGA prototyping—as well as various methods of static verification.

“A lot of verification is actually process oriented,” said Ashish Darbari, director of product management at OneSpin Solutions. “It’s about putting the right technology and the right resources in place at the right stage of the project, and obviously getting the right people involved. The challenges of technology in terms of design are huge. If you thought initially it was Intel’s high performance computing, then low-power came along, and power became the over-arching requirement for design. Now you have safety and security, while power and performance are still there. So the requirements are only increasing.”

And this is where things get really complicated. “The verification problem is so huge and multi-faceted it’s about using the right engine, at the right time, for the right task,” said Pete Hardee, product management director in the System & Verification Group at Cadence. “It’s also about dividing and conquering this multi-faceted problem such that you can sign off on IP-level verification, then integrating that to the next level up—subsystem, chip-level, system — and having the right engine to be able to handle the increased complexity there. Then you need to step up to be able to verify a software stack is interacting correctly with the hardware. So it’s about using the right engine, having the range of those engines from formal through to simulation through to emulation through to some kind of intelligent prototyping system that allows us to more thoroughly verify the software.”

Along the way, a lot of verification is repeated, which is inefficient. Hardee believes machine learning could help manage regressions and make them more efficient.

“This is one area where formal verification technology is improving rapidly,” he said. “We explore properties using a certain range of engines the first time around, and then we can use machine learning techniques to do a much better job of that the next time around. So as you move into regression, the verification should get a lot more optimized.”

There’s no reason why this type of technology cannot be applied to other verification engines as well. But choosing the right tool for the right problem is critical, and it’s something that only a few companies have mastered, said Sean Safarpour, CAE director for formal solutions at Synopsys.

“There are still a lot of folks out there in the main part of the bell curve that are still stuck with that,” Safarpour said. “Simulation people just try to push everything through simulation. They learn the hard way, when time is running out. They look at their coverage data and realize, for example, that they are only at 30%. You look at something like formal, and in terms of the step jump, machine learning [can help]. We’re using it on formal heavily, but we are also doing it in other places — test grading, root cause analysis, finding the closest to the original root of the problem. There’s a lot of data that we can parse through. That’s going to make you more efficient, and make the tools a little bit more efficient, but it’s not that step function that we’re looking for. It’s just going to make us use those tools better, and individual engineers will do better. But they’ll still have lots of work to do.”


Fig. 1: SoC ecosystem. Verification consistently has been the most time-consuming problem in this circle. Source: ARM

Identifying the problem
All of this assumes that engineers understand what problems need to be addressed. That’s easier said than done.

“If you asked me what are the indispensable pieces of verification methodology today, I would say simulation, emulation and formal,” said Prakash Narain, CEO of Real Intent. “If you ask people to pick and choose, they will first pick the first two, and then pick the rest of the items. But because there is so much inefficiency there, those are also places where machine learning and techniques like that, which are relatively imprecise techniques, can improve efficiency. I don’t believe they will ever be able to improve the efficiency of formal verification because that needs to have a precise, complete solution. Wherever you can have failures, you can’t let failures through at all, and machine learning is not going to do it. Machine learning works most effectively in situations where there is a large volume of data, and I don’t know how to make sense of it. If I can improve the quality of results by 10%, that’s a big deal. So when there is a lot of inefficiency, that’s when machine learning works very well. You’ve heard about regression management, and things like that — yes — but not where precision is required.”

Regardless, formal is seeing a surge in adoption because it is the best way to zero in on specific problems. Static signoff, which has been used in static timing analysis, is growing, as well, Narain said. “Before that we did dynamic timing analysis, and static timing analysis dramatically improved the efficiency. It is a predictable methodology to accomplish the signoff of a certain failure mode. An example of this is clock domain crossing, which is really not covered in simulation or by other techniques. You really need an independent method to do it. In these 500 million-gate designs people are doing, how do you sign off? That’s a totally different class of problems. Static signoff is a new model that has a lot of promise, and all of these are about improving efficiency.”
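To illustrate the clock domain crossing example, the hypothetical fragment below shows the structure a static CDC signoff tool expects to find: a single-bit signal launched from one clock domain passes through a two-flop synchronizer before being used in the destination domain. The same crossing without the synchronizer is what such a tool flags, and simulation rarely catches it because metastability is not modeled there. Module and signal names are illustrative only.

```systemverilog
// Hypothetical single-bit clock domain crossing with a two-flop synchronizer.
module cdc_sync_2ff (
  input  logic clk_dst,   // destination-domain clock
  input  logic rst_dst_n, // destination-domain reset, active low
  input  logic sig_src,   // level signal launched from the source domain
  output logic sig_dst    // synchronized version, safe to use in clk_dst
);

  logic meta;  // first stage may go metastable; never use it directly

  always_ff @(posedge clk_dst or negedge rst_dst_n) begin
    if (!rst_dst_n) begin
      meta    <= 1'b0;
      sig_dst <= 1'b0;
    end else begin
      meta    <= sig_src;  // capture the asynchronous input
      sig_dst <= meta;     // second stage filters out metastability
    end
  end

endmodule
```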

Part of the problem is also making sure enough resources are available to do the verification at the time they are required. Even for large companies this is a growing problem, because rising complexity requires a jump in compute cycles at just the right time. For smaller companies, this may be the trigger for scalable, cloud-based verification. In the past, chipmakers have been reluctant to use anything except internal clouds for verification, but that is expected to change as new markets for chips develop.

“Smaller and midsize companies need emulation,” said Krzysztof Szczur, hardware verification products manager at Aldec. “This is more about flexibility and cost. You don’t have to maintain it, and if you need five emulators, you can quickly scale up.”

This also allows companies to move emulation further left into the design phase, Szczur said. “It allows you to see different partitions of a design, so you can see ‘what if’ and show changes in the interconnect and other resources. You also can assign timing constraints and serially manage connections.”

Design side concerns
The design side is constantly worried about how much work it creates for the verification teams, observed Drew Wingard, CTO of Sonics, and it is equally fascinated by other domains that don’t have verification engineers. “It’s been impossible not to watch how software has changed over the past 15 years, with all the emphasis on doing things in a more agile fashion. From the design side we’re very interested in the techniques we can borrow from there.”

One of the techniques he sees as mapping well onto hardware is test-driven design. “We’re trying to move toward models where we can bring in pieces that the verification side has developed that can help us design better the first time, so we always have that test harness,” Wingard said. “Any designer who would put something into our database not knowing if it basically works is just setting up for failure. Designers are always doing some amount of local verification to convince themselves they haven’t done anything wrong. Our intent is to bring up the level of abstraction where they can do more of that. They can have pieces that basically work so that when we hand them off to the experts, they are independently verifying some higher-level spec. The main benefit of having the independent verification person is the fact that they are independent. It’s another set of eyes, which the software community tends to deal with using the design review process. They tend to do either co-design or a design review process. Here we have an independent engineer who tries to implement the same function another way, and the verification environment is a way of checking that.”

Wingard said the most interesting aspect of formal verification is that it allows design teams to take techniques they already want to use — like assertions — and find a way of re-using those differently. “Something that’s incredibly valuable to us from a simulation perspective — this block isn’t going to work if this input is in this state — can be driven into a formal engine to make an assertion. The work that’s been done in the formal community to take advantage of techniques that the hardware guys already want to use is really important to the design side. Maybe it’s simple, but it’s an incredibly valuable transition. We need to continue to look for those opportunities because I want my designers to be building up these harnesses so that they’re not just communicating with the verification team using some abstract spec. Instead, they’re actually handing them an object that’s a starting point for what they’re continuing to work on. It’s not just the guts of the hardware. It’s the environment around the hardware that can then be used. That’s also the start for all of the regression work that happens.”
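A minimal sketch of that kind of handoff, using an assumed FIFO module and assumed signal names: the designer writes a small checker module alongside the block and binds it in, so the same assertions fire during local simulation, ship with the RTL, and can be handed to a formal engine as proof targets.

```systemverilog
// Hypothetical designer-written checker, reusable in simulation and formal.
module fifo_checker (
  input logic clk,
  input logic rst_n,
  input logic push,
  input logic pop,
  input logic full,
  input logic empty
);

  // Design intent: never push a full FIFO, never pop an empty one.
  assert property (@(posedge clk) disable iff (!rst_n) push |-> !full)
    else $error("push while full");
  assert property (@(posedge clk) disable iff (!rst_n) pop |-> !empty)
    else $error("pop while empty");

endmodule

// The bind keeps the checker out of the synthesized RTL, but it travels with
// the block, so the verification team inherits a working harness, not just a spec.
bind fifo fifo_checker u_fifo_chk (.*);
```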

The bigger picture
What’s important is not just how to make verification more efficient and more effective — it’s how to make the entire development cycle more efficient and more effective, asserted David Parry, COO of Oski Technology.

“That’s really about three things,” Parry said. “One is using the right tool for the right part of the process and the right part of the design—by the right part of the team. The second is rationalizing, reconciling and coordinating all of those different design and verification tools that are being used. The third is this ‘shift left’ that everybody talks about, finding your bugs earlier in the process and then re-using what you develop earlier in the process further down into the verification process.”

Parry doesn’t believe there will be a sudden step function in any verification tool — whether it’s formal tools, simulators, performance or software simulation, emulators, or rapid prototyping. “All of those things are fairly mature technologies at this point, and all of them are making continuous, incremental improvement, but none of it is going to operate any faster than the technology is advancing. They are just keeping up. Where we will see improvements is where they’re needed. That’s in tools, whether it is machine learning or methodologies to better select what verification tool to use for what part of the design, and in the ability to get those tools applied earlier in the design cycle.”

As far as how much verification a designer should be doing, Parry believes that in an ideal world, all of the block-level verification—basic functional verification of what traditionally would have been directed test—should be done by the designer and it should be formal.

“What we have seen with many of our customers is when they have designers that are motivated and interested and capable of using formal for design bring-up, it creates a wonderful synergistic environment between the design team and the verification team,” he said. “The designer is able to basically find all of the obvious easy bugs and hand something off to the verification team, which can use that at a higher subsystem level as well as at a block level.”
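In a designer-driven formal bring-up of the kind Parry describes, the directed tests become a few assumptions that constrain the inputs plus assertions that state the block’s basic intent, and the formal engine hunts for counterexamples. The counter block and signal names in this sketch are assumed for illustration.

```systemverilog
// Hypothetical block-level formal bring-up for a simple counter.
// Assumptions constrain the environment; assertions state the design intent.
module counter_formal_tb (
  input logic       clk,
  input logic       rst_n,
  input logic       en,
  input logic [7:0] count   // driven by the counter under test (e.g. via bind)
);

  // Environment constraint: enable is held low while in reset.
  assume property (@(posedge clk) !rst_n |-> !en);

  // Intent: when enabled, the counter increments by exactly one.
  assert property (@(posedge clk) disable iff (!rst_n)
                   en |=> count == $past(count) + 8'd1);

  // Intent: when not enabled, the counter holds its value.
  assert property (@(posedge clk) disable iff (!rst_n)
                   !en |=> count == $past(count));

endmodule
```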

However, OneSpin’s Darbari argued that none of this—static signoff, formal, simulation, emulation, or FPGA prototyping—will solve the problems unless closure is a priority and unless there is a link to the verification plan.

“The problem that is actually confounding design and verification is that, regardless of the technology you use, if you have no ability to assess when you are done, then you can’t actually make that judgment other than saying, ‘This is the date, and we will ship it, and we will check it out,’” Darbari said. “My contention, having been in the user space for verification and being a user of many different formal verification tools, is that never mind what formal tool you use, what simulator you use — you should be able to integrate the results of all of these technologies into a universal view. Until we are able to do that, unless you can actually tie down the results of verification, starting from day one of designer bring-up all the way to closing verification on emulation, you cannot actually say what you did and what you didn’t do. You collect all of this data, so many different models. Who should decide what makes sense of all of these different metrics?”
