Is Verification Falling Behind?

It’s becoming harder for tools and methodologies to keep up with increasing design complexity. How to prevent your design from being compromised.

Every year of continued design scaling means that the verification task gets larger and more complex. At one extreme, verification complexity increases as the square of design complexity, but that assumes that every state in the design is usable and unique. At the same time, verification has never had the luxury of the reuse that designs enjoy between projects and across generations of a product. In addition, the verification team now has to expand beyond functional verification to include low-power verification, performance, safety, and aspects of security.

However you view the problem, verification complexity is approximately doubling every year. And while verification teams are increasing in size to combat the issue (see Fig. 1), they also have to rely on increasing productivity to stay at the same level of confidence for a chip going to tapeout. According to Harry Foster, chief verification scientist at Mentor, a Siemens Business, “design engineers spend a significant amount of their time in verification too. In 2016, design engineers spent, on average, 53% of their time involved in design activities, and 47% of their time in verification.”


Fig. 1: Mean number of peak engineers per ASIC/IC project. Source: Wilson Research Group/Mentor, a Siemens Business.

Several strategies are used to improve verification productivity, each of which tackles one or more of the areas in which verification engineers spend their time. (See Fig. 2.) One strategy is to find bugs earlier in the flow and thus diminish the time spent in debug. Another is to make the creation of tests simpler. A third is to improve the engines, or the ways in which engines can be used to solve problems. Most engineering teams employ aspects of all of these strategies. Foster also notes that the percentage of time spent on each task has not changed significantly over the years the survey has been conducted.


Fig. 2: Where ASIC/IC verification engineers spend their time. Source: Wilson Research Group/Mentor, a Siemens Business.

It was widely expected that verification productivity this year would be aided by the release of the Portable Stimulus Standard from Accellera. While that did not happen, increased attention is being applied in this area, and users are beginning to adopt solutions even ahead of the standard's release.

Shifting left
The best strategy may be to ensure bugs are not in the design in the first place. “A shift left methodology provides faster time to market,” says Kiran Vittal, product marketing director for verification at Synopsys. “In a traditional design and verification flow, you write test vectors and do simulation, emulation or prototyping, but you can do things earlier in the design cycle and catch bugs earlier. Static and formal allow you to catch bugs without having to write test vectors.”

Raik Brinkmann, president and CEO of OneSpin Solutions, is in full agreement. “Formal already provides a major contribution to the shift left paradigm. Here, the verification intent ranges from implementation issues (implied intent) to thorough block-level functional coverage (specification intent). Portable Stimulus will also address the coverage issue from a systems perspective, where it becomes increasingly harder to capture design and verification intent in a machine-readable and humanly comprehensible way. It remains to be seen how the intent gap between formal at the block level and portable stimulus at the system level will be covered.”

Strides are being made in static verification, as well. “We are adding functional checks beyond structural checks to lint types of tools,” says Pete Hardee, product management director at Cadence. “With lint I am checking the structure of the code, the syntax and so on, but now we are expanding that to livelock and deadlock checks for finite state machines, automating the addition of fairness constraints, and more. This makes it more available to the designers.”
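
As a rough illustration of what such a liveness-oriented FSM check does, the sketch below walks a state-transition graph and flags deadlock states (no outgoing transitions) and states from which a designated idle state can never be reached again, a common symptom of livelock. It is a simplified model under assumed data structures, not any vendor's implementation; names such as `transitions` and `IDLE` are illustrative.

```python
from collections import defaultdict, deque

def fsm_liveness_check(transitions, idle_state):
    """Flag potential deadlock/livelock states in an FSM transition graph.

    transitions: dict mapping state -> set of successor states
    idle_state:  the state the FSM should always be able to return to
    """
    states = set(transitions) | {s for succs in transitions.values() for s in succs}

    # Deadlock: a state with no outgoing transitions at all.
    deadlocks = {s for s in states if not transitions.get(s)}

    # Livelock symptom: states from which idle_state is unreachable.
    # Walk the reversed graph starting at idle_state; anything not visited
    # can keep transitioning forever without ever getting back to idle.
    reverse = defaultdict(set)
    for src, succs in transitions.items():
        for dst in succs:
            reverse[dst].add(src)

    can_reach_idle = {idle_state}
    queue = deque([idle_state])
    while queue:
        node = queue.popleft()
        for pred in reverse[node]:
            if pred not in can_reach_idle:
                can_reach_idle.add(pred)
                queue.append(pred)

    livelock_suspects = states - can_reach_idle - deadlocks
    return deadlocks, livelock_suspects

# Toy FSM: BUSY and RETRY only ping-pong between each other, never back to IDLE.
fsm = {
    "IDLE":  {"BUSY"},
    "BUSY":  {"RETRY"},
    "RETRY": {"BUSY"},
    "DONE":  set(),          # no way out: deadlock
}
print(fsm_liveness_check(fsm, "IDLE"))
```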

One of the biggest productivity gains with shift left comes from a reduced iteration loop. “If you catch something downstream, you may have to go back up in the flow and repeat a number of steps,” says Vaishnav Gorur, product marketing manager for verification at Synopsys. “Then you find another problem. So the iterations are reduced in number and complexity when verification is performed closer to the design.”

Choosing the right engine
There are certain tasks where it is now accepted that formal can do a better job. “Clock domain crossing (CDC) is not caught by design reviews or functional verification or static timing verification,” says Vittal. “It requires a separate solution for that problem. In the past couple of years, as designs integrate lots of IP, they all have asynchronous resets as well. So a new problem was introduced, reset domain crossing, and tools are needed to address these types of issues.”
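
At its core, a structural CDC check looks for flops clocked in one domain but fed from another domain without passing through a recognized synchronizer. The sketch below is a deliberately simplified model of that idea over a flattened register list; the data structures and the two-flop synchronizer naming are assumptions for illustration, not how any particular tool represents a netlist.

```python
def find_cdc_violations(registers, synchronizers):
    """Flag registers clocked in one domain but fed from another domain
    without a recognized synchronizer in between.

    registers:     list of dicts: {"name", "clock", "data_sources"},
                   where data_sources lists the registers driving the D input
    synchronizers: set of register names recognized as synchronizer stages
    """
    domain_of = {reg["name"]: reg["clock"] for reg in registers}
    violations = []

    for reg in registers:
        if reg["name"] in synchronizers:
            continue  # a synchronizer stage is allowed to sample async data
        for src in reg["data_sources"]:
            src_domain = domain_of.get(src)
            if src_domain is not None and src_domain != reg["clock"]:
                violations.append((src, reg["name"],
                                   f"{src_domain} -> {reg['clock']} unsynchronized"))
    return violations

regs = [
    {"name": "tx_req",     "clock": "clk_a", "data_sources": []},
    {"name": "rx_req_ff1", "clock": "clk_b", "data_sources": ["tx_req"]},
    {"name": "rx_req_ff2", "clock": "clk_b", "data_sources": ["rx_req_ff1"]},
    {"name": "rx_data",    "clock": "clk_b", "data_sources": ["tx_req"]},  # raw crossing
]
print(find_cdc_violations(regs, synchronizers={"rx_req_ff1", "rx_req_ff2"}))
```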

There continue to be issues deciding which blocks would be better verified with formal techniques rather than dynamic verification. “There is significant customer evidence that they want to choose blocks that they can completely sign off with formal,” says Hardee. “A big factor in this is the state space. If something has an excessively deep state space, then chances are that they will go towards simulation rather than formal. But because of scalability improvements, formal can do a lot more than it used to be able to target. Typically, formal would have been applied to control-oriented blocks and possibly some data transport, but not transformation. Now we can cope with the simpler end of data transformation in terms of arithmetic datapath. What you can apply formal to has expanded a lot.”

Choosing the right abstraction
Abstraction is becoming a lot more important. “You need a methodology that relies on hierarchy,” says Vittal. “You verify the IP first and then bring in an abstract model of the IP to do SoC verification. You do not want to solve the same problem over and over again. To address capacity issues and long run times associated with verifying the SoC, you can create an abstract model and now you can handle much larger designs. At the SoC level you are only looking for interface issues between the IP.”

The use of abstraction becomes even more important when software becomes an integral aspect of the system. “People want information faster and they are willing to give up some accuracy,” says Frank Schirrmeister, senior group director for product management at Cadence. “We have power extended into the front-end, where you are really just looking at the toggle information and from that derive where the hot-spots are. People want this to be faster and available earlier, and they are willing to compromise on accuracy. It is meant for software developers so they can look at how their software executes in the context of relative changes. Other people are not willing to risk using abstraction, and instead want more accuracy. The whole multi-domain execution comes in here, as well. They want the interconnect in a system to be accurate. Otherwise, you may miss some of the arbitration effects or aspects of cache coherence. But then you run that on emulation to get enough data.”
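
The trade-off Schirrmeister describes, toggle counts in exchange for accuracy, can be pictured with a back-of-the-envelope dynamic-power estimate: activity times capacitance times voltage squared times frequency, summed per block to rank relative hot spots. The sketch below is only that envelope; the block names, capacitance figures, and toggle counts are invented for illustration.

```python
def relative_hotspots(toggle_counts, cap_estimates, vdd=0.8, freq_hz=1.0e9, cycles=1.0e6):
    """Rank blocks by estimated dynamic power: P ~ alpha * C * V^2 * f.

    toggle_counts: dict block -> total net toggles observed over `cycles`
    cap_estimates: dict block -> rough switched capacitance per toggle (farads)
    """
    estimates = {}
    for block, toggles in toggle_counts.items():
        alpha = toggles / cycles                  # average switching activity
        estimates[block] = alpha * cap_estimates[block] * vdd ** 2 * freq_hz
    return sorted(estimates.items(), key=lambda kv: kv[1], reverse=True)

# Illustrative numbers only: toggles collected from a fast emulation run.
toggles = {"cpu_cluster": 4.2e5, "gpu": 9.8e5, "ddr_ctrl": 1.1e5}
caps    = {"cpu_cluster": 3.0e-12, "gpu": 2.5e-12, "ddr_ctrl": 4.0e-12}
for block, watts in relative_hotspots(toggles, caps):
    print(f"{block:12s} ~{watts * 1e3:.2f} mW (relative estimate)")
```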

Improving engines
Vendors keep working on improvements to both the core engines and the surrounding software. “We have made a lot of progress in the area of verification reuse,” adds Schirrmeister. “In the past, verification was fragmented. The first axis in helping people is reusing verification such that you don’t have to rebuild the verification environment when migrating between different engines. The objective is to be able to move from engine to engine as fast as possible and to be able to get more visibility for debug as required.”

Debug is the biggest consumer of time. “Debug is a horizontal that goes across all verification technologies,” says Synopsys’ Gorur. “Having a common debug platform reduces the learning curve. That improves engineers’ productivity and reduces turnaround time.”

As designs grow larger there is a danger that violation noise can become a problem, as well. “You could end up with lots of issues or violations,” says Vittal. “This means that you need tools that are smart enough to not just list the issues, but to identify the root cause of the issue—such that if you fix that, then many other violations will disappear, as well.”
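
One simple way to picture root-cause reduction is to group violations by the signal or construct they ultimately trace back to, so that one fix retires a whole cluster. The sketch below assumes a flat violation list with a pre-computed "root" field; deriving that root from the design is the hard part that the tools actually do.

```python
from collections import defaultdict

def group_by_root_cause(violations):
    """Cluster violations that share a root cause so one fix clears many.

    violations: list of dicts with "id", "message", and "root" (the signal,
                reset, or construct the violation traces back to).
    Returns clusters ordered by how many violations each fix would retire.
    """
    clusters = defaultdict(list)
    for v in violations:
        clusters[v["root"]].append(v["id"])
    return sorted(clusters.items(), key=lambda kv: len(kv[1]), reverse=True)

# Hypothetical report entries for illustration.
report = [
    {"id": "CDC-001",  "message": "unsynchronized crossing", "root": "rst_async_n"},
    {"id": "CDC-007",  "message": "unsynchronized crossing", "root": "rst_async_n"},
    {"id": "CDC-013",  "message": "convergence after sync",  "root": "rst_async_n"},
    {"id": "LINT-042", "message": "width mismatch",          "root": "cfg_bus[31:0]"},
]
for root, ids in group_by_root_cause(report):
    print(f"fix {root}: clears {len(ids)} violation(s) -> {ids}")
```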

The increased complexity of testbenches and the usage of UVM also create another problem. “Debugging testbenches is a monumental task,” says Gorur. “The verification team has to look at all failures and figure out if each is a test issue or a design issue before it can be handed off.”

The debugging of the testbench is very different than design debugging. “We have enabled a new use model for interactive testbench debug, which looks more like a software debug approach,” adds Gorur. “Now you can step through code to identify where things are failing and see the current state of all variables. This is different from the traditional debug approach, where you run a large simulation, dump all of the signals into a database, and then analyze it post-simulation. We also enable the engineer to step both forward and backward, and that saves a lot of time on iterations.”
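
The forward-and-backward stepping Gorur describes is conceptually a record-and-replay debugger: snapshot the relevant variables at each step so earlier states can be revisited without rerunning the simulation. The sketch below shows only that core idea; it is not how any commercial testbench debugger is implemented, and the variable names are hypothetical.

```python
import copy

class ReplayDebugger:
    """Record variable snapshots per step so a debug session can move
    both forward and backward through an execution trace."""

    def __init__(self):
        self.trace = []      # list of (step_label, snapshot_of_variables)
        self.cursor = -1

    def record(self, label, variables):
        self.trace.append((label, copy.deepcopy(variables)))
        self.cursor = len(self.trace) - 1

    def step_back(self):
        self.cursor = max(0, self.cursor - 1)
        return self.trace[self.cursor]

    def step_forward(self):
        self.cursor = min(len(self.trace) - 1, self.cursor + 1)
        return self.trace[self.cursor]

# Hypothetical testbench variables captured at three points in a failing test.
dbg = ReplayDebugger()
dbg.record("drive_item",  {"addr": 0x40, "data": 0xAB, "resp": None})
dbg.record("send_to_dut", {"addr": 0x40, "data": 0xAB, "resp": None})
dbg.record("check_resp",  {"addr": 0x40, "data": 0xAB, "resp": "SLVERR"})
print(dbg.step_back())   # jump back one step without rerunning the simulation
```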

Tool improvements can come in several ways. “There are three things that impact formal scalability,” explains Hardee. “Algorithm improvement, parallelism, and the application of machine learning in terms of learning which algorithm or engine is best for which problems. This is engine orchestration. It combines the algorithms in the engines and the abstractions that can be used, the engine choice, and applies machine learning to that problem so that we know which engines are making progress on the design. Then, for regressions, we can make more rapid progress than we did the first time around.”
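
A stripped-down way to think about engine orchestration is to remember, per property, which engine made progress last time and try it first in the next regression, falling back to the others otherwise. The sketch below is that heuristic and nothing more; real formal tools use far richer features and scheduling, and every name here is illustrative.

```python
def orchestrate(properties, engines, run_engine, history=None):
    """Try the engine that previously solved each property first.

    run_engine(engine, prop) -> True if the engine proved or falsified prop
    history: dict prop -> engine that solved it in an earlier run
    Returns updated history for use in the next regression.
    """
    history = dict(history or {})
    for prop in properties:
        preferred = history.get(prop)
        order = ([preferred] if preferred in engines else []) + \
                [e for e in engines if e != preferred]
        for engine in order:
            if run_engine(engine, prop):
                history[prop] = engine     # remember the winner for next time
                break
    return history

# Toy "solver": pretend each property only yields to one particular engine.
answers = {"p_no_overflow": "bdd", "p_handshake": "sat", "p_fifo_order": "sat"}
solved_by = lambda engine, prop: answers[prop] == engine

first  = orchestrate(answers, ["bdd", "sat", "induction"], solved_by)
second = orchestrate(answers, ["bdd", "sat", "induction"], solved_by, history=first)
print(first)   # winners learned on the first pass are tried first on the second
```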

Other aspects of verification reuse are becoming possible at the system level. “An SoC can be configured in ‘n’ different ways, and I need to efficiently verify all of the configurations,” he says. “If you use a lot of VIP in conjunction with the IP, you need to integrate them together to form a system-level testbench. We have enabled the automation of testbench construction based on the design and the VIPs, and that provides a quick way to get to first test. Time to first test used to be weeks to months. It can be reduced to hours, or less.”

New requirements
Verification teams are now tasked with more than just functional correctness. Performance requires a different approach. “You need to make sure that latency and bandwidth goals are met,” says Gorur. “We have automated the generation of performance stimuli to drive these tests, as well as the analysis and debug of them. So if you want to verify the latency from this master to this slave, the system will extract that information and graph it for you. Then you can go into debug if problems are found and identify the root cause.”
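
Checking a latency goal of this kind reduces to pairing request and response timestamps per master-to-slave path and comparing the distribution against the target. The sketch below assumes a simple list of timestamped transactions; the field names and the latency budget are illustrative, not taken from any tool.

```python
from statistics import mean

def latency_report(transactions, budget_ns):
    """Summarize request-to-response latency per master -> slave path.

    transactions: list of dicts with "master", "slave", "req_ns", "resp_ns"
    budget_ns:    the latency goal each path must meet
    """
    paths = {}
    for t in transactions:
        paths.setdefault((t["master"], t["slave"]), []).append(t["resp_ns"] - t["req_ns"])

    for (master, slave), lats in sorted(paths.items()):
        worst = max(lats)
        verdict = "PASS" if worst <= budget_ns else "FAIL"
        print(f"{master} -> {slave}: avg {mean(lats):.1f} ns, "
              f"worst {worst} ns, goal {budget_ns} ns [{verdict}]")

# Illustrative transactions from a performance run.
log = [
    {"master": "cpu0", "slave": "ddr", "req_ns": 100, "resp_ns": 162},
    {"master": "cpu0", "slave": "ddr", "req_ns": 300, "resp_ns": 395},
    {"master": "dma",  "slave": "ddr", "req_ns": 120, "resp_ns": 340},
]
latency_report(log, budget_ns=200)
```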

Safety is another area of increasing concern. “High-reliability applications have resulted in new applications for formal tools based on fault injection,” says Brinkmann. “These applications range from functional verification of safety mechanisms to formal diagnostic coverage analysis at the SoC level. In a safety setting, rigorous functional verification, including thorough verification planning and tracking, has become mandatory, and formal combined with objective quantitative coverage measures plays out its full strength.”

Formal can help in several ways. “You have a fault list that is statistical,” explains Hardee. “You cannot inject every possible fault in every node, so it has to be reduced. You get the statistically generated list of faults, which can be reduced further through testability analysis. This saves fault simulation time and means you do not waste time on faults that can be proven to be untestable. More importantly, formal can do post-fault simulation analysis. There is a four-quadrant diagram, where on one axis we have, ‘Did my checkers find the fault?’ and the other axis is, ‘Does the fault propagate to a functional output?’ and thus become potentially dangerous. The worst category of fault is the one that can do damage and is not picked up by the checker.”
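
Hardee's four-quadrant view can be written down directly: for each injected fault, did it propagate to a functional output, and did a checker flag it? The dangerous corner is propagated-but-undetected. The sketch below classifies a fault-campaign result table with assumed field names; it is an illustration of the bookkeeping, not of how faults are injected or analyzed.

```python
def classify_faults(results):
    """Sort fault-injection results into the four safety quadrants.

    results: list of dicts with "fault", "propagated" (reached a functional
             output) and "detected" (a checker/safety mechanism flagged it).
    """
    quadrants = {"safe": [], "detected_only": [], "dangerous_detected": [],
                 "dangerous_undetected": []}
    for r in results:
        if r["propagated"] and r["detected"]:
            quadrants["dangerous_detected"].append(r["fault"])
        elif r["propagated"]:
            quadrants["dangerous_undetected"].append(r["fault"])   # the bad corner
        elif r["detected"]:
            quadrants["detected_only"].append(r["fault"])
        else:
            quadrants["safe"].append(r["fault"])
    return quadrants

# Hypothetical campaign entries for illustration.
campaign = [
    {"fault": "U1/Q stuck-at-0", "propagated": True,  "detected": True},
    {"fault": "U7/D stuck-at-1", "propagated": True,  "detected": False},
    {"fault": "U9/Q stuck-at-1", "propagated": False, "detected": False},
]
print(classify_faults(campaign))
```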

Hardee explains that there are dangers with this when associated with simulation. “Because the testbench is not exhaustive, you may have been lucky that something didn’t propagate to an output. Did I get lucky that the checker caught it? For some faults you want to do a deeper analysis, and you may want to prove, using formal, whether there is a possibility that the testbench didn’t find something. Is there any path by which this fault can propagate to a functional output? Similarly, I want to be able to prove, ‘Does a checker always find this fault?’ or whether there is a circumstance where the checker misses it.”

Coverage closure
So when are you done with verification? This is the proverbial nightmare of all development teams, and it places a lot of pressure on coverage. “You can get up to 80% coverage with simulation fairly quickly,” says Vittal. “Then you spend a lot of time getting it higher. Within the last 20% there is 5% or 10% of the design that is unreachable. Formal technology used up front can weed out that unreachable space, and the design team now doesn’t have to target it.”
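
The arithmetic behind Vittal's point is simple: removing formally proven unreachable items from the denominator is what lets the team close on the coverage that is actually achievable. The sketch below uses plain counts chosen for illustration; in practice the exclusions come from a formal unreachability analysis.

```python
def coverage_closure(total_items, covered_items, unreachable_items):
    """Report raw coverage and coverage after excluding proven-unreachable items."""
    raw = covered_items / total_items
    reachable = total_items - unreachable_items
    adjusted = covered_items / reachable
    remaining = reachable - covered_items
    return raw, adjusted, remaining

# Illustrative numbers: 10,000 coverage items, 8,000 hit, 800 proven unreachable.
raw, adjusted, remaining = coverage_closure(10_000, 8_000, 800)
print(f"raw coverage      : {raw:.1%}")
print(f"adjusted coverage : {adjusted:.1%} (unreachable excluded)")
print(f"items left to hit : {remaining}")
```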

But formal coverage and simulation coverage are not the same. “When using formal, they want to prove completely, and that is an extremely high bar compared to simulation,” Hardee says. “Simulation confidence does not prove completely. I want to ensure that my testbench produces no further failures and I want to measure the coverage to provide some confidence that the testbench has checked everywhere. That is a generic definition of verification closure.”

The industry is spending a lot of effort finding ways in which formal effort can impact coverage closure. “Consider bug hunting,” Hardee says. “I am not looking for proofs. I am looking to disprove. I am looking for counter examples. I am finding bugs that simulation could not find. Now go back to the definition of signoff where we want no more failures with a defined level of coverage. The fact that I can find no more counter examples – is that good enough? No, I need to know where I have been and looked. One innovation this year is deriving coverage from bug hunting.”

Conclusion
Verification teams have to innovate constantly, finding new and more efficient ways to reach coverage closure faster. There is no single way to do that, and all aspects of the methodology and tool usage have to be considered.

Abstraction is gaining traction, especially where software is involved. When Portable Stimulus is released, it should provide an industry-wide framework that enables far greater test reuse and consolidation of verification effort. It also may provide the biggest productivity boost the industry has seen in terms of verification intent definition and test synthesis. However, the industry cannot afford to put all of its eggs in one basket, and effort continues to be expended across the whole range of tools, models and flows. Those who stick with what they know are likely to be left behind and increase the danger of bug escapes.



1 comment

Kev says:

>> “Clock domain crossing (CDC) is not caught by design reviews or functional verification or static timing verification,” says Vittal. “It requires a separate solution for that problem.”

Untrue, you can catch it in functional test with this technique (at the same time as doing power) –

http://www.v-ms.com/ICCAD-2014.pdf

Verification is falling behind because the techniques and methodology are over a decade old.
