Shifts In Verification

A number of tools are now required to provide sufficient coverage, and design teams are rethinking what to use and when to use them.


By Ann Steffora Mutschler

Verifying an SoC requires a holistic view of the system, and engineering teams use a number of tools to reach a high degree of confidence in the coverage. But how and when to use those tools is in flux as engineering teams wrestle with increasing complexity at every level of the design and the skyrocketing challenge of verifying it.

There are no hard and fast rules here. Each design is different, and the tools needed for that design vary. But what is clear is that how the various verification techniques are applied is shifting as companies experiment with better ways to bolster confidence in their designs.

“Verification always was, is and remains a totally unbounded problem, so you will never be done,” said Frank Schirrmeister, group director for product marketing of the System Development Suite at Cadence. “On top of it, in the time of integration of lots of pre-defined IP, you want to make sure that those components you’re integrating are really fully verified, otherwise the potential error causes multiply. If it’s 0.9 at one, and you have 10 of them, well then you have 0.9 times 0.9, 10 times, and suddenly the probability of the whole thing still working gets really low.”
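
To make the arithmetic behind Schirrmeister's point concrete (illustrative numbers only, and assuming each block's correctness is independent of the others): if each of 10 integrated IP blocks is verified to a confidence of 0.9, the chance that the assembled system is free of those errors falls to roughly one in three.

```latex
P_{\text{system}} = \prod_{i=1}^{10} P_{\text{block},i} = 0.9^{10} \approx 0.35
```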

The earlier engineering teams get into the verification of the different sub-blocks, the subsystem and the SoC itself, the higher their level of confidence. That accounts for the increasing use of static verification techniques prior to simulation—the traditional heavy-lifter in a verification methodology.

“Static techniques definitely do help to make the job of the verification engineer easier by making it more robust before integration,” Schirrmeister said. “They are definitely important, but I haven’t personally seen them at the full system level. You need both, really, and verification is one of those problems that is kind of like parenting—you need all the help you can get. If something is better verified at the block level because you are getting it into a more verified state earlier, then hallelujah, that’s all great.”

Bounding the problem comes down to a combination of higher levels of abstraction and approaching the issues one level at a time, he added. “You make sure that your IP is clear and proper. You make sure that your subsystem is clear and proper, and then when it comes to the system level, you have different levels of verification. You verify that things are integrated correctly. You have already verified that the individual components you are integrating are largely done, so that hopefully helps. On top of it, you build scenarios, and not all scenarios will run at all levels of details. It’s all hierarchical divide-and-conquer. At the highest level you have scenarios which then unfold into the lower levels into individual verification tasks.”

David Hsu, director of product marketing for static and low-power verification at Synopsys, essentially agreed. “You need to use the right tool for the right job. But right now the job is changing. We’re looking at this world where you have these incredibly large and complex designs, which hasn’t changed. Things are getting bigger all the time. But in terms of what makes up these designs, a mobile SoC doesn’t look very much like a design we were dealing with even a few years ago. From that perspective, a more heterogeneous but well-correlated and well-integrated verification strategy is hugely important compared with something that was a little bit more monolithic a few years ago.”

He noted that simulation is hugely powerful. “There’s not any constraint on design size or complexity. You just have to be able to afford the time and a big machine…and a lot of customers do that because it is a perfect application for a bunch of requirements. If you need to do a gate-level simulation to validate your interface and all those kinds of top-level performance issues, that’s the tool you want to be using.”

However, Pranav Ashar, chief technology officer at Real Intent, asserted that there are some big changes happening in the SoC design world today, which require verification to be looked at differently. “The SoC design flow and processes and paradigm are getting better understood.” He said this was underscored in the keynote that Wally Rhines gave recently at DVCon, where one of his major points was that people are starting to understand what it takes to assemble IP into an SoC. “The corollary of that is that the steps along the way that need verification and oversight are becoming part of the process.”

As a result of having a much better understanding of the SoC design process, there is also a much better understanding of the SoC verification process, Ashar said. “What is becoming clear is that in this verification process—basically what we think we need to verify in the SoC—pushing the envelope of verification is not on the verification of the block internals as much as it is on how the block is integrated. It’s how the whole chip at the system level performs. It’s about integration and system-level issues rather than block internals. A different way of looking at it is that the complexity of an SoC is not as much in the functionality of the individual blocks but the layers on top of it. These layers come in a number of different flavors, and companies like ours are trying to address the verification of the SoC in the context of every one of these layers of complexity that gets added on top. It’s turning out that a lot of the verification obligations are understood at a much deeper level that enables the use of static techniques rather than having to use simulation.”

He looks at the verification process as consisting of three big buckets: the specification of what needs to be verified, the analysis in the context of what is being verified, and the debug.

“Generally, simulation is a sort of fallback technique. When one of these three buckets (the specification, the analysis or the debug) is fuzzy or incomplete, or there’s something that doesn’t compute in those, then you use simulation. Simulation says, ‘I’m going to apply these vectors, I’m going to see what happens,’ and it always turns into something that I can act on. If I get a trace that maps onto the design, I can eyeball it,” Ashar explained.

In the context of an SoC with these additional layers of complexity, the verification obligations are understood at a deep enough level that, for many of them, simulation is a distant option and a true fallback. “You can do a lot with a static approach to these verification environments,” he said.

At the end of the day, none of this can stand on its own, Synopsys’ Hsu said. “It’s all nice to have when you do that, but then you end up with a little niche. The important thing is that it does have to fit back into the methodology – there has to be a unified view of coverage; everything has to be correlated into what the metrics are.”

He believes that over time it will become crystal clear where static analysis is absolutely mandatory, and that adoption in both implementation and verification flows will grow very fast.

“Just using static technologies is a no-brainer. What has been limiting is that there are performance issues related to the tools themselves, and those will get addressed. What also matters is being able to have the static tools provide a view of the design that fits in concert with what downstream tools are going to say. For example, if you can avoid doing something in dynamic simulation because it’s been completely covered in static verification, on the same coverage database or using the same assertions, then the customer has a probable level of confidence that they don’t have to spend the time to create a testbench to go after that part of the verification challenge,” Hsu concluded.


