Experts At The Table: Verification Strategies

Second of three parts: Different applications for tools; who’s doing the verification; automated assertions; the role of UVM; EDA opportunities and challenges; how things are really done.

By Ed Sperling
System-Level Design sat down to discuss verification strategies and changes with Harry Foster, chief verification scientist at Mentor Graphics: Janick Bergeron, verification fellow at Synopsys; Pranav Ashar, CTO at Real Intent; Tom Anderson, vice president of marketing at Breker Verification Systems; and Raik Brinkmann, president and CEO of OneSpin Solutions. What follows are excerpts of that discussion.

SLD: As complexity increases, how do we determine what is functionally correct, and why have some tools caught on while others haven’t?
Foster: One of the reasons statistical approaches didn’t catch on is that we were able to extract out details that fell within a range that was acceptable.
Ashar: What’s happening is we’re breaking down barriers. The same thing is happening with asynchronous interfaces on chip. You cannot verify that with functional simulation or static timing analysis. It’s the intersection of those two spaces. You need to orthogonalize in different spaces. It’s about bringing together everything available to solve problems. Static techniques get a lot of play—structural analyses and timing techniques. They feed on each other and on simulation. In one sense we have lost some orthogonalization, but it has forced us to look at partitioning along other dimensions.
Anderson: When you think about the division of verification into multiple domains, or orthogonal spaces, you have to identify different people who might contribute to that. One aspect is giving a piece of the verification process to the designers. I’m not sure we’ve been successful at that, however. For years we tried to get designers to buy into formal and write assertions. They can do a lot of verification through these technologies before setting up a testbench, and architects can verify a lot of things—major pieces of functionality and performance—with the system-level model before they start generating RTL. Sometimes I feel like we’ve backslid on that.
Ashar: We’ve seen some progress on the system level because there’s a clear benefit to them. On the designer level, it’s really hard to tell designers there is a benefit because they can throw it over the wall to someone else. That’s not going to change.
Anderson: But it’s harder for you to fix a bug that gets thrown back over the wall to you. Now you’ve got to figure out what went wrong in a 300 million-gate design. That was always the argument, at least.
Foster: There has been an interesting change there. The balance of time designers spend on design versus verification has crossed over. Designers are now spending more time on verification than design—and that’s independent of the verification team. There has been a shift.
Bergeron: Is there a breakdown in where that effort is spent? Are they using formal? Or are they doing their own ad hoc testbenches?
Foster: It’s a combination of sandbox testing and debug. There isn’t just one thing. But concerning formal, historically we haven’t been able to get designers to buy into that. That has changed recently. If you talk to designers about pure formal property checking, the reluctance is still there. But there is a move toward automatic formal verification where, quite often, the designer doesn’t even know there is a formal engine under the hood. That has really taken off.
Ashar: ABV is a technology, not a solution. It’s an enabler. For the designer, you’re not writing the ABV assertions, but you are crystallizing what the verification obligations are. That’s happening everywhere. Companies in this room are becoming verification solutions providers rather than formal or simulation tools providers. We are hearing from our customers that because static techniques are getting a lot of play, a lot of the verification is being done by designers before the design is handed over to the hardcore verification people. Anything that can be done statically hardens the design.
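As an illustration of the kind of designer-level, assertion-based check Ashar is describing, here is a minimal SystemVerilog sketch; the FIFO block and its signal names (push, pop, count) are hypothetical, not something discussed by the panel.

    // Hypothetical designer-level checks on a FIFO block. They crystallize
    // the block's verification obligations and can be exercised by
    // simulation or a formal engine before any testbench exists.
    module fifo_checks #(parameter int DEPTH = 16) (
      input logic                   clk,
      input logic                   rst_n,
      input logic                   push,
      input logic                   pop,
      input logic [$clog2(DEPTH):0] count
    );
      // Obligation: never push into a full FIFO.
      assert property (@(posedge clk) disable iff (!rst_n)
        (count == DEPTH) |-> !push)
        else $error("push while full");

      // Obligation: never pop from an empty FIFO.
      assert property (@(posedge clk) disable iff (!rst_n)
        (count == 0) |-> !pop)
        else $error("pop while empty");
    endmodule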
Brinkmann: There are a lot of standardized tasks in verification. Register verification is one example. You can come up with a solution that’s almost push-button, but it only solves part of the problem. If you’re creating a large system, this covers maybe 20% of the functionality. For the larger part of your functionality you still have to decide whether to use UVM or ABV or some other technology. There is lots of room for improvement. The question is how you prove the benefit to people.
Foster: The question is whether the solution is more painful than the problem, or whether the problem is more painful than the solution. That’s what makes people change.
Anderson: The phrase ‘UVM or ABV’ bothers me. Assertions have always been essential to doing a good job in any constrained random flow. They’re one of the best metrics for finding out what’s going on in your design.
Bergeron: And it’s one of the easiest ways to find errors. One of the things that was mentioned earlier is that UVM seems to be equated with constrained random verification. That’s what it was designed for, but you can still use the UVM infrastructure to do directed tests and things that are more relevant at the system level.
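To make Bergeron’s point concrete, the same UVM infrastructure used for constrained random can drive a directed test simply by assigning transaction fields instead of randomizing them. The sketch below assumes a hypothetical bus_item transaction class; none of these names come from the discussion.

    // Hypothetical directed sequence reusing the constrained-random
    // infrastructure: same sequencer, driver, and scoreboard, but with
    // fixed stimulus values instead of tr.randomize().
    import uvm_pkg::*;
    `include "uvm_macros.svh"

    class directed_write_seq extends uvm_sequence #(bus_item);
      `uvm_object_utils(directed_write_seq)

      function new(string name = "directed_write_seq");
        super.new(name);
      endfunction

      task body();
        bus_item tr = bus_item::type_id::create("tr");
        start_item(tr);
        tr.kind = bus_item::WRITE;   // directed, not randomized
        tr.addr = 32'h0000_1000;
        tr.data = 32'hDEAD_BEEF;
        finish_item(tr);
      endtask
    endclass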

SLD: As tools get applied to new things, are we using them as effectively as we did in the past? And on a broader note, are we keeping up with complexity in new designs?
Foster: That’s where a lot of real opportunity lies. A lot of people put together a branded UVM environment. This past year, I was invited to do a verification assessment at a large company. It was a process methodology focus rather than a tools focus. We got into the part about coverage modeling. They were very proud of the fact that they were doing functional coverage on a multi-million gate design. They wrote 12 cover points. People think they’re going to adopt these technologies, but they haven’t effectively adopted them. That’s where the opportunity is.
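For reference, a functional coverage model in SystemVerilog is built from covergroups whose cover points and crosses define the bins a regression must hit; a dozen cover points on a multi-million gate design is a very sparse model. The transaction fields below are hypothetical.

    // Hypothetical coverage model for a simple bus interface.
    module bus_coverage (
      input logic       clk,
      input logic       tr_kind,   // 0 = read, 1 = write
      input logic [4:0] tr_len
    );
      covergroup bus_cov @(posedge clk);
        cp_kind : coverpoint tr_kind { bins read  = {0};
                                       bins write = {1}; }
        cp_len  : coverpoint tr_len  { bins single  = {1};
                                       bins burst[] = {[2:16]}; }
        kind_x_len : cross cp_kind, cp_len;  // every kind at every length
      endgroup

      bus_cov cov = new();
    endmodule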
Bergeron: One of the big challenges in adopting technologies is that people want to bring some of the checks to new features, but to do that they need to add a little more of the intent behind them. We can infer some stuff, but it’s not part of the intent. What often happens is they back-fit a limited intent description and end up shackling themselves, because they don’t want to invest the effort in capturing what they really meant. They’ve dumbed down the technology they had before.
Ashar: That is going to be one of the under-championed benefits of SystemVerilog. It allows you to have higher-level data structures in the design, and the more of those you have, the easier this becomes. Right now we’re limited by the functional intent we can recover from RTL; the higher-level intent is being obfuscated. The more metadata structures you can bring into the design, the more you can develop these automatic techniques. The more you can articulate to the verification engineer what a certain block is supposed to do, the better the verification process works. Having higher-level structures in the design process allows the verification engineer to see what you intended.
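As a small illustration of that point, a typed SystemVerilog declaration carries intent that a raw bit vector hides; the packet-header fields here are hypothetical.

    // A raw bit vector obscures what the designer meant...
    logic [47:0] hdr_bits;

    // ...while typed structures make the intent explicit, so both the
    // verification engineer and automatic tools can reason about it.
    typedef enum logic [1:0] { REQ, RSP, ERR } pkt_kind_e;

    typedef struct packed {
      pkt_kind_e   kind;         // what the transaction is
      logic [7:0]  src_id;       // who sent it
      logic [7:0]  dst_id;       // who should receive it
      logic [29:0] payload_len;  // how much data follows
    } pkt_hdr_t;

    pkt_hdr_t hdr;               // same 48 bits, far more intent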

SLD: We have a lot more unknowns in design than in the past. IP is a black box that isn’t always used as planned. Software is new. Use models vary. Does verification become more challenging because of that?
Anderson: We’re using IP from different sources and with different levels of quality. It used to be that we felt like we had to re-verify everything. That’s changed. There’s a lot of trust and a lot of history with companies that have been in the IP business for 15 years. They’ve produced thousands of chips that work. So people are less concerned about re-verification. But the effective re-use of any piece of a verification environment for an IP block is not very well done yet.
Foster: I agree. The IP providers could do a better job of providing collateral that would simplify the integration of the IP. For example, IP at the moment is a black box, and when something goes wrong during integration no one understands it. Even when assertions are provided to simplify the debug, you may not have connected the IP correctly.
Ashar: It also helps at the end of the day if you know what you’re really checking. We had someone tell us that an IP block was working correctly at 400MHz, and they wanted to know whether, if the frequency was reduced to 200MHz, the interface would still work at the same power.
Foster: The collateral is more than assertions. For example, the coverage model defines what you need to test when you integrate something. That’s what’s missing a lot of times.
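One concrete form that collateral can take is a checker module the IP provider ships alongside the block and the integrator binds onto its instance, combining assertions for debugging the hookup with cover properties that define what integration testing must exercise. The valid/ready handshake and the ip_block instance name below are hypothetical.

    // Hypothetical integration collateral shipped with an IP block.
    module ip_if_checks (input logic clk, rst_n, valid, ready);
      // Check: once asserted, valid must hold until the transfer is accepted.
      assert property (@(posedge clk) disable iff (!rst_n)
        valid && !ready |=> valid)
        else $error("valid dropped before ready");

      // Coverage: the integrator is expected to hit back-to-back transfers.
      cover property (@(posedge clk) disable iff (!rst_n)
        valid && ready ##1 valid && ready);
    endmodule

    // Bound onto the IP instance in the SoC, with no change to the IP source.
    bind ip_block ip_if_checks u_checks (.clk(clk), .rst_n(rst_n),
                                         .valid(valid), .ready(ready));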
Ashar: Once you have a solutions-based approach, what you need to check is implicit. It’s easier to answer what happens when you use a piece of IP in a different way.


