Which Verification Engine?

Experts at the Table, part 1: No single tool does everything, but do all verification tools have to come from the same vendor?


Semiconductor Engineering sat down to discuss the state of verification with Jean-Marie Brunet, senior director of marketing for emulation at Mentor, a Siemens Business; Frank Schirrmeister, senior group director for product management at Cadence; Dave Kelf, vice president of marketing at OneSpin Solutions; Adnan Hamid, CEO of Breker; and Sundari Mitra, CEO of NetSpeed Systems. What follows are excerpts of that conversation.

SE: What’s changing in verification?

Schirrmeister: We’re now using a number of different engines for verification. The key is how you go about connecting those engines. Everything is getting faster. The engines are getting faster. There is more capacity. Now the question is how to make it all smarter. Once you have all the engines connected, what do you do with all the data? That’s the next step beyond the connection of the engines.

Mitra: We’re users of a lot of these tools. What we’re missing, and our challenge at NetSpeed, is two-fold. First, we don’t do SoCs. We do interconnects that are very complex. It’s a coherent fabric, but it’s configurable. So every customer of ours is going to configure that based upon their workloads. This is the backbone of an SoC. It’s almost an infinitely configurable space. How do you tackle verification of this? This is one level of issue we have. The second is that, like every smaller company, we are a systems play. Within the chip we do multi-level hierarchical coherency across chips. Coherency is about state. We could be verifying ad nauseam to convince someone that a chip works. If we did not have the ability to emulate, prototype and connect them at different levels, we would not survive. Having the point tools is fantastic, because it gives you faster turnaround time. But when you are doing anything remotely complex, which is required in every SoC today, we need something more.
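
(For illustration only, not NetSpeed’s actual methodology: a minimal Python sketch of why "coherency is about state" and why a configurable fabric is hard to close on. It enumerates the naive per-cache-line state space of a MESI-style protocol as the fabric is configured with more caching agents, then keeps only the states a single coherence invariant allows; both the protocol model and the invariant are simplified assumptions.)

    # Hypothetical sketch: enumerate the naive per-line state space of a
    # MESI-style protocol as the fabric is configured with more caching
    # agents, then keep only the states a coherence invariant allows.
    from itertools import product

    MESI = ("M", "E", "S", "I")

    def legal(states):
        """At most one Modified/Exclusive owner, and an owner excludes sharers."""
        owners = sum(s in ("M", "E") for s in states)
        sharers = sum(s == "S" for s in states)
        return owners <= 1 and not (owners == 1 and sharers > 0)

    for n_agents in (2, 4, 8):
        combos = list(product(MESI, repeat=n_agents))
        ok = [c for c in combos if legal(c)]
        print(f"{n_agents} agents: {len(combos)} raw states, {len(ok)} legal per-line states")

Even this toy model grows geometrically with the number of agents; a real configurable fabric multiplies that by topology, protocol and workload parameters.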

Brunet: Do we need a point tool or a continuum? We see customers needing both. If one point tool is weak versus another, then having all of these different connected engines is not a real story. At the block level, they care about simulation because simulation is probably the best tool for it. For a full-chip SoC, they’ll need an emulator. For software development, they need FPGA prototyping. So you still need point tools that are best-in-class. Moving back and forth, how do you configure that? Is it dynamic? Is it hardware?

Mitra: There are two levels of configuration. The topology and the architecture are not dynamic, but how you can drill down into the details is dynamic. So when you have data-dependent traffic, that’s different.

Schirrmeister: That’s a unique challenge, though.

Hamid: We’re going more and more toward system-on-chip design. Even the IPs are getting complex. Customers are asking whether they can move the IP with verification to FPGA prototypes. When they’re trying to do integration and test chips, they want to go to faster platforms. The essence of this conversation is how we hook all of these engines together. Portable Stimulus is entirely focused on solving this problem. ‘Let’s not focus on integrating the various verification engines. Let’s come up with the verification intent so we can size the right test case for the right engine.’ And on top of that, vendors are coming up with apps on top of these tools. Anyone building an Arm-based system today has to validate coherency at some level, and it’s a more general problem than just a fabric problem. The ability to transfer between these engines is important, and that’s why this is coming back into focus.
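
(A loose illustration of the idea Hamid describes. Portable Stimulus itself is a domain-specific language, so the Python below is not PSS syntax; the scenario, operation names and emitted stubs are hypothetical. The point is that the verification intent is captured once, and per-engine generators retarget it, for example as a testbench sequence for simulation or emulation versus a bare-metal C test for an FPGA prototype.)

    # Loose Python illustration of the Portable Stimulus idea (not PSS syntax):
    # capture verification intent once as an abstract scenario, then retarget
    # it to different engines by swapping the code generator.
    SCENARIO = [                     # hypothetical intent: DMA copy, then compare
        ("alloc_buffer", {"size": 4096}),
        ("dma_copy",     {"src": "buf0", "dst": "buf1"}),
        ("compare",      {"a": "buf0", "b": "buf1"}),
    ]

    def emit_tb_sequence(scenario):
        """Target simulation/emulation: sketch of a testbench sequence body."""
        body = "\n".join(f"  // drive transaction: {op} {args}" for op, args in scenario)
        return "task body();\n" + body + "\nendtask"

    def emit_bare_metal_c(scenario):
        """Target FPGA prototyping or silicon: sketch of a bare-metal C test."""
        calls = "\n".join(f"    {op}();  /* {args} */" for op, args in scenario)
        return "int main(void) {\n" + calls + "\n    return 0;\n}"

    print(emit_tb_sequence(SCENARIO))
    print(emit_bare_metal_c(SCENARIO))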

Kelf: We have a long way to go. One of the things we see is that, on one hand, companies are traveling down the performance road so they can do more simulation, run more tests and do more things. But there’s a definite shift toward being smarter about verification. How do you take a formal engine and figure out more ingenious ways to try out all of these different states in cache coherency, for example? How can we apply that engine and rely less on just the speed of simulation or emulation? That’s the only way to get through some of the verification challenges, which are going to require the equivalent of 10X to 100X more performance over the next few years as we go to autonomous vehicles and machine learning. If you look at cache coherency, that’s nothing compared to the dynamic complexity of machine learning. Some of the core tools are there in verification, but we still have a long way to go. How do you take a formal engine and set it up so that it can solve some of these much bigger problems? We need to pull those together, and there are people working on that. Portable Stimulus is addressing how we use all of these different engines and apply more complex test scenarios and test patterns to the engines in the way those engines consume them best. If we can solve that problem, connecting these engines makes much more sense. To do that, verification also will have to become much more collaborative between the vendors and the users.
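
(A toy illustration of the kind of state exploration a formal engine automates at scale, not any vendor’s tool: breadth-first reachability over a simplified two-cache MSI model of a single cache line, checking the single-writer invariant in every reachable state. The transition rules are deliberately simplified assumptions.)

    # Toy illustration of the state exploration a formal engine automates at
    # scale: breadth-first reachability over a 2-cache MSI model of one cache
    # line, checking the single-writer invariant in every reachable state.
    from collections import deque

    INIT = ("I", "I")                 # (cache0, cache1) state for one line

    def successors(s):
        """Either cache may issue a load or a store to the line (simplified)."""
        out = set()
        for i in (0, 1):
            other = 1 - i
            if s[i] == "I":           # load miss: fetch Shared, downgrade a remote owner
                ns = list(s)
                ns[i] = "S"
                if s[other] == "M":
                    ns[other] = "S"
                out.add(tuple(ns))
            ns = list(s)              # store: take Modified, invalidate the other copy
            ns[i], ns[other] = "M", "I"
            out.add(tuple(ns))
        return out

    def single_writer_ok(s):
        """If any cache holds the line Modified, every other copy must be Invalid."""
        return "M" not in s or (s.count("M") == 1 and all(x in ("M", "I") for x in s))

    seen, queue, violations = {INIT}, deque([INIT]), []
    while queue:
        s = queue.popleft()
        if not single_writer_ok(s):
            violations.append(s)
        for n in successors(s) - seen:
            seen.add(n)
            queue.append(n)

    print(f"reachable states: {sorted(seen)}, violations: {violations}")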

Schirrmeister: You have configurability as an issue. There is an IP challenge. And there’s an engine you can’t live without. Publicly, the direction is similar. Mentor has a verification platform, which is a collaborative platform with five blocks underneath—formal, emulation, prototyping, virtual and simulation. Synopsys has five blocks at the core of what it calls the Verification Continuum. And then we have the verification suite, which has four core blocks underneath—formal, emulation, simulation and prototyping. Within that there are two main trends. One is being smarter. The other is being faster. So are we hiding anything by integrating? I don’t think so. Every engine needs to have a minimum sufficient level to be able to play. If I have a slow simulator, I have an issue. We are expanding into parallel and multi-core. Without it we wouldn’t be able to play. All the core engines need to get faster. And then you need to be smarter. First, you need to figure out what to do with all the data. Which calculations do you use? And then the connection is important, because none of the engines does everything perfectly. You want the full expressiveness of testbenches in simulation, and then you need the speed of emulation. And then the same is true for emulation and FPGA. You have simulation-like debug, and then you need the speed. It’s like a soccer team: you want the best player at each position on the field, but they all need to work together to win the game.

Brunet: The EDA vendors are going toward those integrated solutions, but I don’t think that’s what customers want. They want to be able to replace a point tool within the flow. It’s very difficult for a customer to replace a simulator from Synopsys or move to an emulator. They know we are locking them into a much bigger picture, and that’s not what they want. They want flexibility. So we have to be careful about this story becoming too big for us, because then we lose control.

SE: One important thing has changed here. Instead of developing one chip, companies are developing lots of heterogeneous pieces. As they put these devices together the interactions are not always predictable. In the past we’ve verified the chip, but not the whole system. How does that change?

Schirrmeister: That’s already changed. Most guys have technologies they’ve developed internally. At some point you move things from a science-project stage into the verification engine. Portable Stimulus is going from a science project to full-blown adoption, where you define the scenarios and make that available for users. But in the past users had custom methodologies to do this. They were verifying that the chip worked with its environment. It was just harder, and it was less exchangeable. So you had a methodology, but you needed to use virtual bridges or speed bridges. It was much harder to set up. The industry is now making this available across engines. That’s where the whole Portable Stimulus effort comes in. The next step is exchanging engines, and that comes down to how you exchange data between the different engines.

Kelf: There’s this idea about continuous performance on a continuous path, and that’s the wrong approach. What’s important is how do we get smart about verification and formal verification to tackle things like coherency in a different way. As we get smarter about verification, different companies will come up with different ideas to figure out different problems. They’re going to demand a widget for safety-critical design or a cache-coherent app. Only one or two vendors will have the manpower to create those things, so it becomes a business question. You should demand open standards for coverage, including Portable Stimulus from different places.

1 comment

Kev says:

“No single tool does everything,..” because the EDA companies are mostly a conglomeration of point tools from startups. After 20+ years of trying to get Verilog-AMS to work properly and more recently SystemVerilog-AMS (multi-abstraction), I can say the EDA companies much prefer not to have a single tool, and would like to sell you multiple things on different licenses. Attempts to better integrate C++ (SystemC with SV) were also rejected in committees dominated by EDA companies.

I’m hoping, now that Mentor no longer needs to play the game that way, they’ll do the integration, but I’m not holding my breath.

Try getting an EDA company to admit their digital simulators can’t do power or handle DVFS and body-biasing, let alone a die-stack with sensors and RF…
