Hybrid Verification: The Only Way Forward

Experts at the table, part 2: Without a paradigm shift in verification, the industry is in a lot of trouble.


Semiconductor Engineering sat down to discuss the state of the industry for functional verification. The inability of RTL simulation to keep up with verification needs is causing rapid change in the industry. Taking part in the discussion are Harry Foster, chief scientist at Mentor Graphics; Janick Bergeron, fellow at Synopsys; Ken Knowlson, principal engineer at Intel; Bernard Murphy, chief technology officer at Atrenta; and Mike Stellfox, fellow at Cadence. In Part One, the panelists discussed the industry challenges and the emerging verification platforms. What follows are excerpts of that conversation.


SE: How are problems such as power verification mapped into an FPGA prototype?

Knowlson: We don’t model the power rail in FPGAs so if I want to do battery management verification then I have to use another platform. That is a significant risk.

Murphy: That is an area where formal can be a strong contributor because you just have too many corners that you have to examine. You have buses crossing multiple power domains, and these are powering on and off, you have security, and the number of options explodes exponentially. You can’t do it all in simulation or even prototypes.

SE: Do we have easy model availability to enable the creation of virtual prototypes?

Foster: We are making some progress. Part of the problem is that a lot of customers hold the view that they can only afford to do two, maybe three, models of the design. The limiting factor is on the customer side. They have to do an RTL model, they may do an FPGA prototype and then they ask: do I have to do a virtual prototype? When we discuss making progress, you need to have customers beating on you.

Bergeron: On the flip side, it pushes us to make it easier for them to adopt. It relies on having a good catalog of IPs, all at the same level of abstraction. It often turns into a consulting project for us. It is often more efficient for them to get us to do it and get them started and then they can run with it.

Knowlson: We do pull in some models, and we’ve run into some interesting things, such as they don’t run so well with IA. We do have a good catalog. We have made a lot of progress in the last few years developing our own database. We’re close to having a pretty good internal database for the products we do. So lots of progress there.

SE: So presumably there’s a formal commitment that you’ve made.

Knowlson: Right.

Stellfox: There are only a few big customers who can invest to create a full set of models for their IP portfolios. I see a lot of customers moving to virtual platforms but they do the minimal amount of modeling that they need to bring it up and get software started. That is also where hybrid is very attractive because you can focus in on the application processor sub-system and use the RTL with a fast FPGA or emulator.

Bergeron: A virtual platform makes a lot of sense for companies that do a lot of derivative designs. They take what they are not changing and slap it on an FPGA board and that is it. The rest stays in the software. Then it is hardened in the emulator, and as blocks harden they are moved onto the prototyping board. This is faster and cheaper.

Knowlson: What about n-1 or using real silicon from a previous generation? Is this more attractive than a virtual prototype? My customers want the fastest system and even a fast virtual platform can be significantly slower than actual silicon. If I am working on a derivative, wouldn’t I use that hooked up to an FPGA for the delta?

Bergeron: If you can do that with the existing hardware, but the problem is when do you deliver the hardware for that new bit. That becomes the challenge. The software is probably a derivative as well.

SE: Doesn’t that depend if you are attempting to perform sub-system or system verification?

Bergeron: Today we can probably say that IP and sub-system verification is probably using simulation, UVM and a coverage-driven verification methodology. Anything above that, system verification, cannot be done using pure simulation. The virtual platform is one part of it. Some companies are there because they have people who want to invest in it, but the VP alone is not sufficient. We need to hook it up to emulation and prototyping boards and sometimes to the simulation.

Foster: We as an industry haven’t come to agreement on terms. I don’t like using the term system just because it means something bigger than a simulator.

Stellfox: We also see UVM being applied to acceleration. Its sweet spot is sub-systems. If you take a PCI Express sub-system, it is pretty slow in simulation but works well in an emulator.

Bergeron: Right, it cranks faster, but in a virtual platform, the way you verify is different. The main emphasis for sub-systems is on simulation, or you accelerate it with hardware, but the process remains the same.

SE: At DVCon this year, the industry acknowledged that it has a hole that needs to get filled. That hole relates to system-level verification and the need for portable stimulus.

Foster: That is true. SystemVerilog is now 11 or 12 years old and it solves a problem that existed 5 years before it emerged. We should be talking about something new.

Murphy: There is a trend where I see a lot more sub-system creation being done in an automated way. ARM just bought Duolog as a way to assemble sub-systems, but there are a lot of captive sub-system assembly efforts around ARM. They put together cores and caches and interrupt controllers and little bits of AMBA in various flavors. They are instantiating technology-specific memories – but in a very recipe-driven way. This is not random – it is a standard sequence of sub-system assembly. Does this simplify the verification problem?

SE: Given that so many designs today are based on IP blocks and are derivative designs, should we not be considering a verification paradigm that no longer concentrates on simulating the leaf level, but elevates the notion of verification?

Stellfox: Even though we have been talking about IP-based design for a long time, the design process that most customers follow is antiquated around the flows they have built up for RTL-based design. There is an opportunity to revamp the entire development flow for putting together SoCs.

Foster: The realization is that there was always a belief that they could scale what they had been doing. It doesn’t work. Suddenly people realized this and are asking: what are we going to do next?

SE: Are we heading towards a paradigm shift in verification?

Foster: Let’s hope so. If we don’t, we have a problem.
