Hybrid Verification: The Only Way Forward

Experts at the table, part 1: Combining virtual prototypes with emulation and FPGAs is the only way to perform SoC verification—and they have to be in a single environment.


Semiconductor Engineering sat down to discuss the state of the industry for functional verification. The inability of RTL simulation to keep up with verification needs is causing rapid change in the industry. Taking part in the discussion are Harry Foster, chief scientist at Mentor Graphics; Janick Bergeron, fellow at Synopsys; Ken Knowlson, principal engineer at Intel; Bernard Murphy, chief technology officer at Atrenta and Mike Stellfox, fellow at Cadence. What follows are excerpts of that conversation.


SE: Last year, a similar roundtable was held with many of the same verification experts. During that session, the panelists believed that, as a whole, the industry was doing well in serving the needs of their customers for verification. How much has changed over the past year?

Foster: In terms of getting chips out there, while there is room for improvement, we are treading water. There have been no significant improvements over the past year, but there has been a growing recognition that for large designs, emulation is a necessity. People are beginning to get the wheels going in that area. We are responding with ways to do this efficiently and cost effectively.

Bergeron: We have reached a point where we recognize that there is a need for system verification. It used to be an arbitrary label, but now, if it doesn't fit into a simulator, if the problem is too large to be practical, it becomes a system. This is verified using heterogeneous systems including virtual platforms, emulation, FPGA prototyping and maybe, in some special cases, bringing in the simulator. This is a new level of verification that requires a different set of tools, and techniques such as constrained random do not apply at all. We are learning along with our customers.

Knowlson: I am a systems engineer and what I am seeing is a new demand for production software validation. We are struggling with that.

Murphy: There is an increasing focus on SoC verification. At DVCon, everyone was saying that SoC verification is the mess in the middle and we don't have a fixed methodology for it, but several companies have been doing this for years. They apply more static methods to check that the chip is hooked up correctly. They are not checking bandwidth, just connectivity.
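A connectivity check of this kind boils down to comparing the connections extracted from a design against a table of intended hookups. Here is a minimal sketch of the idea in Python; the pin names and connection format are invented for illustration and do not reflect any vendor's tool:

```python
# Minimal sketch of a static connectivity check: diff the connections
# extracted from the design against a golden specification table.
# The pin names and data format below are invented for this example.

# Intended connections: (source pin, destination pin)
spec = {
    ("uart0.irq", "intc.in[3]"),
    ("dma0.req",  "arb.req[0]"),
    ("cpu.axi_m", "noc.s0"),
}

# Connections extracted from the RTL netlist, e.g. by tracing nets
extracted = {
    ("uart0.irq", "intc.in[3]"),
    ("dma0.req",  "arb.req[1]"),   # miswired: landed on the wrong port
    ("cpu.axi_m", "noc.s0"),
}

for src, dst in sorted(spec - extracted):
    print(f"MISSING:    {src} -> {dst}")       # intended but absent
for src, dst in sorted(extracted - spec):
    print(f"UNEXPECTED: {src} -> {dst}")       # present but not intended
```

No simulation is involved, which is why such checks scale to full-chip hookup where dynamic verification struggles.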

Stellfox: We are seeing new applications for formal in places that were not traditionally associated with formal. These are target applications such as connectivity, register access and security. There are many customers who are struggling and not keeping up, especially in software integration and putting together a cohesive system. Part of this is driving the hardware acceleration business. This is driving new flows and causing a lot of innovation within EDA.

SE: Simulator performance has suffered since processors migrated to multi-core. Simulators are not getting much faster.

Foster: Right. There are classes of design that you can’t partition to make use of multi-core. Amdahl’s Law speaks. Historically, the emulator was treated as something behind a closed door and required a special team and facilities. That was not cost-effective. The change is that it is being viewed as a resource in a data center. Now I can have multiple users accessing it and we will see this becoming a continuum from simulation to emulation.
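For reference, Amdahl's Law caps the speedup available from $N$ cores when only a fraction $p$ of the workload parallelizes:

$$S(N) = \frac{1}{(1 - p) + p/N}$$

Even with $p = 0.5$ (an illustrative figure, not a measurement), $S(N)$ never exceeds 2 no matter how many cores are added; event-driven RTL simulation of tightly coupled designs tends to sit at low $p$, which is why extra cores buy so little.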

Stellfox: We see the same thing. Customers are driving the growth and there are increasing use models. Ten years back they were only using in-circuit emulation, but now it is a corporate resource used just like a simulation farm. They can run lots of jobs, different types of jobs. Hardware acceleration is also becoming more central to SoC verification, just because of performance. There remains the question of how emulators will keep up. There is no single silver bullet. Emulation is one piece of the puzzle. It will always be combined with other things. Hybrid solutions are very popular at the moment. A virtual platform running the software can execute at 100MHz plus, and combined with an accelerator running at 2MHz it provides effective speeds high enough to look at software integration issues. There is a lot of headroom to be gained from finding clever ways to combine virtual platforms, simulation, acceleration and FPGAs.
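As a back-of-envelope illustration of that headroom (the 90/10 split below is an assumed figure, not one from the discussion): if a fraction $\alpha$ of the run can stay on the virtual platform at rate $R_{vp}$ and the rest must execute on the accelerator at $R_{acc}$, the effective rate is the weighted harmonic mean

$$R_{\mathrm{eff}} = \left( \frac{\alpha}{R_{vp}} + \frac{1 - \alpha}{R_{acc}} \right)^{-1}$$

With $\alpha = 0.9$, $R_{vp} = 100$ MHz and $R_{acc} = 2$ MHz, this gives $R_{\mathrm{eff}} \approx 17$ MHz, roughly 8x the accelerator alone, which is why offloading even a modest share of the run to the virtual platform pays off.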

Foster: The only way for this to really work is to abstract out the verification process and the underlying engine. How do I create stimulus that can be targeted to everything, including silicon? How can I have metrics that can be abstracted up through the verification process? Up until now, we had individual processes for each of the engines.

Knowlson: I get beat up almost on a daily basis for having too many platforms—a virtual platform, an FPGA, emulation and dozens of models. The software guys only want to support one, or maybe they will support three, but how are you going to deal with it? We don't know how to achieve binary compatibility between the platforms. How do you make the VP look the same as the emulator? They shouldn't care which system they are interacting with. Even if it is a hybrid system, they shouldn't have to care except for time – how long it takes to get a result.

SE: What are the three platforms?

Knowlson: Virtual platform: register-accurate, functional untimed models for software development, because it is fast. I would like to have a fast, timing-accurate model, but these are too slow. The software developers want to be able to boot in two minutes. I am lucky if I can get a hybrid platform to boot in 10 minutes. Then I would like to go to FPGA, but this is hard to bring up. This means I have to go to emulation and do some initial validation in that platform—bring that up to a level where it is functional enough so that I have confidence it will work and I can go to FPGA.
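One way teams keep software from caring which platform it talks to is a thin, backend-independent access layer. The sketch below is a generic illustration in Python; the class and method names are invented for the example and are not any product's API:

```python
# Sketch of a uniform register-access interface so the same test or
# firmware harness can target a virtual platform, an emulator, or an
# FPGA board. All names here are invented for illustration.
from abc import ABC, abstractmethod

class Platform(ABC):
    """Backend-independent view of the design under test."""
    @abstractmethod
    def read32(self, addr: int) -> int: ...
    @abstractmethod
    def write32(self, addr: int, value: int) -> None: ...

class VirtualPlatform(Platform):
    """Register-accurate, untimed functional model."""
    def __init__(self):
        self.regs = {}
    def read32(self, addr):
        return self.regs.get(addr, 0)
    def write32(self, addr, value):
        self.regs[addr] = value & 0xFFFFFFFF

class EmulatorBridge(Platform):
    """Forwards accesses to an emulator over some transport."""
    def __init__(self, transport):
        self.transport = transport  # e.g. a socket to the emulation host
    def read32(self, addr):
        return self.transport.request("read", addr)
    def write32(self, addr, value):
        self.transport.request("write", addr, value)

def smoke_test(dut: Platform):
    # Identical code regardless of which engine is underneath; only
    # the wall-clock time to get the result differs.
    dut.write32(0x4000_0000, 0xDEADBEEF)
    assert dut.read32(0x4000_0000) == 0xDEADBEEF

smoke_test(VirtualPlatform())
```

Binary compatibility in the strict sense is harder than this, since the software stack still sees timing and peripheral differences, but a common access layer at least keeps the test code single-sourced.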

Murphy: What methods are provided for debug in these environments? In an emulator you have triggers and can compile in assertions, but these are limited in scope, even though you can have many of either. So is there a better way to do debug in these environments?

Stellfox: We don’t have these issues in our platform. We provide full visibility, and we have been trying to abstract the verification process to be independent of the engine. To do this, the underlying engines need to support all of the debug capabilities you are used to in simulation. With software, you need to be able to debug it fully synchronized with the hardware. If you single step some low level firmware, you want to be able to stop and look at the state of the hardware. Assertions are a great debug mechanism and many of our customers use them extensively.

Murphy: We do assertion synthesis and I can imagine putting a lot of assertions in IPs and having them trigger when something goes wrong. Historically there have been limitations with this.

Foster: It is hard to tell the difference between simulation and emulation today. But debug remains a problem with FPGA prototyping, or even post-silicon, and yet that is something people want to do.

Knowlson: We are capacity-limited within an FPGA. I would like to be able to go to a hybrid FPGA environment for development, because firmware developers think emulation is too slow, and also to be able to use the FPGA for IP development. I would like to use the FPGA environment to find an issue and then switch over to a more visible debug environment.

Bergeron: That is the approach we favor. Having a hardware engine that gives you the same visibility as software comes at a cost. The flip side is that you can sample certain regions and reconstruct the intermediate events. If you need full visibility you can get a snapshot from the simulator, and there is a switch to tell it to behave exactly the same as it would in the hardware. Normally, the event ordering is optimized for speed, but in this case it will replicate the behavior of the hardware.

Knowlson: Yes, this is a problem because the RTL in the FPGA is not the same. I don’t have as many clocks.

Foster: And it is an additional problem if you have to partition across multiple FPGAs.

Stellfox: The approach we take is that we use the same compiler for FPGA and emulator, so once you have brought up a design on the emulator you have reduced the time to bring up the FPGA. You still have limited visibility, so when you have a problem you can quickly replicate that in an emulation environment with full visibility, although it is slower.

Knowlson: That is a challenge. What if it took me an hour on an FPGA to get to an issue? How long will it take in emulation?
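The arithmetic is sobering. With assumed clock rates (neither figure is from the discussion), an FPGA prototype at 10 MHz covers

$$10^{7}\ \text{cycles/s} \times 3600\ \text{s} = 3.6 \times 10^{10}\ \text{cycles}$$

in that hour; replaying the same cycle count on an emulator at 2 MHz takes $3.6 \times 10^{10} / (2 \times 10^{6}) = 1.8 \times 10^{4}$ seconds, or five hours, and proportionally longer at lower emulation clocks. That is one reason checkpoint-and-restore, rather than re-running from reset, matters so much in these flows.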


