Experts at the Table, part 2: Finding the right balance of performance, visibility, turn-around time and verification objectives.
Semiconductor Engineering sat down to discuss the growing usage of hybrid verification approaches with Frank Schirrmeister, senior group director of product management & marketing for Cadence; Russ Klein, program director for pre-silicon debug products at Mentor, a Siemens Business; Phil Moorby, chief architect for Montana Systems; and Kalpesh Sanghvi, technical sales manager for IP & Platforms at Open-Silicon. Part one can be found here. What follows are excerpts of that conversation.
SE: If the transition between simulation, emulation and FPGA prototyping is getting easier, are we seeing the latter two eat away at simulation?
Schirrmeister: You do different things at different levels. The price difference is still there. You could make the same argument about formal and simulation. We have some users who no longer use simulation for some blocks, and that scares me. But they are comfortable with that. I still need to see things wiggle. But no engine ever goes away, because we are dealing with an NP-complete problem. Simulation can do some things that emulation can’t do. We have not yet taught emulators to do timing, so final tape-out is still gated by gate-level simulation, which needs timing. It is a balance, and we will get smarter about balancing which engines to use when.
Moorby: But you will always have the problem with simulation that you may not be able to run enough stimulus to get to the problems.
Schirrmeister: With emulation and even more so with FPGA, you get the performance but not the debug visibility.
Klein: There are tradeoffs with the engines. Some do certain things well but other things poorly. If you have non-synthesizable constructs in your design, then it is tough to get that into an emulator. Nirvana would be a kind of Turing test for these engines, where I should not care which engine is used so long as I get the necessary results back. But there are times when something does need to be run in a particular engine.
Schirrmeister: And there is the question of when we know we are done. That is a real problem, and you are never really done, so the question needs to be rephrased as, ‘When do you have enough confidence that you will not lose your job if you tape out?’
Moorby: It is an asymptotic function and you will never actually get there.
Sanghvi: We had a design where we used both emulation and simulation. We used simulation for most of the verification suite, but one sub-system needed a lot of use cases to be run on it. So we moved that onto emulation and ran the entire sub-system with different use cases. It also needed a real-world interface, which is available for emulation and FPGA but not for simulation. That is how we decided how to partition the design and enable the verification to be done. We should have gone to emulation earlier.
Moorby: Are you saying that if you know you will end up on the emulator, you shouldn’t touch a software simulator?
Sanghvi: We started with a simulation environment. We just should have made the transition sooner than we did.
Moorby: That means that the software simulator still serves a useful function, even though it goes slower.
Schirrmeister: This is a question of balance. I look after emulation and FPGA products, and that is an interesting balance as well. You have to be careful about what you do where, and when to switch. There are some things I can only do in emulation, but for other things, when I am not looking to do advanced debug, the FPGA gives you the performance. There are customers who use emulation more for software development. Did they buy the emulator for software development? No, but they have it available, and if there are no higher-priority tasks, they make use of it. It runs at about 1MHz, and by using the ARM fast models you get to an effective speed of about 50MHz, depending on the types of things you are doing. In the past we had the issue that it took a long time to bring up an FPGA prototype, and it required a team of 3 or 4 people. Today, we have created more levels: there is emulation; there is FPGA-based emulation; there is FPGA-based prototyping with an automated flow; and then there is a manually optimized FPGA prototype, if you have the necessary resources and need.
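The arithmetic behind that effective-speed figure is a weighted harmonic mean: once most activity moves onto fast models, the residual emulator-bound fraction dominates the runtime. A back-of-the-envelope sketch in Python, with illustrative numbers only, not measurements from any product:

```python
# Hybrid speedup, Amdahl-style. All numbers are illustrative assumptions.
f_fast = 0.985          # fraction of activity assumed to run on fast models
s_emu  = 1e6            # emulator throughput, ~1 MHz
s_fast = 200e6          # assumed fast-model throughput

# Total time is the weighted harmonic mean of the two speeds, so the
# slow (emulator-bound) fraction dominates the effective throughput.
effective = 1.0 / (f_fast / s_fast + (1.0 - f_fast) / s_emu)
print(f"effective speed ~ {effective / 1e6:.0f} MHz")   # ~50 MHz
```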
Klein: There are also cases where the simulator shines. Think about cycle times. If you are writing some RTL and you want to see it run, compiling it and running it in simulation may take 5 to 10 seconds. For emulation, I have to compile, synthesize, place and route it, and get it onto the emulator, and all of that takes time. For a small design that may be a couple of minutes, but consider what that does to your code-writing productivity.
Moorby: This is when you are getting the raw coding correct and just running a few cycles. Then you start the real verification task. When formal came out, they called simulation ‘informal verification,’ and the software team has struggled to even become ‘semi-formal.’ Consider constrained random—when you throw random stuff at a solution, you quickly find bugs, but after a certain time, no matter how much more you do, you don’t make any progress. It is not asymptotic. There is a gap that you never fill, because the bug could exist in the specification and is only found once the product is out. A possible danger with emulation is that you may have done more verification, but you still have not done anything to fill that gap.
Schirrmeister: People are assigning value to the types of bugs they find. What you are describing is the distinction between verification and validation. Validation is making sure the original intent was right, and verification is making sure the spec as written is met. Two different things. That is why we often talk about FPGA prototyping as system validation, because you are now fast enough that you can plug it in and see how it works. There is a set of high-value bugs that are very hard to find; 95% of the bugs are found in the first 20% of your cycles. The challenge with Portable Stimulus is that the types of bugs you find at that level of system integration are so hard to find that, even when the IP is verified, it can take days or weeks to find them. At the SoC level it is harder to find bugs, and so the bug distribution will be very different.
Moorby: All of the tools associated with SystemVerilog are from the era when they were doing a good job. We needed it and it was standardized.
SE: If the verification task is spread across multiple engines, how do you now bring all of the data back to provide a unified model for completeness?
Schirrmeister: There are databases, such as UCIS. In a practical way, this is all about metrics and measurements. We see people being very structured with verification planning, and then you merge coverage from different engines. Again, we are not fully there yet, but we can merge coverage from emulation, simulation and formal. It all ties back to the verification plan. It is also how you find out whether things add to the coverage.
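Conceptually, that merge is a union of per-engine results keyed back to the verification plan. A minimal sketch of the idea; the export format and helper below are hypothetical, and a real flow would read a UCIS-compliant database rather than hand-built dictionaries:

```python
from collections import defaultdict

# Hypothetical per-engine coverage exports: coverpoint name -> hit count.
sim_cov = {"pkt.len_min": 14, "pkt.len_max": 0, "fsm.idle": 210}
emu_cov = {"pkt.len_max": 3, "fsm.idle": 9800, "fsm.flush": 41}
fml_cov = {"pkt.len_min": 1, "fsm.flush": 1}   # formal: proven-reachable bins

def merge_coverage(*runs):
    """Union the coverage of several engine runs by summing hit counts."""
    merged = defaultdict(int)
    for run in runs:
        for coverpoint, hits in run.items():
            merged[coverpoint] += hits
    return dict(merged)

# Tie the merged result back to the verification plan to expose holes.
merged = merge_coverage(sim_cov, emu_cov, fml_cov)
plan = ["pkt.len_min", "pkt.len_max", "fsm.idle", "fsm.flush", "fsm.error"]
for item in plan:
    status = "covered" if merged.get(item, 0) > 0 else "HOLE"
    print(f"{item:14s} {merged.get(item, 0):6d}  {status}")
```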
Klein: We have been working with the UCIS database for a while now, and the intention is that all of these engines will feed their results into common tools that will keep track of them.
Schirrmeister: Customers keep track of how much they spend on simulation, how much on emulation, how much on formal, and which tasks are completed with which engine.
SE: We have talked about model portability, and we have touched on this, but we have not really talked about the evolution across the platforms. There are different degrees of interactivity available on these platforms.
Schirrmeister: The focus of verification changes. The object of verification is functionality. What am I verifying with Portable Stimulus? If it finds bugs in the functionality of the IP, then something has gone wrong; I should have found them before that point. So the object of verification for those tools is the verification of the integration: that the items work together. This is not verifying base functionality. You are verifying that when you integrate them (and the testbench abstraction is part of this), you find things that you would not otherwise have looked for.
Klein: There is a great deal of formality at the block level, because this is what the industry was focused on for a very long time: getting the individual IP blocks verified. The industry did a great job at this. Now we are looking at taking collections of IP and making sure they work together, and there has been less work on figuring out what data needs to be collected and how to define what coverage means in the context of 50 IP blocks. The industry still has to work on formalizing that.
Schirrmeister: The objectives of verification change and you may start to look at performance. Some people are concerned with performance validation when they put everything together. When the blocks are all trying to talk to each other and access memory, is the performance I intended for each block still met?
Klein: And there are new types of analysis that become interesting when you get to the system level. One customer had bus monitors throughout the design, and they ran a lot of software through it. The monitors said that everything was correct, but when they booted Linux, part way through it fell over. The same version of Linux ran on the prototype and on the real board. We collected all of the memory transactions as seen by the hardware, then collected all of the transactions as seen by the software, and when we lined them up we found one that was different. That was the bug, and finding it is different from the types of things we have done in the past.
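The debug technique Klein describes amounts to aligning two transaction traces and reporting the first divergence. A minimal sketch, assuming a hypothetical (address, data) record format for the transactions:

```python
def first_divergence(hw_trace, sw_trace):
    """Walk the two aligned traces and return the first mismatching pair."""
    for i, (hw, sw) in enumerate(zip(hw_trace, sw_trace)):
        if hw != sw:
            return i, hw, sw
    return None  # traces agree over their common prefix

# Transactions as observed by the hardware monitors vs. by the software.
hw_trace = [(0x1000, 0xDEAD), (0x1004, 0xBEEF), (0x1008, 0x0000)]
sw_trace = [(0x1000, 0xDEAD), (0x1004, 0xBEEF), (0x1008, 0xCAFE)]

mismatch = first_divergence(hw_trace, sw_trace)
if mismatch:
    i, hw, sw = mismatch
    print(f"transaction {i}: hardware saw {hw}, software saw {sw}")
```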
Moorby: This reminds me of when we started to talk about top-down design. The promise of having behavioral Verilog was that you could get this working very quickly, and then you had all of these blocks that were interconnected. The idea was that you take one of the blocks and plug in the gate-level version of it; the rest of the system stays the same except for that one block. In reality, you find out that the connections to the rest of the world completely change. The interface abstraction has to be involved with this, and that is where you get these types of subtle bugs. It is the interfaces between the blocks that are the problem. When you are dealing with multiple engines, the complexity is the same — do they talk to each other in the right way?
Related Stories
Hybrid Emulation (Part 1)
Using a single execution engine for verification tasks is quickly becoming the exception as users try to balance performance, accuracy and context.
Hybrid Simulation Picks Up Steam
Using a combination of simulation and emulation can be beneficial to an SoC design project, but it isn’t always easy.
Emulation’s Footprint Grows
Why emulators are suddenly indispensable to a growing number of companies, and what comes next.
FPGA Prototyping Gains Ground
The popular design methodology enables more sophisticated hardware/software verification before first silicon becomes available.