What Really Matters: User Care-Abouts In Hardware-Assisted Verification

Getting rid of bugs as fast and efficiently as possible and improving confidence in correctness is critical.


By Frank Schirrmeister
Sports analogies often work well and, most certainly, they do for electronics development. When I recently ran across the VISA advertisement featuring Dick Fosbury's win in the high-jump competition at the 1968 Summer Olympics, I had to smile because it reminded me of hardware-assisted verification (I know, I know…twisted, you might say). Just as Fosbury changed the world of high jump forever, with athletes switching from the straddle to the flop technique, processor-based emulation has changed the world of hardware-assisted verification forever, as I have written before.

In recent months, there has been a lot of talk about emulation, and some grandiose claims have been made about some of the emulators. Unfortunately, some of the discussion loses sight of what users are actually trying to achieve. To stay true to the high-jump analogy, I took a couple of minutes and looked for analyses of the different techniques and some quantitative assessments. Well, there are some quantitative analyses out there, like model technique analysis sheets and even a detailed comparison of the straddle and the flop. They look at the effort needed to perform the jump, the best number of steps, the arm techniques used, the effort to learn the methods, etc. But in the end there is only one goal: to jump over a bar set as high as possible. All the effort going into the jump is just a means to an end.

Similarly, some of the discussion today suggests that issues like an emulator's power consumption, footprint, and even weight ought to be primary decision criteria. They are most certainly part of the equation, but arguments centered on them miss what the user's end goal is all about. Had taking fewer steps or a less curved approach to the bar been the primary decision criteria for Fosbury, the straddle would probably still be the dominant high-jump technique. Similarly, would you buy a car that consumes less gasoline but does not get you where you need to be?

So what is the end goal of using hardware-assisted verification? Listening to users, it is all about the ability to remove bugs as fast and efficiently as possible, getting the design to the point where confidence in its correctness is high enough that tape-out can be done. It all boils down to the loop of: (1) design bring-up, (2) execution of tests, (3) efficient debug, (4) bug fix, and back to step 1 to confirm that the bug has indeed been removed. This cycle is beautifully shown, for example, at the beginning of a presentation by Alex Starr of AMD, followed by a description of how processor-based emulation offers the right debug productivity to efficiently remove bugs.


User Care-Abouts for Prototyping & Emulation


Earlier this year I was on a prototyping panel at DATE in Grenoble, together with Synopsys, Xilinx, and Infotech. Bipin R.R., general manager of the Hi Tech Business Unit at Infotech, showed a slide of the care-abouts they consider when choosing the right prototyping engine (see above). High on the list were "execution speed" – how fast a test can be performed, "bring-up cost" – how fast RTL can be mapped into a hardware platform, "debug insight" – how well the design can be observed (both hardware and software), and "execution control" – how well a design can be stopped for debug. At that panel, Ivo Bolsens of Xilinx and I also discussed the advantages of the debug loop in processor-based emulation during Xilinx's development of the Zynq platform. There is actually a good overview in a Xilinx presentation from DAC 2012: at 6:45 in "From RTL to Success with Emulation," Peter Ryser of Xilinx compares the different techniques, and at 8:55 he describes how debug is done in emulation after bugs have been found in FPGA-based prototyping – emphasizing the need for debug insight into the hardware.

So is debug productivity all there is? It is certainly important, but it is not the only factor. It is all about delivering the right product at the right time, free of bugs and with the best possible quality. Looking at the customer care-abouts that enable this, in our experience three categories stick out and really matter.

First, "Productivity" is a more general aspect, not limited to execution speed alone. It includes emulation capacity, debug visibility (as mentioned above), the number of users that can use the emulator in parallel, available memory capacity, and bring-up time. Features like debug trace depth and the number of possible parallel users matter: an offering whose trace buffer is too small, or that is accessible to only a fraction of the users, needs to run multiple times to gather the same amount of debug information. That very quickly eliminates any alleged advantage in power consumption, and it takes much longer as well.

Second, "Predictability" is important and relates to partitioning and the compile time required. Processor-based emulation is very predictable thanks to its compiler, while FPGA-based prototyping and FPGA-based emulation face challenges due to complex FPGA routing and interconnect issues.

Third, "Versatility" relates to the range of tasks that can be executed with hardware-assisted verification. This includes the choice of target designs (SoCs and CPUs), APIs to RTL and TLM simulation, the availability of transactors and rate adaptors to connect to real-world interfaces, advanced verification features like coverage, and native support for verification languages. It also includes support for acceleration use models (transaction-based, signal-based, in-circuit acceleration, and gate-level acceleration) and emulation use models (in-circuit, synthesizable testbenches, dynamic power analysis, and HW/SW co-verification).

To achieve these three – productivity and predictability leading to better time to market, and versatility optimizing the investment in hardware-assisted verification – users have to weigh the cost associated with these results. That cost includes the acquisition of the hardware itself as well as operating costs like power consumption and the lab cost of providing space. Just as Fosbury's primary care-about was clearing a bar set ever higher, at the end of the day the primary customer interest in hardware-assisted verification is systems on silicon delivered on time, with the right features and with as many bugs removed as possible.

As described above, “debug productivity” is a key aspect to that, but not the only one. I will provide more details on the three key categories of productivity, predictability, and versatility in future blog posts.

—Frank Schirrmeister is group director for product marketing of the System Development Suite at Cadence.