Re-using Common Simulation Set-Up Processes To Speed Regression

A new reuse methodology shortens drawn-out design verification cycles.


Functional verification of an SoC always involves some kind of setup process. For complex SoCs in particular, this initial setup phase often consumes anywhere from 20% to 90% of each test's total simulation time. And thousands of tests are run in the verification of a design.

This setup phase may consist of executing the exact same sequence of simulation steps, or of programming the design into the same initialization or reset state. Because each test typically repeats this identical process, a great deal of time is wasted going through the setup phase for every single test run. Remarkably, this is standard procedure.

It would be far more efficient, and save an enormous amount of time, to avoid repeating the same setup phase for all of these thousands of tests. That is the motivation behind the approach set forth in this article. By eliminating repeated reruns of the initial setup phase across individual tests, our methodology delivers a large throughput gain for big, complex design verification systems.

Simulation setup reuse methodology
Once you have identified a set of common simulation steps for a set of regression tests, our methodology flow looks like this:


Figure 1: High-level description of the steps to refactor the common initial setup phase and run different tests
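
To make this concrete, here is a minimal sketch of the refactored flow, assuming a Questa-style checkpoint/restore interface (the checkpoint command and the vsim -restore option). The testbench name, run length, checkpoint file, and test list are all illustrative, and how each run selects its test-specific stimulus after the restore point depends on the testbench architecture (the whitepaper shows working examples):

    # Run the common setup phase once and checkpoint the simulation state.
    vsim -c top_tb -do "run 2 ms; checkpoint setup_done.cpt; quit -f"

    # Each test restores the saved state and simulates only its own
    # stimulus, skipping the shared setup phase entirely.
    for t in test1 test2 test3; do
      vsim -c -restore setup_done.cpt -l $t.log -do "run -all; quit -f"
    done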

This methodology works for HDL designs; i.e., Verilog, SystemVerilog, VHDL, UVM, or any combination thereof. It generally cannot be used with designs containing SystemC code, except (with certain restrictions) where SystemC wrapper models are layered over Verilog/VHDL design units to connect with co-simulation or emulation environments.

Checkpointing and restoring a design state has been difficult to use reliably in extremely large and complex systems that involve multiple HDL languages, C/C++ models, IP, DPI, SystemC, PLI/FLI/VPI, etc. This methodology can be partially automated with the help of the simulator, and it requires only initial setup and minimal modification by the designer to work on existing, suitable designs.

You can also checkpoint and restore the design state on different machines and/or on grid machines, which allows you to integrate the methodology into your existing grid flow. However, you should ensure that the machine where the design state is checkpointed and the machine where it is restored have similar OS specifications. Different operating systems may map low-level system libraries to different memory locations, which can cause issues during state restore. Grid systems (such as LSF and SGE) typically provide controls that let you submit jobs to similar types of machines (such as SLES11 or RHEL7 hosts) and prevent such issues from occurring.
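
For example, with LSF you could pin both the checkpoint job and the restore jobs to hosts running the same OS image. This is a sketch only; the resource name and script names are illustrative, since LSF resource definitions are site-specific:

    # Checkpoint the setup phase on an RHEL7 host...
    bsub -R "select[rhel7]" run_setup_checkpoint.sh

    # ...and ensure every restore job lands on a compatible RHEL7 host.
    bsub -R "select[rhel7]" run_test_from_checkpoint.sh test1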

This flow can also be used in co-simulation or emulation environments, provided the other side of the simulator connection can handle checkpoint and restore of the design state in its own domain. The simulator handles the checkpoint/restore just as it does in pure simulation mode. This speeds up the setup phase on the pure simulation side, so you reach the interesting traffic-generation portion on the emulator quickly and get more done in the same verification cycle.

Some designs and flows may still contain complications and complexities that prompt a user to continue verifying in their existing way. But many designs can benefit, and you can save substantial simulation cycles by spending some initial time setting up and deploying this methodology.

Results of deploying the methodology
This methodology has been deployed successfully at many large design houses using the Questa simulator from Mentor, a Siemens Business. We will share data from one of these customer deployments.

The design was a UVM-based SoC with C/C++ IP models and a PHY component. The design had around 1,000 tests in one regression suite, with each test taking about 10 hours to run. The initial setup phase accounted for approximately two hours of each test. Hence, the total serial time to run the regression was roughly 10,000 hours. The regression runs on a grid system, however, so taking 20 parallel machines into consideration, the ideal regression throughput would be approximately 500 hours.


Figure 2: Regression flow of a large SoC, before refactoring the common setup phase of the tests

We worked with the customer design team to make the small number of changes required for the flow to work properly. The design team was then able to shorten every test by approximately two hours (by eliminating the repeated common setup phase), and they achieved a regression throughput on the 20 grid machines of approximately 402 hours: one two-hour setup run, plus 1,000 tests × 8 hours ÷ 20 machines. That is roughly a 20% savings on their regressions. Proportionally higher regression throughput improvement is possible for designs where the initial setup phase consumes a larger percentage of total simulation time.


Figure 3: Regression flow after separating out the common setup phase reused by all tests, yielding a 20% savings in regression time

Using a simulator’s automation for this methodology, you should be able to make minimal modifications to your design testbenches and easily integrate the methodology into your existing design verification environments. The methodology can also be deployed in co-simulation and emulation environments. Although the Questa simulator was used in the real-world example and test output referenced in this article, it should be possible to use this methodology with other industry simulators as well.

Please download the whitepaper, Boosting Regression Throughput by Reusing Setup Phase Simulation, for greater detail on this big time saver.

In the full paper we identify the types of designs that are appropriate for this methodology and what you can do to make your design suitable for it. We also explain the constraints that must be followed, the design factors that might prevent you or your verification team from adopting the methodology, and how to overcome them. You will also learn about the requirements for co-simulation (simulation-emulation) verification environments and how to make the methodology work with either simulation or co-simulation. Finally, we include several code snippets that illustrate the methodology, along with full code for two example tests.


