Leveraging recorded signal activity from previous full-chip simulations accelerates time-to-debug.
There is no doubt that design reuse is essential for today’s massive system-on-chip (SoC) projects. No team, no matter how large or how talented, can design billions of gates from scratch for each new chip. From the earliest days, development teams have leveraged existing gate-level designs and register transfer level (RTL) code whenever possible. The emergence of the commercial intellectual property (IP) business in the 1990s took design reuse to an entirely different level. SoCs often contain tens of thousands of blocks licensed from IP suppliers, electronic design automation (EDA) vendors, silicon foundries, and development partners. Using proven design IP speeds up the project schedule, saves resources, and reduces risk.
Verification reuse also occurs, but it is less effective and more limited. Passive simulation testbench components such as monitors and scoreboards are usually easy to migrate. Active components that generate stimulus can sometimes be reused in similar designs, especially when they conform to the requirements and guidelines of the Universal Verification Methodology (UVM). But UVM provides little help when moving between levels of the design hierarchy. When blocks are connected into subsystems or at the top level of the chip, most of the interfaces that were visible in standalone testbenches are buried within the design, and SoC testbenches provide stimulus only for external interfaces.
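To make the contrast concrete, consider a minimal UVM sketch (all interface, transaction, and class names here are hypothetical). A passive monitor only observes its virtual interface, so it ports to any level of the hierarchy where that interface remains visible; an active sequence must drive the interface through a driver, which is impossible once the interface is buried inside the SoC.

  // Hypothetical names throughout (axi_if, axi_txn, is_write, etc.)
  class axi_monitor extends uvm_monitor;
    `uvm_component_utils(axi_monitor)
    virtual axi_if vif;                  // observation only: portable to any level
    uvm_analysis_port #(axi_txn) ap;     // feeds scoreboards passively

    function new(string name, uvm_component parent);
      super.new(name, parent);
      ap = new("ap", this);
    endfunction

    task run_phase(uvm_phase phase);
      forever begin
        axi_txn t;
        @(posedge vif.clk);              // sample on each clock edge
        t = axi_txn::type_id::create("t");
        // ... capture vif signals into t ...
        ap.write(t);
      end
    endtask
  endclass

  class axi_write_seq extends uvm_sequence #(axi_txn);
    `uvm_object_utils(axi_write_seq)

    function new(string name = "axi_write_seq");
      super.new(name);
    endfunction

    task body();
      // Active stimulus: requires a driver on an exposed interface.
      // Once the block is buried inside the SoC, there is nothing
      // for this sequence to drive.
      `uvm_do_with(req, { req.is_write == 1; })
    endtask
  endclass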
Minimal component reuse between chip and IP testbenches leads to three significant challenges during verification and debug. When a full SoC test fails, the simulation team tries to identify the block most likely to be the source of the failure. The designers and verification engineers working on that block know little about the complete design or UVM testbench, so they would much rather debug the failure in a standalone testbench. Limited verification reuse means that there is no simple way to replicate the full-chip error scenario at the block level. Considerable manual effort is usually required, including writing new tests to drive the active testbench components, which do not exist in the chip-level environment.
If the suspect block is commercial IP, the situation is even worse. The SoC team knows little about the internal workings of the block, especially if it is encrypted, and so wants to hand the failure scenario to the IP supplier for debug. However, neither the full chip design nor its testbench can be shared with the supplier due to proprietary content and IP licensed from other sources. It is hard for a user with limited knowledge of the IP or a supplier with no access to the complete design to debug the failure. It can take weeks of back-and-forth work to track down a bug in the IP block or an issue with the way that it is being used in the SoC.
Trying to debug the full chip is also challenging because top-level simulations compile and run much more slowly than standalone testbenches. Many modern designs require long initialization sequences before the differentiated portion of each test. These sequences might involve loading initial values into registers or memory, training adaptive interface protocols, and running built-in self-test (BIST). It is costly and inefficient to execute the same initialization sequence for every SoC test. Running the sequence once and leveraging it for the rest of the tests would be an effective form of verification reuse.
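As a rough sketch of how that might look in the testbench (the initialization task is a placeholder, and save-restore mechanics are tool- and version-specific, so the $save checkpoint shown here is an assumption to be checked against the VCS documentation):

  module tb_top;
    // Placeholder for the real initialization work: register and
    // memory loads, interface training, BIST, and so on.
    task automatic run_initialization();
      // ...
    endtask

    initial begin
      run_initialization();
      // Checkpoint the fully initialized simulation state. Subsequent
      // tests restore from this snapshot instead of re-running the
      // sequence, so only the first run in the regression pays the cost.
      $save("post_init.chk");
    end
  endmodule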
The third challenge is reuse of assertions. Assertion IP (AIP) can run in simulation as well as in formal engines, but it is often developed by a separate team dedicated to formal analysis. To ensure consistency between the two verification approaches, AIP may be added to simulation runs later in the project. This typically requires many re-compilations and re-runs to debug and resolve any differences. Verification reuse during the development and validation of the AIP would save time and resources.
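For a flavor of what such AIP looks like, here is a minimal SystemVerilog Assertions sketch (module, instance, and signal names are assumed). A handshake checker written once can be proven by a formal engine and then bound, unchanged, into simulation:

  // Simple assertion IP: every request must be acknowledged within
  // one to four cycles. The same module runs in simulation and formal.
  module req_ack_aip (
    input logic clk,
    input logic rst_n,
    input logic req,
    input logic ack
  );
    property p_req_ack;
      @(posedge clk) disable iff (!rst_n) req |-> ##[1:4] ack;
    endproperty

    a_req_ack: assert property (p_req_ack)
      else $error("req not acknowledged within 4 cycles");
  endmodule

  // Attach the checker without modifying the design source; dut_block
  // and its port names are assumed.
  bind dut_block req_ack_aip u_req_ack_aip (
    .clk(clk), .rst_n(rst_n), .req(req), .ack(ack)
  );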
Addressing these three challenges requires significant new capabilities beyond traditional tools. The technique of waveform replay has emerged as a more efficient way to debug simulation tests, leveraging recorded signal activity from previous full-chip simulations to accelerate time-to-debug over manual workflows. It eliminates the manual effort of replicating at the block level a bug found at the SoC level, allows initialization sequences to be skipped for all but the first test in a regression run, and enables assertions to be added to simulations without rerunning the complete tests. These capabilities are available today with Verdi IDX (Intelligent Debug Acceleration), part of the Synopsys Verdi Automated Debug System.
When SoC tests are run in the Synopsys VCS simulator, the waveforms can be saved in a fast signal database (FSDB) file. Verdi IDX reads in the IP-level FSDB file along with user-provided setup and mapping files, reruns the simulation using its tight integration with VCS, and generates new FSDB files and various reports that the SoC team passes to the designers. These files are safe to provide to an IP vendor, since they contain only information internal to the block and nothing proprietary from the rest of the chip. The result is a huge increase in debug efficiency.
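For reference, FSDB capture in a VCS run uses Verdi’s standard dumping tasks. A minimal sketch (the hierarchy path is an assumed example) that scopes the dump to the suspect block looks like this:

  module soc_tb;
    // ... SoC DUT instantiation elided ...

    initial begin
      $fsdbDumpfile("soc_test.fsdb");
      // Depth 0 dumps all levels below the given scope. Restricting
      // the scope to the suspect IP instance keeps the file small and
      // captures nothing from the rest of the chip.
      $fsdbDumpvars(0, soc_tb.u_soc.u_subsys.u_suspect_ip);
    end
  endmodule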
Verdi IDX also solves the challenge of SoC simulations with long initialization sequences. The FSDB file from a simulation that includes initialization, combined with simulation save-restore, allows subsequent simulations to skip the initialization. Since all tests after the initial simulation execute in much less time, each regression run is accelerated. When developing or using AIP, waveform replay allows SoC tests to simulate with the assertions but without the testbench. This benefits simulation, since the assertions provide additional correctness checks, and benefits formal analysis, since the AIP is validated against the testbench.
Many SoC teams have found that verification reuse is challenging, encrypted IP is hard to debug, full-chip simulations run too slowly, and AIP is not used to its full benefit. Waveform replay provides an effective solution to all these challenges by enabling automated reuse of simulated waveforms captured in FSDB files. A white paper is available with more details on how Verdi IDX works. This technique has been shown to provide time and resource savings of 10x or more on multiple real-world projects. It is readily available now to design and verification teams around the world.