
Top 15 Integrating Points In The Continuum Of Verification Engines

The integration between verification engines is in full swing, with more to come.


The integration game between the different verification engines, dynamic and static, is in full swing. Jim Hogan talked about the dynamic engines that he dubbed “COVE”, and I recently pointed out a very specific adoption of COVE in my review of some customer examples at DAC 2015 in “Use Model Versatility Is Key for Emulation Returns on Investment”.

Here are my top 15 integration points, with a special focus on connecting verification engines, i.e., connecting or using in conjunction virtual prototyping, high-level synthesis, formal verification, RTL simulation, emulation, and FPGA-based prototyping. This list is slightly different from the one covered in the article, “Emulation Uses Increase”.

1. Simulation Acceleration (SA): After In-Circuit Emulation (ICE), the most widely adopted emulation use model, the connection of the verification computing platform and RTL simulation to achieve accelerated execution is the second largest use model. The design under test (DUT) resides in the emulator and the testbench runs on the host. Speed is determined by the host execution of the testbench, and users typically report between 200X and 300X speedup over pure simulation (see the first sketch after this list). Public references of SA applications with emulation are Advantest, AMD, Broadcom, CSR, Freescale, Fujitsu Labs, LSI, Mediatek, Medtronic, NHDIC, NVIDIA, NuFront, Olympus, PMC Sierra, Ricoh, Samsung, Sigma Designs, and ZTE.
2. Simulation/Emulation Hot Swap: While the SA use model runs simulation and emulation in parallel, the hot swap capability is unique. Users can run in one environment for a certain time, stop, and switch to the other. Check out how Broadcom describes this approach for gate-level acceleration: emulation accelerates through the initialization phase with no timing annotations, and then the run hot swaps back to the Incisive simulator to execute with timing once the interesting point in time is reached.
3. Virtual Platform/Emulation Hybrid: Tony Smith of ARM deserves credit for coining the term “Time to Point of Interest” (TPI) when we were working closely with his team to enable engineers to use Fast Models from ARM in virtual platforms connected to emulation, getting to the points of interest faster and accelerating software development. NVIDIA, CSR, Broadcom, and ARM have publicly talked about this use model, giving them up to 50X (ARM) and 200X (CSR) faster OS boot and up to 10X faster software execution once the point of interest is reached (see the TPI sketch after this list). And let’s add Spreadtrum to the list; they will talk on Aug. 13 at CDNLive China about “Hybrid Platform Application in Software Debug”. In addition, the integration point for virtual platforms with Fast Models is not limited to emulation. It works with simulation as well.
4. Multi-Fabric Compilation for Hardware Engines: It took us a bit to get this right, but since last year we have had a multi-fabric compiler that can target both emulation and FPGA-based prototyping for classic ICE applications. Say goodbye to the several months of re-coding, memory re-mapping, and clock juggling that made FPGA-based prototyping a time-consuming and cumbersome task. Engineers can now re-use the same flow for emulation and FPGA-based prototyping, as shown by SRISA, Hitachi and, most recently, Sony. Again, this is focused on ICE-type applications (no SA channel, no emulation-like debug), but we are seeing between 5MHz and 10MHz out-of-the-box speeds while also allowing users to then manually optimize the design for much higher speed. If that optimization takes, let’s say, four weeks, then they have already run over 12 trillion cycles per system in the meantime (assuming 5 MHz; see the cycle-count check after this list).
5. UPF/CPF Low-Power Verification: The ability to run CPF and UPF low-power verification in emulation and RTL simulation, i.e., to verify the correct switching on and off of the various on-chip power domains, is new. My colleague Steve Carlson wrote about the specific results that Samsung achieved, between 5X and 32X speedup, in his blog called “The Power of Big Iron”. Most recently, AMD’s Virtualization Architect Alex Starr talked about this at DAC.
6. Coverage Merge Between Simulation and Emulation: When different verification engines are used, coverage data from the different sources needs to be combined (see the merge sketch after this list). Freescale described this approach in their presentation about coverage-driven validation and in this video. Many of the concepts and ideas have been productized since this was published in 2013, so check out our latest releases.
7. Formal Verification and Simulation: There is a very productive combination of formal verification and simulation for assertions, XProp, and what we call Super Linting. When formal finds an issue, it can be efficiently debugged, and test cases can be created automatically to execute and debug in simulation (see the counterexample-replay sketch after this list).
8. Formal Assertion-Based VIP and Emulation: Given that assertion-based VIP (ABVIP) is synthesizable, it can be executed in simulation as well as emulation, complementing VIP and AVIP. At the interface level this technology, which works like an intelligent proof kit, is used to verify the correctness of individual interface behavior and offers detailed checks and coverage cross-referenced to protocol specifications, like AMBA4 ACE. Combined with verification of cache coherence, tracking cache lines and transactions and concurrently monitoring all master interfaces, the behavior can be checked against the tracked state of the protocol (see the coherence-scoreboard sketch after this list).
9. High-Level Synthesis: Users are moving the abstraction level of verification upwards towards the transaction level. Defining new IP at the transaction level allows verification to run at the transaction level before the design is refined with more detail down to the signal level at RTL. Not only does verification move upward, but completely new flows become possible as well, like low-power analysis for new blocks.
10. Verification IP (VIP) and Accelerated VIP: As outlined by Jim Hogan in his COVE concepts, VIP needs to be re-usable across engines. In a success story, Samsung showed how in the SSD space they could use VIP and AVIP for PCI Express to validate PCIe behavior at the SoC level, integrate and debug host-driver software, and validate end-to-end and bulk DMA transfers. The integration point here is that we have focused on easing the use of VIP and AVIP in conjunction, i.e., being able to re-use test benches for both.
11. Portable Stimulus: This enables reuse and connection points in three ways. There is horizontal reuse of verification across all dynamic engines, even including the actual chip. There is vertical reuse from IP to subsystems to SoCs and the full system. And then there is reuse across user disciplines, from power to coherency experts, and from software to hardware. Check out how ST describes their use at both DVCon and DAC, accelerating test creation by about 20X. Engineers are already generating smart stimuli across virtual platforms, simulation, emulation, FPGA-based prototyping, and even post-silicon.
12. Hardware/Software Debug: The key here is a new approach to post-process debug using root-cause analysis. This allows debug of embedded software running on processors that are implemented as RTL and executed in simulation or emulation, touching several verification engines.
13. Verification Management: For proper metric-driven verification, planning and tracking are crucial. By connecting to simulation, formal verification, and emulation, verification management allows engineers to aggregate regression information and verification metrics. This can support aspects like Functional Safety, as described by Freescale earlier this year.
14. Interconnect Performance Analysis: This involves integrating VIP and RTL simulation to enable performance optimization and verification for the interconnect; see these references from Cadence and ARM. Long runs in the context of software are crucial for proper performance analysis (see the latency/bandwidth sketch after this list).
15. Application-Specific Solutions Cross-Platform: As detailed earlier, this involves tailoring specific solutions, namely for ARM v7/v8-based designs as I described at DAC, Functional Safety, Analog Mixed Signal, and Low Power, and integrating them across the verification engines.
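
A few back-of-the-envelope sketches for some of the items above. First, on item 1: the observation that speed is determined by host execution of the testbench is essentially Amdahl's law. Here is a minimal sketch in Python; the DUT/testbench split and the raw acceleration factor are illustrative assumptions, not measured data.

```python
# Illustrative Amdahl-style estimate for simulation acceleration (SA).
# Assumed split: in pure simulation the DUT consumes 99.6% of wall-clock time
# and the testbench 0.4%; the emulator accelerates only the DUT portion.

def sa_speedup(dut_fraction, dut_acceleration):
    """Overall speedup when only the DUT portion is accelerated."""
    tb_fraction = 1.0 - dut_fraction
    return 1.0 / (tb_fraction + dut_fraction / dut_acceleration)

# Even with a DUT engine that is 100,000X faster, the host-side testbench
# limits the overall gain to roughly the 200X-300X range users report.
print(round(sa_speedup(dut_fraction=0.996, dut_acceleration=100_000)))  # ~249
```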
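
On item 3, a rough feel for “time to point of interest”: the cycle counts and engine speeds below are assumptions for illustration only, but they show why running the OS boot on Fast Models before handing over to the emulator shrinks the wait to the interesting point by roughly the factors quoted above.

```python
# Illustrative time-to-point-of-interest (TPI) comparison.
# Assumed numbers: a 50-billion-cycle OS boot, RTL in the emulator at 1 MHz,
# and ARM Fast Models in a virtual platform running the boot at 100 MHz.

BOOT_CYCLES = 50e9
emulator_hz = 1e6       # assumed RTL-in-emulation speed
fast_model_hz = 100e6   # assumed Fast Model speed during the boot phase

tpi_emulation_h = BOOT_CYCLES / emulator_hz / 3600
tpi_hybrid_h = BOOT_CYCLES / fast_model_hz / 3600

print(f"boot in emulation alone: {tpi_emulation_h:.1f} h")       # ~13.9 h
print(f"boot in the hybrid:      {tpi_hybrid_h:.2f} h")          # ~0.14 h
print(f"TPI improvement: {tpi_emulation_h / tpi_hybrid_h:.0f}X")  # 100X
```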
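
On item 4, the “over 12 trillion cycles” figure is easy to verify, assuming the out-of-the-box prototype runs continuously during the four weeks of manual optimization:

```python
# Quick check of the cycle-count claim in item 4: an FPGA-based prototype
# running out of the box at 5 MHz, assumed to run continuously for four weeks.

clock_hz = 5e6
seconds = 4 * 7 * 24 * 3600      # four weeks

print(f"{clock_hz * seconds:.2e} cycles")   # ~1.21e13, just over 12 trillion
```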
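
On item 6, the core of a coverage merge is taking the union of the bins each engine has hit and scoring the combined result against one verification plan. The sketch below is engine-agnostic; the bin names and per-engine results are invented for illustration and do not reflect any particular coverage database format.

```python
# Minimal sketch of merging functional coverage from several engines.
# The plan bins and per-engine hits are invented for illustration only.

plan_bins = {"reset_seq", "burst_rd", "burst_wr", "pwr_down", "pwr_up", "err_resp"}

hits_by_engine = {
    "simulation": {"reset_seq", "burst_rd", "err_resp"},
    "emulation":  {"burst_rd", "burst_wr", "pwr_down"},
    "formal":     {"err_resp"},
}

merged_hits = set().union(*hits_by_engine.values())

for engine, hits in hits_by_engine.items():
    print(f"{engine:<10} {100 * len(hits & plan_bins) / len(plan_bins):5.1f}% of plan")
print(f"{'merged':<10} {100 * len(merged_hits & plan_bins) / len(plan_bins):5.1f}% of plan")
print("still missing:", sorted(plan_bins - merged_hits))
```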
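
On item 7, one way to picture the formal-to-simulation handoff is that a counterexample from the formal tool is just a cycle-by-cycle list of input values, which can be replayed as a directed test in simulation. The trace and the emitted test format below are invented for illustration and are not tied to any specific tool.

```python
# Conceptual sketch: turn a formal counterexample (a per-cycle list of input
# values) into a directed stimulus listing that a replay testbench could consume.
# The trace contents and output format are invented for illustration.

counterexample = [
    {"req": 1, "addr": 0x40, "wr": 0},
    {"req": 1, "addr": 0x40, "wr": 1},   # write to an address with a read in flight
    {"req": 0, "addr": 0x00, "wr": 0},
]

def trace_to_testcase(trace):
    """Emit one 'drive' line per cycle of the counterexample trace."""
    lines = []
    for cycle, inputs in enumerate(trace):
        assignments = " ".join(f"{name}={value}" for name, value in inputs.items())
        lines.append(f"cycle {cycle}: drive {assignments}")
    return "\n".join(lines)

print(trace_to_testcase(counterexample))
```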
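
On item 8, “checking behavior against the tracked state of the protocol” can be pictured as a scoreboard that follows each cache line across all master interfaces and flags illegal combinations, such as two masters holding the same line in a unique state at once. The sketch uses a simplified MESI-style model and invented transactions; it illustrates the idea and is not the ABVIP implementation.

```python
# Simplified cache-coherence scoreboard: track the state of each cache line
# per master and flag an illegal combination (two "unique" owners at once).
# States and the transaction sequence are a simplified MESI-style illustration.

from collections import defaultdict

line_state = defaultdict(dict)   # address -> {master: state}

def observe(master, address, new_state):
    """Record a snooped state change and check the cross-master invariant."""
    line_state[address][master] = new_state
    unique_owners = [m for m, s in line_state[address].items() if s in ("M", "E")]
    if len(unique_owners) > 1:
        raise AssertionError(
            f"coherence violation at 0x{address:X}: unique owners {unique_owners}")

observe("cpu0", 0x1000, "E")   # cpu0 reads the line exclusively
observe("cpu0", 0x1000, "S")   # cpu0 is snooped down to shared...
observe("cpu1", 0x1000, "S")   # ...when cpu1 reads the same line
observe("cpu1", 0x1000, "M")   # cpu1 upgrades the line to modified

try:
    observe("cpu0", 0x1000, "M")   # illegal: cpu1 still holds the line modified
except AssertionError as err:
    print("caught:", err)
```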
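
On item 14, performance analysis over long runs essentially reduces per-transaction timestamps collected at the interconnect ports to latency and bandwidth figures. The transaction records below are made-up examples with no relation to any specific VIP log format.

```python
# Sketch of interconnect performance analysis from transaction records.
# Each record is (start_ns, end_ns, bytes); the values are invented examples.

transactions = [
    (100, 180, 64),
    (120, 260, 64),
    (300, 350, 32),
    (310, 420, 64),
]

latencies = [end - start for start, end, _ in transactions]
total_bytes = sum(size for _, _, size in transactions)
window_ns = max(end for _, end, _ in transactions) - min(start for start, _, _ in transactions)

print(f"average latency: {sum(latencies) / len(latencies):.1f} ns")   # 95.0 ns
print(f"worst-case latency: {max(latencies)} ns")                     # 140 ns
print(f"throughput: {total_bytes / window_ns:.2f} bytes/ns")          # 0.70 bytes/ns
```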

The original graphic of the Cadence System Development Suite had only four blocks. We have developed it quite a bit since then, with six core engines and several cross-engine solutions (like the last five in my list), as indicated in this graphic.

Since Cadence introduced the System Development Suite back in 2011, we have made great progress. Our competition joined the party just last year, introducing the Enterprise Verification Platform and the Verification Continuum, which is why it is a great time to point out what already works in our environment, given that we have been at it for four years.

As you can see, integration between the verification engines is in full swing. Expect more as we go forward. One of the big-ticket items to look out for is the automation of integration in general. For example, for interconnect verification automation, IWB already automatically generates test benches targeting different platforms (IES, PXP). This will become increasingly important as re-use across platforms grows.

[Graphic: SDS2015-WithConnections, the Cadence System Development Suite with its cross-engine connections]


