The Next Big Shift In Verification

The requirements for the next era of verification have been set. It should be an interesting 2015.


We are coming to the end of the year—have you started your Christmas shopping list yet?

For us bloggers, it is time for predictions about what the next year will bring in EDA technology. Three core trends will shape 2015—even more closely connected verification engines, innovations in hardware-assisted development, and software as a driver for verification.

All three core trends are related to the verification of today’s systems on chips (SoCs). How did verification evolve? Our CTO for the System and Verification Group, Ziv Binyamini, just gave a keynote this week at the Haifa Verification Conference, musing about the past of verification and charting a course for where we are going. We are approaching the third wave of verification, “Verification 3.0” so to speak. How did we get here?

In the early days of verification, the “Stone Age,” directed testing dominated. Design and verification engineers, at the time a still-developing species, built simple ad-hoc testbenches and wrote tests by hand. The approach did not scale: more verification simply meant more engineers. As a result, good quality was difficult to achieve, and confidence that everything had been verified was even harder to come by.

In sync with the era of heavy IP reuse, sometime in the late 1990s to early 2000s, the era of hardware verification languages (HVLs) began. Specific verification languages such as VERA, e, Superlog, and eventually SystemVerilog fundamentally changed the verification landscape. Methodologies followed, including the Verification Methodology Manual (VMM), the Open Verification Methodology (OVM), and later the Universal Verification Methodology (UVM). In this era, constrained-random stimulus automated test creation, and coverage metrics were introduced to measure progress toward closure. This level of automation let users scale verification by generating more tests automatically, and it made HVL-based approaches ideal for exhaustive “bottom-up” IP and subsystem verification.
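
To make the principle concrete, here is a minimal conceptual sketch in C (deliberately not SystemVerilog, e, or any real HVL library) of what constrained-random stimulus plus coverage closure boils down to: randomize transactions within legal constraints, record which coverage bins have been hit, and stop once the coverage goal is met. All names here (bus_txn_t, randomize_txn, the burst-length bins) are illustrative assumptions, not part of any actual methodology.

```c
#include <stdio.h>
#include <stdlib.h>
#include <stdbool.h>

#define NUM_BURST_BINS 4   /* coverage bins for burst lengths 1, 2, 4, 8 */

typedef struct {
    unsigned addr;     /* constrained to a legal, word-aligned 4 KB window */
    unsigned burst;    /* constrained to a power-of-two burst length       */
    bool     write;
} bus_txn_t;

/* "Constrained-random": pick values only from the legal stimulus space. */
static bus_txn_t randomize_txn(void) {
    bus_txn_t t;
    t.addr  = (rand() % 1024) * 4;          /* 0..4092, word aligned */
    t.burst = 1u << (rand() % NUM_BURST_BINS);
    t.write = rand() & 1;
    return t;
}

int main(void) {
    bool burst_bin_hit[NUM_BURST_BINS] = { false };
    int  bins_hit = 0, txns = 0;

    /* Keep generating stimulus until every coverage bin has been hit. */
    while (bins_hit < NUM_BURST_BINS) {
        bus_txn_t t = randomize_txn();
        txns++;
        int bin = (t.burst == 1) ? 0 : (t.burst == 2) ? 1
                : (t.burst == 4) ? 2 : 3;
        if (!burst_bin_hit[bin]) {
            burst_bin_hit[bin] = true;
            bins_hit++;
        }
        /* drive_txn_to_dut(&t);  -- hypothetical hook into a testbench */
    }
    printf("coverage closed after %d random transactions\n", txns);
    return 0;
}
```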

So what’s next? And why is there a “next” in the first place? The objects to be verified, modern SoCs, have evolved. They now contain many IP functions, from standard I/Os to system infrastructure and differentiating IP. They include many processor cores, both symmetric and asymmetric, both homogeneous and heterogeneous. Software executes on these processors, from core functionality such as communication stacks, through infrastructure components like the Linux and Android operating systems, all the way to user applications.

We had lively discussions earlier this year at the industry conferences DVCon (“Big Shift In SoC Verification”) and DATE (“Future SoC Verification Methodology: UVM Evolution or Revolution?”), mostly about whether future SoC verification methodologies are a UVM evolution or really require a revolution. The answer is that UVM will not go away; it works well for “bottom-up” IP and some subsystem verification, and it will continue to be used for these applications. However, UVM does not extend to the new approaches needed for “top-down” SoC-level verification. The two main reasons are software and verification re-use between execution engines.

When switching from bottom-up to top-down verification, the context changes. In bottom-up verification, the question is whether the block or subsystem behaves correctly in its SoC environment. In top-down verification, the correctness of the integrated IP blocks themselves is assumed, and verification shifts to scenarios describing how the SoC behaves in its system environment. An example scenario may look as follows: “Take a video buffer and convert it to MPEG4 format with medium resolution via any available graphics processor, then transmit the result through the modem via any available communications processor and in parallel decode it using any available graphics processor and display the video stream on any of the SoC displays supporting the resulting resolution.” Beyond sequencing how the hardware blocks in the system interact, this scenario clearly involves a lot of software.
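
To give a flavor of what this looks like in practice, here is a hypothetical sketch of how such a scenario might be expressed as a software-driven test running on one of the SoC’s processors. Every function below (gpu_claim_any, mpeg4_encode, modem_transmit, and so on) is an assumed placeholder that simply prints what a real test would do; none of it is a real driver or API. The point is that the scenario is ordinary software coordinating SoC resources, with the transmit and decode-and-display branches running in parallel after encoding.

```c
#include <stdio.h>
#include <pthread.h>

#define RES_MEDIUM 720

/* Stubbed SoC services: a real software-driven test would call into
 * bare-metal drivers or a hardware abstraction layer here. */
static int  gpu_claim_any(void)               { puts("claim a free graphics processor");        return 0; }
static int  comm_cpu_claim_any(void)          { puts("claim a free communications processor");  return 1; }
static int  display_claim_supporting(int res) { printf("claim a display supporting %dp\n", res); return 2; }
static void mpeg4_encode(int gpu, int res)    { printf("GPU%d: encode video buffer to MPEG4 @%dp\n", gpu, res); }
static void modem_transmit(int cpu)           { printf("CPU%d: transmit MPEG4 stream via modem\n", cpu); }
static void mpeg4_decode(int gpu)             { printf("GPU%d: decode MPEG4 stream\n", gpu); }
static void display_show(int disp)            { printf("DISP%d: show decoded video\n", disp); }

/* "transmit the result through the modem via any available communications processor" */
static void *branch_transmit(void *arg) {
    (void)arg;
    modem_transmit(comm_cpu_claim_any());
    return NULL;
}

/* "... and in parallel decode it ... and display the video stream" */
static void *branch_display(void *arg) {
    (void)arg;
    mpeg4_decode(gpu_claim_any());
    display_show(display_claim_supporting(RES_MEDIUM));
    return NULL;
}

int main(void) {
    /* "Take a video buffer and convert it to MPEG4 ... via any available graphics processor" */
    mpeg4_encode(gpu_claim_any(), RES_MEDIUM);

    /* Run the two remaining branches of the scenario in parallel. */
    pthread_t tx, disp;
    pthread_create(&tx,   NULL, branch_transmit, NULL);
    pthread_create(&disp, NULL, branch_display,  NULL);
    pthread_join(tx, NULL);
    pthread_join(disp, NULL);
    return 0;
}
```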

This is where traditional HVL-based techniques run up against their limits. They do not extend well to the software that is key to defining scenarios. Scenarios need to be represented in a way that can be understood by a variety of users, from SoC architects, hardware developers, and software developers to verification engineers, software test engineers, and post-silicon validation engineers. And as modern SoCs have grown in complexity, all available engines, from virtual platforms through RTL simulation, acceleration, and emulation to FPGA-based prototyping, as well as the prototype chip once back from production, need to be used continually for verification. Cadence put forward this vision back in 2011 with the System Development Suite, which combines virtual prototyping, RTL simulation, emulation, and FPGA-based prototyping into a set of connected engines. It has since grown to include formal verification as a pillar engine, connects to high-level synthesis, and uses verification IP (VIP) as well as debug across all engines. Mentor Graphics announced the Enterprise Verification Platform in April 2014, and Synopsys followed in September 2014 with the Verification Continuum.

[Figure: The eras of verification]

Key Cadence innovations in the System Development Suite include the joint infrastructure for Incisive simulation and Palladium acceleration, hot-swap capability for software-based simulation, and a unified front end for Palladium emulation and Protium FPGA-based prototyping that allows one compiler to target multiple fabrics. There is more to come in 2015.

Some of the key requirements for top-down, scenario-based verification are clear from the limitations of traditional HVL-based approaches. First, system use-case scenarios need to be comprehensible to a variety of users to allow efficient sharing. Second, the resulting verification stimulus needs to be portable across the different verification engines, and even to the actual silicon once it is available, enabling horizontal re-use; software executing on the processors in the system, which we call software-driven verification, is the most likely candidate. Third and finally, the next wave of verification needs to allow both the IP integration and the IP’s operation within its system context to be tested, i.e., vertical re-use. Ziv Binyamini’s keynote closed with the attached graph, nicely summarizing the past and where we are going from here. You can find additional insights from Ziv and Mike Stellfox here and here.
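
To make the second requirement, horizontal re-use, a bit more concrete, below is a minimal sketch of how a software-driven test can stay portable from simulation and emulation all the way to silicon: the scenario itself is plain C, and only a thin reporting layer changes per target. The TARGET_* macros, the mailbox address, and the pass/fail convention are all illustrative assumptions, not any particular vendor’s mechanism.

```c
#include <stdint.h>

#if defined(TARGET_SIMULATION) || defined(TARGET_EMULATION)
/* Pre-silicon: write the result to a memory-mapped "mailbox" the testbench
 * watches. The address is an assumed placeholder. */
#define RESULT_MAILBOX ((volatile uint32_t *)0x40000000u)
static void report_result(uint32_t code) { *RESULT_MAILBOX = code; }
#else
/* On an FPGA prototype or silicon this would typically go out over a UART;
 * here we just use printf so the sketch also runs on a host machine. */
#include <stdio.h>
static void report_result(uint32_t code) { printf("TEST RESULT: 0x%08x\n", (unsigned)code); }
#endif

/* The verification scenario itself never changes between engines. */
int main(void) {
    uint32_t status = 0;            /* 0 = pass in this illustrative convention */
    /* ... run the use-case scenario here ... */
    report_result(status);
    return (int)status;
}
```

In this style, the same test source (ideally the same compiled image) serves RTL simulation, acceleration/emulation, FPGA-based prototyping, and post-silicon validation, which is exactly the horizontal re-use described above.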

The requirements for the next era of verification—Verification 3.0—are set. It will be an interesting 2015!



1 comment

Tudor Timi says:

You didn’t mention anything about Accellera’s portable stimulus initiative. Does this mean that Cadence isn’t really committed to it?
