SPONSOR BLOG

Observation Post

A technologist’s perspective on DVCon 2013, and where the changes are happening.


By Pranav Ashar
After attending the 2013 Design and Verification Conference (DVCon) in San Jose, Calif., I have compiled notes as both an observer and a panel participant. Here are my observations:

Wally Rhines, CEO of Mentor Graphics, gave the keynote presentation, "Accelerating EDA Innovation Through SoC Design Methodology Convergence." Logically and effectively he made the case that SoC integration is much better understood today as the IP assembly regimen has taken hold. He also addressed another new aspect of SoC realization, the additional layers of specification and definition beyond base digital functionality that must be in place before the chip is taped out, noting that power management, for one, has become universal in SoCs. He also pointed to multi-engine verification as a key enabler, along with application-specific verification.

I was interested in Wally’s remarks because, in my view, it has become the norm for SoC design companies to compete largely on execution of integration based on similar reference designs. The performance, power efficiency, and feature set of the SoC depend on the timely delivery of aggressive integration rather than on novel individual component blocks. One consequence is that verification concerns are moving to the interfaces between IP blocks and away from unit-block internals. For example, clock-domain crossing (CDC) verification, a key concern as large numbers of diverse blocks are glued together, has become a signoff requirement in all SoCs today.
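To make the CDC concern concrete, here is a minimal sketch (with illustrative names, not tied to any particular tool) of the structure a CDC tool looks for at every control-signal crossing, the standard two-flop synchronizer; a crossing of the same signal without it is what gets flagged:

    // Single-bit control signal crossing from a source clock domain into
    // the clk_b domain. CDC structural analysis recognizes this two-flop
    // synchronizer idiom; a crossing without it would be reported.
    module sync_2ff (
      input  logic clk_b,    // destination clock
      input  logic rst_b_n,  // destination-domain reset, active low
      input  logic d_a,      // signal launched in the source domain
      output logic q_b       // synchronized copy in the clk_b domain
    );
      logic meta;            // first stage, allowed to go metastable
      always_ff @(posedge clk_b or negedge rst_b_n)
        if (!rst_b_n) {q_b, meta} <= 2'b00;
        else          {q_b, meta} <= {meta, d_a};
    endmodule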

I think Wally’s note about multi-engine verification as a key enabler rings true, and it is my view that it and application-specific verification actually go hand-in-hand. As soon as the verification concerns become well defined, it becomes easier for various verification engines to communicate with each other and benefit from each other’s strengths.

I’ve witnessed the benefits of combining static structural analysis with formal analysis to derive the spec automatically and in higher-resolution debug. I also see links between static techniques and simulation as low-hanging fruit yet to be deployed in design flows. For example, in the context of clock-domain checking and X-management, static analysis can be used to instrument the testbench for finer-resolution monitoring and coverage, to set up constraints for stimulus generation, to create scenarios in simulation that recreate verification soft spots, and to pass down results derived in static analysis.
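As a sketch of what that instrumentation can look like (module, signal, and instance names here are hypothetical), a monitor derived from static CDC analysis can be attached to the RTL with a SystemVerilog bind, so that simulation reports crossing problems with much finer resolution than a downstream symptom would:

    // Hypothetical monitor emitted from static CDC analysis: it checks in
    // simulation that the data bus feeding a crossing does not change in
    // the destination-clock cycle in which the synchronized enable
    // captures it.
    module cdc_stability_mon #(parameter W = 8) (
      input logic         clk_dst,
      input logic         en_sync,   // enable after synchronization
      input logic [W-1:0] data_src   // data bus being crossed
    );
      a_data_stable: assert property (
        @(posedge clk_dst) en_sync |-> $stable(data_src)
      );
    endmodule

    // Attach the monitor to the crossing the static analysis identified,
    // without modifying the design source.
    bind fifo_ctrl cdc_stability_mon #(.W(8))
      u_cdc_mon (.clk_dst(rd_clk), .en_sync(wr_en_sync), .data_src(wr_data));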

It’s also my view that DFT, SDC timing constraints and initialization/reset add layers of specification, functionality and complexity to the SoC that must also be addressed during verification. While these SoC realization issues make verification harder, they are essentially orthogonal verification concerns. Until recently, all we had to work with was the separation between timing and functionality: simulation would verify functionality and static timing analysis would verify timing. While revolutionary in the ’90s, the functionality-timing separation has run its course in the SoC era. It does not provide enough fine-grain resolution of verification concerns to make a meaningful dent in complexity, and newer verification concerns like asynchronous clock-domain crossings sit at the intersection of functionality and timing, requiring specialized techniques that combine notions of both to verify behavior effectively.

My takeaway from the keynote is that we now have the ability to identify orthogonal verification concerns at a much finer granularity and that this separation has come about based on how SoCs are put together. The verification community needs to take this phenomenon to heart and use it in process definition and tool development. Some of this is already happening (CDC verification, connectivity verification and a few other examples), but much more is possible.

The panel
The Wednesday morning panel, “Where does design end and verification begin?” included John Goodenough of ARM, Harry Foster of Mentor Graphics, Oren Katzir of Intel, Gary Smith of Gary Smith EDA, and me, and was moderated by Brian Hunter of Cavium. Several panelists mentioned that an application-oriented verification approach has given static techniques a much more significant impact than in the past. They said it is becoming possible for designers to provide meaningful closure (signoff) on a number of concerns before the design is handed over to the verification team. The biggest takeaway I heard from the audience was an appreciation of how much verification is possible prior to simulation.

I agree that this idea of application-specific verification gives the EDA community the opportunity to develop tools that provide complete solutions to the well-defined verification concerns, rather than just tools that provide raw technologies that the user must then figure out how to apply. These complete solutions are, in fact, a synthesis of various technologies that have been around for many years, but their synthesis is enabled by the narrow scope of the verification concern being addressed. For example, structural analysis and formal analysis work very well together in addressing the CDC verification problem. Smart structural analysis provides a baseline analysis of the design in the context of the narrow circuit idioms used to implement clock-domain crossings and creates a formal specification for the formal analysis tools to work on. The narrow scope of each verification concern along with the bringing together of multiple technologies in the solution allows many of these verification tasks to be completed pre-simulation (i.e. statically). This is an important, but not widely acknowledged, enhancement to existing design and verification processes.
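As a concrete illustration (signal names are illustrative, not from any specific tool), when structural analysis finds a multi-bit bus such as a FIFO pointer crossing domains through per-bit synchronizers, the formal obligation it typically hands to the proof engine is that the bus is gray-coded, i.e., that it changes by at most one bit per source-clock cycle:

    // A formal obligation a CDC flow might derive for a multi-bit crossing:
    // the pointer must change by at most one bit per source-clock cycle,
    // otherwise per-bit synchronization can capture an inconsistent value.
    a_gray_coded_ptr: assert property (
      @(posedge wr_clk) disable iff (!wr_rst_n)
        $countones(wr_ptr ^ $past(wr_ptr)) <= 1
    );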

The papers
A number of papers were presented in the technical sessions.

Stuart Sutherland, self-described SystemVerilog Wizard and Consultant, presented Paper 6.2, titled “I’m Still In Love With My X! (But, do I Want My X to be an Optimist, a Pessimist, or Eliminated?)” He discussed the topic of managing verification complexity in the presence of unknowns (X’s) in simulation. In my view, this once manageable problem now consumes much more of the verification budget in the SoC era, as power management has become universal, as functional reset has become more complex, and as each block designed into a chip is farther removed from the system-level constraints and spec. As a result, X-management has become a separate verification concern of sign-off proportions.

Some background: X’s in simulation can lead to missed bugs in RTL. This concern causes the verification team to simulate with X’s at the gate level. The downside, apart from the reality that gate-level simulation is very slow, is that X’s in gate-level simulation cause the simulation to be pessimistic, i.e., for more X’s to be propagated than is truly necessary. The verification engineer must now first figure out whether the X in gate-level simulation is genuine before figuring out whether there is a bug in the design. In short, the gate-level simulation solution for managing X’s does not scale to the SoC era.
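The classic illustration of why RTL simulation can miss these bugs in the first place is X-optimism in an if/else (a minimal sketch):

    // X-optimism: if 'sel' is X in RTL simulation, the if condition
    // evaluates as false, the else branch is taken, and 'q' silently gets
    // 'b'. The uninitialized select is hidden. In gate-level simulation
    // the same situation tends to be handled pessimistically instead,
    // flooding downstream logic with X's.
    always_comb begin
      if (sel) q = a;
      else     q = b;
    end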

Static analysis at the register-transfer level has the potential to highlight RTL code that could cause RTL simulation to miss X-related bugs. The application-specific solution finds the X-related bugs by formal analysis of X propagation, and then instruments the testbench so that X-related issues are highlighted with finer resolution in RTL simulation.
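A simple form of that instrumentation (signal names are illustrative) is an assertion placed on a signal the static X-propagation analysis has shown can carry an X into state-holding logic:

    // Flag any unknown value, at the point where it matters, on a control
    // signal that the static analysis identified as able to reach state.
    a_no_x_on_ctrl: assert property (
      @(posedge clk) disable iff (!rst_n) !$isunknown(ctrl_bus)
    ) else $error("X detected on ctrl_bus at time %0t", $time);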

In my view, this can be a much more efficient way to find these bugs and to take a big load off the verification team by removing yet another reason to simulate at the gate level and use up valuable verification work-hours.

Note that X-management and reset analysis are joined at the hip, because many of the X’s in simulation come from uninitialized flip-flops and, conversely, the pitfalls of X’s in simulation compromise the ability to arrive at a clear understanding of the resetability of the design.

A poster paper (1P.7) titled “Using formal techniques to verify SoC reset schemes” from MediaTek and Mentor Graphics touched on the topic of how reset schemes have become much more complex as diverse components are integrated into an SoC and as reset network optimization becomes a first-order goal. This aligns with the trend that a well-scoped narrow verification concern can effectively leverage the power of formal analysis.
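As a sketch of the kind of obligation such a formal reset analysis can discharge (the signals here are hypothetical, not from the paper), consider an ordering rule between a subsystem’s local reset and the global reset it depends on:

    // One kind of obligation in a reset-scheme proof: the local reset of a
    // subsystem must not be released while the global reset it depends on
    // is still asserted.
    a_reset_release_order: assert property (
      @(posedge clk) $rose(local_rst_n) |-> global_rst_n
    );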

Session 9 addressed the topic of verification solutions for power management and Paper 9.2 titled “Power aware verification strategies for SoCs” by Cypress Semiconductor was quite comprehensive. It highlighted the benefits of using a common power management spec for verification and design, and the use of RTL power management verification. The paper provided a complete list of power management concerns and tips to find bugs early. I found it to be an excellent primer for an EDA company looking to build a definitive feature list for its power management verification product.
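To give a flavor of the kind of power-management check that can be applied at RTL (signal and parameter names here are illustrative, not taken from the paper), isolation behavior around a switchable domain can be expressed as simple assertions:

    // Illustrative power-management checks: while the domain's power
    // switch is off, its isolated output must be clamped to the agreed
    // value, and isolation must already be enabled when power is removed.
    a_iso_clamp: assert property (
      @(posedge clk) !pwr_on |-> (dom_out == CLAMP_VAL)
    );
    a_iso_before_off: assert property (
      @(posedge clk) $fell(pwr_on) |-> iso_en
    );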

A few papers addressed verification concerns arising from the seemingly simple matter of assembling IP blocks together to build an SoC.

Paper 3.2 titled “Using formal verification to exhaustively verify SoC assemblies” from ST-Ericsson and Mentor Graphics highlighted these verification concerns and viable solutions. With the variety of configurations each IP block can be used in, there is potential for errors that, if not found early, can confound the subsequent full-chip verification. Getting this right upfront is an important part of a design/verification process of clean hand-offs and systematic progress.

Poster paper P1.6 titled “A reusable, scalable formal app for verifying any configuration of 3D IC connectivity” from Xilinx and Cadence highlighted the benefits of formal connectivity checking in the 3D chip context where the assembly challenge is even greater.

What’s happening here?
Connectivity checking of control/data signals, clocks, resets, DFT signals and power management signals is an example of a basic early verification step that represents a narrow verification concern with a disproportionate impact on overall verification productivity. A combination of static structural and formal analyses works effectively and almost automatically in this step. Such checking is essentially about making sure that the design is set up correctly before further verification is performed.
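In practice, these connectivity obligations reduce to point-to-point equivalences that a formal tool can discharge exhaustively. A minimal sketch, with hypothetical hierarchy and signal names:

    // Check, from a checker bound at the top level, that a DFT control
    // reaches a subsystem unmodified. Hundreds of such obligations can be
    // generated from the connectivity intent and proven before any
    // simulation is run.
    a_scan_en_conn: assert property (
      @(posedge clk) top.dft_ctrl.scan_en == top.cpu_subsys.scan_en
    );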

As a result of navigating the verification process by verification concern, much of the assertion-development battle has been won or sidestepped. A lot of the formal specification is now implicit in the verification concerns being addressed. As many verification tools and papers at this conference highlighted, the derivation of formal checking obligations for CDC, SDC timing constraints, power management, DFT, basic setup/connectivity, and X-management/resetability is now largely automatic, based on the RTL or collateral code that must be written anyway for the design.

On the other hand, pinning down checking obligations on the functional front has been a little more of a reach. Even so, good progress is being made. For example, Real Intent pioneered the automatic extraction of checks based on typical RTL idioms to highlight basic malfunctions. Deadlocked or unreachable FSM states and transitions, non-performing nets, registers and expressions, unreachable control-flow branches, X-assignments and so on serve as excellent warnings of design bugs. These checks can be extracted automatically from the RTL and checked formally before simulation. Cadence, Mentor Graphics, Jasper and Atrenta all provide similar functionality with varying degrees of check resolution and analysis efficacy.
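As a sketch of the kinds of checks that can be derived from RTL idioms without a hand-written spec (state names are illustrative), reachability and recovery properties on an FSM are typical:

    // Automatically derivable checks on an FSM coded in the RTL: a failing
    // cover exposes a dead state, and the assertion flags an error state
    // the machine can enter but never leave within a bounded window.
    c_err_reachable: cover property (@(posedge clk) state == ERR);
    a_err_recovers:  assert property (
      @(posedge clk) (state == ERR) |-> ##[1:16] (state == IDLE)
    );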

Paper 3.1 titled “Using formal verification to validate legal configurations, find design bugs and improve testbench and software specifications” from Xilinx and Cadence highlighted the benefits of extracting checks automatically from the RTL and checking them formally prior to simulation. I thought they did a good job explaining the benefits of this approach.

Atrenta had a tutorial titled “Achieving visibility into the functional verification process using assertion synthesis,” describing their assertion synthesis approach. Jasper, Synopsys, Cadence and Mentor Graphics also claim to provide this ability. In theory, simulation output is an excellent resource to mine for deriving assertions, coverage targets and constraints.

Further inroads into automatic assertions that capture higher-level intended functionality are still a work in progress in the EDA community. The generated assertions are still of low quality, requiring much manual review. Even so, these automatically generated assertions do play a useful role in providing nontrivial insight into the design. In my view, this is a step, even if incremental, that moves in the right direction. It is better to provide a list and hints to the verification engineers than to rely entirely on their creativity and generally incomplete/imperfect design knowledge to come up with assertions.

A further refinement of the automatic functional-assertion generation process is to limit assertion generation to a narrower scope and to better-understood design idioms, with a separation of concerns such as protocol compliance and data integrity. This approach was seen in Paper 2.1, titled “Overcoming AXI Asynchronous Bridge Verification Challenges with AXI Assertion-Based Verification IP (ABVIP) and Formal Datapath Scoreboards,” by ST-Ericsson and Cadence.
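To give a flavor of the protocol-compliance half of that split (a sketch only, not taken from the ABVIP itself), the basic AXI handshake rule, that a valid must be held with a stable payload until the corresponding ready, is exactly the kind of obligation such verification IP encodes:

    // AXI write-address channel handshake rule: once AWVALID is asserted
    // it must remain asserted, with a stable address, until AWREADY.
    a_awvalid_held: assert property (
      @(posedge aclk) disable iff (!aresetn)
        (awvalid && !awready) |=> awvalid && $stable(awaddr)
    );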

Paper 1.3, titled “On verification coverage metrics in formal verification and speeding verification closure with UCIS coverage interoperability standard,” was presented by Jasper. The Unified Coverage Interoperability Standard (UCIS) is an important initiative to get static and simulation-based technologies to communicate more effectively. The paper provided a good introduction to the benefits of UCIS-based back-and-forth communication between assertion-based formal verification and testbench-oriented simulation. It pointed to the low-hanging fruit of letting the simulator know about targets that formal analysis has proven unreachable, so that simulation cycles are not wasted trying to cover them, and, vice versa, of letting the formal analysis know of targets the simulator has already reached. Further efficiencies and process improvements are to be had based on finer-resolution interpretation of the formal and simulation results. Ultimately, UCIS allows a combined presentation to the user of results from simulation and formal analyses.

The trend
Finally, the extent to which static analysis and application-specific verification have taken hold was underscored by the tutorial themes. As many as four tutorials covered the topic:

• Tutorial 3: “Low power design, verification and implementation with IEEE 1801 UPF” from Mentor Graphics.
• Tutorial 8: “Achieving visibility into the functional verification process using assertion synthesis” from Atrenta.
• Tutorial 9: “A formal approach to low power verification” from Jasper.
• Tutorial 10: “Pre-simulation verification for RTL sign-off” from Real Intent, Calypto and DeFacTo covered static techniques for lint, automatic functional formal, X-management and resetability analysis, SDC constraints checking and management, clock-domain checking, power management optimization and verification, and DFT verification.

What I heard across these four tutorials confirmed just how firmly static analysis and application-specific verification have taken root.

—Pranav Ashar is the chief technology officer at Real Intent.


