Graphing Toward Standardization

The industry seems unified when it comes to needing new technologies for system-level verification, but have we explored the possibilities enough before standardizing on one?


Graph-based verification has become the hot topic of the day. It commanded a lot of attention at the recent DVCon. It promises to fix many of the problems plaguing functional verification, provide an automated way to perform system-level verification, enable portability of tests between simulation, emulation and prototyping, reduce the waste created by applying constrained-random test generation to the Universal Verification Methodology (UVM) and SystemVerilog, and much more. No wonder several companies are innovating as fast as they can in this area and see great promise in the emerging technologies.

Given such a powerful message, one could be forgiven for thinking that this technology was enjoying explosive growth and sweeping the industry by storm. But alas this is not true. It is still in its early days, and only the most progressive of design companies have been using or evaluating this technology so far. In the parlance of Geoffrey Moore, this is a technology that has not yet crossed the chasm.

In a press release dated March 4th, Mentor Graphics proposed that a new Accellera standards committee be formed to investigate the standardization of a graph-based test specification. To jump-start the effort, Mentor offered to donate its existing graph-based test specification format.

In response, the Accellera board proposed that a broader scope beyond graph-based testbenches would be more beneficial. Instead, it approved the formation of a Portable Stimulus Specification Proposed Working Group (PWG) to study the validity of, and need for, a portable stimulus specification. Accellera has created a Web page to explain this further.

Why do we need another solution?
“There is no doubt we will need additional tools, languages and methodologies beyond what exists today,” says Mike Stellfox, technical leader of the Verification Solutions Architecture Team at Cadence. “The standard testbench languages and tools today, SystemVerilog and e, were designed for constrained-random, coverage-driven ‘bottom-up’ verification of hardware and are best suited for IP and subsystem verification. These languages, and the tools that support them, are not well-suited for SoC ‘top-down’ use-case verification, where the DUT consists of both the hardware and the software for controlling the hardware, so clearly there is room for innovation in this space.”

At issue is rising complexity, which is turning verification into a huge challenge. “The Universal Verification Methodology (UVM) seems to be working reasonably well at the IP level and for small chips or subsystems,” says Thomas L. Anderson, vice president of marketing for Breker Verification Systems, “but the gap occurs for full-chip verification of SoCs containing embedded processors, which the UVM does not address well at all.”

Even UVM doesn’t have universal buy-in. Dave Kelf, vice president of marketing for OneSpin Solutions, is not sure we have a firm footing with UVM. “It seems that SystemVerilog coupled with UVM, while considered the right direction for simulation-based flows, is simply too complex for engineers to get their heads around. No doubt a lot of this was due to its origins with multiple large companies pushing their own ideas and, possibly, overenthusiastic committee members with strong opinions not shared by others.”

The general mood in the industry appears to be jaded about the verification methodology in place today and the way in which it came about.

For SoC verification it appears that use-cases are the way in which many people want to describe system-level activities, and the ways in which they can interact. Given the lack of standards for describing them, most companies are forced to develop a set of directed tests that cover the system-level feature set.

Solution Benefits
In its proposal, Mentor outlined three benefits of graph-based test specifications:

  • Reduction in the time spent writing and debugging tests by 50% or more compared to SystemVerilog and UVM.
  • A test specification format that naturally supports multiple design languages and multiple verification environments, enabling re-use across both design context and verification engines.
  • The abstract nature of a graph-based test specification, which lets tool implementations execute the test specification in different ways according to verification requirements.

Breker’s Anderson supports those claims: “Graph-based scenario models are extensible in two dimensions. They enable IP-to-system vertical reuse, since any graphs developed for individual IP blocks can be directly instantiated into higher-level design blocks all the way up to the full-chip level. This provides much more reuse than testbenches, since UVM Verification Components (UVCs) and other testbench elements cannot simply be combined as the verification moves upward. Scenario models are also horizontally reused across the course of an SoC project. The same model can automatically generate test cases for virtual prototypes, RTL simulation, simulation acceleration, in-circuit emulation (ICE), FPGA prototypes, and actual SoC silicon in the bring-up lab.”
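The vertical-reuse idea Anderson describes can be sketched in a few lines of Python. This is an illustrative toy, not any vendor's actual specification format: the graph representation, the `instantiate` helper, and all node names (`uart_graph`, `boot`, `done`) are invented for this example.

```python
def instantiate(parent, subgraph, prefix, entry_from, exit_to):
    """Copy an IP-level scenario subgraph into a parent graph under a
    name prefix, wiring the parent's `entry_from` node to the subgraph's
    entry and the subgraph's leaf nodes (outcomes) to `exit_to`.

    The IP-level graph itself is untouched: the same block can be
    instantiated any number of times at the chip level."""
    for node, succs in subgraph.items():
        name = f"{prefix}.{node}"
        parent[name] = ([f"{prefix}.{s}" for s in succs]
                        if succs else [exit_to])
    parent[entry_from].append(f"{prefix}.start")

# Hypothetical IP-level scenario graph for a UART block:
# start -> transmit or receive, both of which are outcomes.
uart_graph = {"start": ["tx", "rx"], "tx": [], "rx": []}

# Chip-level graph reuses the same IP graph twice, unmodified.
soc = {"boot": [], "done": []}
instantiate(soc, uart_graph, "uart0", "boot", "done")
instantiate(soc, uart_graph, "uart1", "boot", "done")
```

After instantiation, `soc["boot"]` leads into both `uart0.start` and `uart1.start`, so scenarios written once at the IP level become paths through the full-chip graph.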

That explains why a graph-based approach is gaining traction among verification experts. “The graph-based specification has proven infinitely superior to SystemVerilog constrained-random in automating the tracking and elimination of redundancy in the generation of sequences and scenarios from the specification,” explains Tom Fitzpatrick, verification technologist at Mentor Graphics.

An inherent advantage of graph-based specifications is that the user writes one model fewer than today, and that eliminated model happens to be the one users find most unintuitive: the coverage model. This is because a graph defines a scenario all the way from stimulus to outcome. The outcomes, and the paths through the graph that reach them, are the coverage model.
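The argument above can be made concrete with a minimal sketch. Here a scenario graph is just a dictionary of successors, and enumerating every stimulus-to-outcome path yields the coverage model for free. The graph, its node names (`dma_read`, `check_data`, and so on), and the representation are all invented for illustration; no real tool works exactly this way.

```python
# Hypothetical scenario graph: each node maps to its possible successors.
# Leaf nodes (empty successor lists) are outcomes.
GRAPH = {
    "stimulus":   ["dma_read", "dma_write"],
    "dma_read":   ["check_data"],
    "dma_write":  ["check_data"],
    "check_data": ["pass", "fail_retry"],
    "pass": [],
    "fail_retry": [],
}

def enumerate_paths(graph, node="stimulus", prefix=()):
    """Walk the graph depth-first, yielding every stimulus-to-outcome path.

    Each complete path is one scenario, so the set of all paths doubles
    as the coverage model -- nothing extra needs to be written."""
    path = prefix + (node,)
    if not graph[node]:  # leaf node = outcome
        yield path
        return
    for succ in graph[node]:
        yield from enumerate_paths(graph, succ, path)

coverage_model = list(enumerate_paths(GRAPH))
# Four scenarios: {dma_read, dma_write} x {pass, fail_retry}
```

A generator picking paths from `coverage_model` can track which ones it has already produced, which is how redundancy elimination relative to constrained-random generation becomes possible.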

“For design functional coverage, no one has yet developed a solution that can correlate stimulus/test specifications to achieving functional coverage goals,” says Mentor’s Fitzpatrick. “At best, we can associate which tests achieved which functional coverage points after the fact, but we cannot (yet) predict which test specifications will hit specific functional coverage points.”

“OneSpin has used graph-based techniques as part of its higher-level operational assertions, and it’s an effective way to describe scheduling and interactions at a reasonably abstract level,” says Kelf.

“Historically, the way to manage complexity is through abstractions,” explains Hemendra Talesara, verification technologist at Synapse Design. “Graphs allow a level of abstraction for test planning/coverage that is closer to specification. They allow us to capture the test space in a more abstract way and that enables increased automation.”

Solution Limitations
Cadence’s Stellfox is not so certain that these approaches solve all of the problems, though. “The concern I have heard from customers who have evaluated graph-based approaches is that they have to rely on the engineers to think of, and confirm the legality of, all the required test scenarios. This is the age-old problem of directed testing, where for complex chips it is very difficult for engineers to think of all the ways they need to stimulate the design. Auditing the legality of use cases still leaves a lot of manual work for the SoC verification engineer. A graph-based approach might give you a nice way to organize your directed tests, but at the end of the day it does not address the blind spots customers face when trying to create test scenarios for complex SoCs.”

Is now the time for standards?
At a DVCon panel, this question started a heated debate. Janick Bergeron, a Synopsys fellow, said, “A standard shows that we are getting maturity. It is too early for SoC.” Users struck back, saying this problem has existed for a long time and that EDA had been slow to address the need. But an established need does not imply that the solutions developed by the EDA industry are ready for standardization.

Breker is one company that has been developing graph-based verification solutions for a number of years. Breker’s Anderson advises caution. “Our graph specification format has been greatly simplified recently,” he says. “Had standardization happened a year ago, this high level of innovation would have been less likely and users today would find it harder to specify scenario models.”

Only a few companies have been shipping solutions using graph-based specifications so far, an indication that the technology is still young. Others agree that additional innovation may still be necessary. “It is not clear how extensible graph-based methods are, or if this is even the best approach to SoC verification,” says Cadence’s Stellfox. “More time is needed in the industry for innovation of solutions in this space before we create yet another standard for something that is really not proven or adopted widely in the industry.”

Stellfox notes that the best standards in our industry are based on approaches that were applied and proven by many customers on real projects. “Once a technology is proven, it makes sense to look at standardizing the language or other inputs required by the tool. I have not seen or heard of many customers saying that graph-based approaches have reached this level of success in the industry.”

Mentor’s Fitzpatrick sees it differently. “Not standardizing the format at this point will hold back innovation and prevent more users from enjoying the benefits of this method of specifying stimulus/test scenarios,” he says. “If we focus on the nits of syntax at this point, we’ll be lost within a sea of nuts and bolts and risk standardizing Frankenstein instead of a valued addition to the area of design and verification standards.”

OneSpin’s Kelf is a bit more forceful in his opinion: “We should give these techniques a chance in the market before ramming them down Accellera’s throat. Perhaps we should enter a requirements-gathering phase, something that used to be a part of all its standardization processes, taking input across the industry, not just its politically motivated members.”

“Standards do take time,” says Synapse Design’s Talesara. “Graph-based verification is proving itself in the marketplace. It would be best not to proliferate too many languages. Innovators would much rather spend their time and resources to build new capability into their product than work on standardization. It took some time to converge in the HVL space, and this can be shortened for graph language if we act now.”

Or as Kelf observed, “Much like good jokes, good standardization efforts are all about… timing.”

Get Involved
Most design companies do not get actively involved in the creation of the standards they use. In many cases that is because the standards are created long before the companies are interested in using the technology, or because they do not have the time to spend in committee meetings. But now is a rare chance to have an impact. Accellera is holding a launch meeting for this potential effort, and your input is important. If you can attend the meeting in person, you should. If you cannot, let Matthew Ballance know how you feel. This input is valuable whether you are in favor of, or against, the standardization process. Make your voice heard.

This meeting will be held Wednesday, May 7, from 10 a.m. to 4 p.m. Pacific time at the Mentor Graphics offices in Fremont, Calif. Lunch will be provided for attendees at the live event.

If you plan to attend this meeting, please RSVP including any dietary restrictions to:

Matthew Ballance – Mentor Graphics
[email protected]
Phone: (503) 685-1716
Mobile: (503) 481-7242