OVM vs. VMM: What’s Next?

The biggest names in verification have figured out some possible solutions to incompatibility. After all, it’s in their best interest.


By Ed Sperling

The lines are drawn. On one side stand Mentor Graphics and Cadence. On the other are Synopsys and ARM. And caught in the middle are verification engineers, each with a preference for one methodology or the other and often working on mixed verification teams.

The battle for dominance between the Verification Methodology Manual (VMM) and the Open Verification Methodology (OVM) began humbly enough. VMM was developed by ARM and Synopsys and released in 2007, while OVM—combining Mentor Graphics’ Advanced Verification Methodology (AVM) and some of Cadence’s Universal Reuse Methodology (URM)—was released early this year.

And there it might have remained, with the battle lines drawn, except for a couple of developments. First, testing and verifying increasingly complex chips became so onerous at 90nm and beyond that engineers began clamoring for much better tools that could work with any simulator—and work together. Second, Mentor and Cadence threw their technology into the open-source world through Accellera, which put further pressure on ARM and Synopsys to provide at least some compatibility.

Three ways to cooperate

The big problem is that some companies adopted VMM, while others adopted OVM. And still others are waiting on the sidelines, trying to figure out whether there will be peace between these two worlds before jumping to one side or the other. From a vendor perspective, this is the worst of both worlds. For users, it’s like trying to figure out whether Blu-ray or HD-DVD would emerge victorious before plunking down money for a new player, except that here the money goes toward training and tools. And to make matters worse, because demand for verification engineers is high, job mobility often pulls together teams of engineers with conflicting skill sets.

So earlier this year, a working group within Accellera began looking at how to bridge the methodologies and base classes. So far, according to sources, there are three possibilities for interoperability.

  1. Bridge the environment. One of the differences between the two test environments is that each talks to a different interface on the Design Under Test (DUT). Compilers could be created to map the differences and translate code, much the way an application binary interface does for software applications, or they could attempt a one-to-one conversion, which is much more difficult. A decade ago, Sun Microsystems stored multi-byte data in the opposite byte order from most other vendors, a disagreement known as big endian versus little endian, terms borrowed from the argument over which end of the egg gets cracked open. Software was developed to create a one-to-one binary mapping at the operating-system level, which was deemed faster than a translation layer. That was considered difficult then. With OVM and VMM, it will be orders of magnitude more difficult.
  2. Match the data types. Both VMM and OVM send test bench results to a common scoreboard, but each has a different notion of what constitutes a transaction, a difference rooted in subtle differences in the data types. The solution may be to convert both to a common data type.
  3. Wrap the code. One solution to these types of incompatibilities has always been to put a software wrapper around the base classes. While this is complex, and potentially carries the highest performance overhead, it also provides the most flexibility because it creates what is essentially a layer of middleware that can be augmented or corrected at a later date (a minimal sketch of this idea follows the list).
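
To make the third option more concrete, here is a minimal sketch of the wrapper idea, written in Python rather than SystemVerilog. Every name in it (VmmStyleTransaction, OvmStyleScoreboard, VmmToOvmAdapter, describe, convert2string) is a hypothetical stand-in invented for this article, not the actual VMM or OVM base-class API; the point is only to show how a thin adapter layer can also absorb the data-type mismatch described in the second option.

    # Hypothetical Python stand-ins for the SystemVerilog base classes.
    # None of these names are the real VMM or OVM APIs; they only
    # illustrate the "wrap the code" approach.

    class VmmStyleTransaction:
        """Stand-in for a VMM-flavored transaction with its own display API."""
        def __init__(self, addr, data):
            self.addr = addr
            self.data = data

        def describe(self):
            # One methodology's convention for rendering a transaction.
            return f"addr=0x{self.addr:08x} data=0x{self.data:08x}"


    class OvmStyleScoreboard:
        """Stand-in for an OVM-flavored scoreboard fed through a write() call."""
        def __init__(self):
            self.items = []

        def write(self, item):
            # The scoreboard only understands the other methodology's
            # convention: anything passed in must provide convert2string().
            self.items.append(item.convert2string())


    class VmmToOvmAdapter:
        """The wrapper (middleware) layer: it presents the interface the
        scoreboard expects while delegating to the wrapped transaction,
        so neither base class has to change."""
        def __init__(self, wrapped):
            self._wrapped = wrapped

        def convert2string(self):
            # Map one data-type convention onto the other (option 2,
            # handled inside the wrapper of option 3).
            return self._wrapped.describe()


    if __name__ == "__main__":
        sb = OvmStyleScoreboard()
        txn = VmmStyleTransaction(addr=0x1000, data=0xDEADBEEF)
        sb.write(VmmToOvmAdapter(txn))   # the adapter bridges the two conventions
        print(sb.items)                  # ['addr=0x00001000 data=0xdeadbeef']

The performance overhead mentioned above comes from the extra object and method call added to every transaction; the flexibility comes from the fact that the entire mapping lives in the adapter, so it can be augmented or corrected later without touching either methodology's code.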

Which approach wins, or whether elements of all three are used, remains to be seen. One source deeply familiar with the problem concludes that “the differences between the methodologies make automatic mapping via a compiler difficult. A wrapping approach is much more technically feasible and will be more reliable. The fact that both methodologies are written in the same language makes the wrapping solution a purely technical issue that can be implemented in a relatively straightforward manner.”

Nevertheless, industry insiders say pressure is high to fix this problem so that verification time can be reduced.

The next battleground

Assuming that peace does reign at the base class and methodology level, the real battle will move up a notch. And from the standpoint of the vendors selling tools on top of these base classes, as well as the engineers using them, this is a very good thing.

Swami Venkat, senior marketing director at Synopsys, said his company and ARM were filling a need among verification engineers when they created VMM. “It was so popular that we did a Japanese and Chinese version,” he said. “We also got a lot of feedback about what we can do over and above what we created. Customers want us to innovate more with VMM, and VMM LP, so they can find low-power-related bugs.”

Synopsys did turn over its base classes and methodology to Accellera last spring. It kept the applications that run on top of VMM in-house, which is where the business proposition for Synopsys and ARM really lies. “For the user, the base class is a good starting point,” Venkat said. “Ultimately, what will find bugs are the test benches and bug-finding applications. With simulators, what users are looking for is performance, the ability to find corner bugs, coverage and debug capabilities.”

Whose approach is better is a matter of debate. Tom Fitzpatrick, verification technologist at Mentor Graphics, insists that VMM was first out the door but more proprietary. “It was limited by the amount of SystemVerilog it could support,” Fitzpatrick said. “OSCI (the Open SystemC Initiative) hadn’t come up with TLM yet. VMM has a transaction-level interface, but it’s proprietary. When we developed AVM we knew what was going on in OSCI.”

Reading between the lines, Mentor and Cadence were counting on the fact that it would take time to adopt SystemVerilog and either VMM or OVM. They were right, although at least part of the delay stems from the time it takes to assess new tools in an incredibly complex environment. Widespread adoption is a slow process.

As a reference point, building verification assertions into designs has been talked about for the past 15 years. Reducing the time it takes to verify a chip has always made sense. But infusing assertions into the chip design and development process has been extremely slow, and success has been limited.

For further reading:

Alliance and Interoperability Programs Enable Future Tool Flows

Vanguard Program Model Strengthens Verification Portion of ESL Design

The VMM Methodology: The Foundation for Verification Success

A Truly Open Verification Methodology

First Book Published on the Open Verification Methodology

[Technology Book Review] Metric-Driven Design Verification

