The New Verification Landscape

Putting together a methodology is more than just putting tools together.


By Ann Steffora Mutschler
Verification technologies and tools have never been more sophisticated. But putting together a methodology is more than just putting tools together. It starts with trying to get a handle on the complexity, knowing what to test, how to test and when.

“UVM was standardized and people have been working to adopt it, which has generally been a positive,” said Steve Bailey, marketing director at Mentor Graphics, “because it took what people were doing using proprietary languages and methodologies and essentially standardized around SystemVerilog and a standard library set—with guidance on how to use that library set to build verification requirements and testbenches.”

While that’s been successful, he believes the biggest productivity gains for verification will come from extending the methodology beyond the constrained random and functional coverage used with UVM and SystemVerilog. The key is allowing a lot of the tests to be automatically generated from a higher-level specification instead of just doing directed testing, and then allowing tracking for what exactly was tested through the functional coverage.
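To make the constrained-random-plus-functional-coverage idea concrete, here is a minimal toy sketch in Python (not a real UVM flow; the transaction fields, constraints, and coverage bins are all invented for illustration). Stimulus is drawn randomly under constraints, and a coverage model records which bins were actually exercised:

```python
import random

# Hypothetical constraints: bus transactions with word-aligned addresses,
# where the legal burst lengths depend on the operation.
OPS = ["read", "write"]
BURSTS = {"read": [1, 4, 8], "write": [1, 4]}

def random_txn(rng):
    """Generate one constrained-random transaction."""
    op = rng.choice(OPS)
    return {
        "op": op,
        "addr": rng.randrange(0, 0x1000, 4),  # word-aligned only
        "burst": rng.choice(BURSTS[op]),      # burst constrained by op
    }

def run(num_txns, seed=1):
    """Run random stimulus and track functional coverage bins."""
    rng = random.Random(seed)
    # Coverage model: cross of op x burst length.
    coverage = {(op, b): 0 for op in OPS for b in [1, 4, 8]}
    for _ in range(num_txns):
        t = random_txn(rng)
        coverage[(t["op"], t["burst"])] += 1
    hit = sum(1 for count in coverage.values() if count > 0)
    return coverage, hit / len(coverage)
```

Note that the ("write", 8) bin can never be hit under these constraints, so random cycles spent chasing it are wasted. That is exactly the kind of redundancy Bailey describes: without analysis, it is hard to tell which unhit bins need more stimulus and which are simply unreachable.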

Because designs are so complex today—especially once past the block level into subsystems and full chip—it is impossible for engineering teams to complete the level of verification that they need to really feel comfortable, Bailey noted. “So what used to be a lot of verification cycles, some number of which were redundant (and you had a hard time figuring out how many were redundant because of the constrained random methodology), people are looking at to make sure they maximize verification cycles to achieve more. They are looking at reuse even more extensively than what people were able to achieve before with the verification methodologies including UVM.”

From a similar perspective, Michael Sanie, senior director of verification marketing at Synopsys said, “It’s extremely challenging, and if you compare it to what happens on the chip design side, that’s also very tricky, very difficult, but it’s a little more straightforward (for lack of a better term). There, you know what the right process is. First, you do this, then you create something, then you do some checking and run something else. The process is pretty much the same—you always run synthesis first, then timing, then place and route, more timing, etc. Each step is very difficult, but what comes after next is established. In verification, what comes after next is a hodgepodge—and not in a bad way. It’s just that everybody has a different set of experiences, different philosophy about things, and different ways of using tools.”

And there are a lot of tools—simulation, emulation, formal tools, debug technologies. “How do you put them all together?” asked Sanie. “Then you have a layer of methodology which goes along with it, and there is no one-size-fits-all. It’s dependent on the size of the team, the budget, the timeline in terms of how much verification time you have in your design cycle, what vertical you are in—mobile vs. CPU. There are different metrics. There are so many questions that mainly come from the fact that verification itself is unbound. No matter how much you do, there is more to be done.”

In terms of UVM, John Brennan, product manager for verification planning and management at Cadence noted, “Now that testbench languages have sort of solidified, people aren’t talking about UVM anymore because all the vendors are supporting it and it’s become the de facto standard. Whether you’re using the Specman version of UVM, the e variation or the SystemVerilog variation or the SystemC variation—it doesn’t matter. It’s all the same. UVM now encompasses a multi-language approach, but everyone has adopted UVM so the issues are not with the testbench anymore, they’re not at the simulator anymore. More simulation doesn’t do you much good. What you really need to focus on is verification. Verification encapsulates methodology and encapsulates best practices, such that you can get to that next level of productivity.”

As UVM gets adopted, solidified, and well used within engineering environments, users are all looking for the next thing. “The next thing is the encapsulated productivity associated with methodology, and methodology can’t just be some loose description of what to do. It has to be codified in a tool and it has to be prescriptive. You can’t be left to wander off in the weeds and find your own path. You have to say, ‘Here’s how others have successfully done it in the past, and here’s how I can successfully do it in my own project,’” he pointed out.

Here, risk assessment is key. That includes a deep understanding of what to verify, why to verify and how to verify it. “At the end of the day, you want data-driven decision making,” Brennan asserted.

To illustrate the complexity of verification tasks, the following figure shows an example chip design flow with some of the key milestones and dependencies. The bottom indicates some of the major project steps as averaged from an analysis of 12 projects.

(Source: Cadence Design Systems)

“Doing this by hand all the time is causing a lot more errors, so that’s where this whole notion of automatically creating that top-level of the design and then attaching the verification environment to it becomes more and more important,” said Frank Schirrmeister, group director, product marketing, system development suite at Cadence.

The Tao of verification
To develop tools for the verification domain, a certain philosophy must be adopted, asserted Pranav Ashar, chief technology officer at Real Intent. “Our premise as we approach this domain is that an SoC today—the entirety of it, the full complexity of it—is far removed from what a single individual, or even a small group of people involved in the process of creating it, is able to grasp. The system is much bigger than what any small group of people contributes to it. It’s almost more like a jetliner rather than a small widget. Verification always has to approach this problem in a systematic manner, taking this into account. You have to be very methodical in terms of doing whatever you can early, doing things in a systematic manner, and creating a methodology where the overall verification is the aggregate of the individual verification efforts, each intended to check a part of the system or a part of the process of developing the system. That’s a basic premise that you have to have in the back of your mind.”

Once that picture is in mind, he continued, “it’s becoming very true today that SoC verification is more about verifying the system integration and the system management more than the verification of the individual components, based upon how the things are put together and how the system is being managed. That really pushes the envelope in terms of verification requirements.”

This knowledge is deeply connected with ideas about how to improve verification overall. And it all starts with improving automation, Mentor’s Bailey said.

One of the most promising ways to improve verification is to analyze the verification effort itself, understanding which tests are relevant and which aren’t.

One approach is to combine formal analysis with simulation results, so the formal tool looks at the design, looks at the metrics, and looks at the whole to determine if coverage is possible, he explained. “If it is reachable, a waveform is created to make it easier for the user to then go and create a directed test or tweak the constraints to achieve the conditions to hit the coverage. This not only saves time, but should give the manager a higher level of confidence that he’s not waving off on an exclusion because an overworked engineer simply didn’t have time and made a mistake.”
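The triage logic described above can be sketched in a few lines of Python. This is a hypothetical illustration, not a real formal engine: it simply assumes you already have the set of bins hit in simulation and the set a formal tool proved reachable, and sorts each unhit bin into “write a directed test” or “safe exclusion”:

```python
def triage(bins, sim_hits, formal_reachable):
    """Classify coverage bins using simulation and formal results.

    bins: all coverage bin names (list, to keep report order stable)
    sim_hits: bins hit during simulation (set)
    formal_reachable: bins a formal engine proved reachable (set)
    """
    report = {"covered": [], "needs_directed_test": [], "safe_exclusion": []}
    for b in bins:
        if b in sim_hits:
            report["covered"].append(b)
        elif b in formal_reachable:
            # Formal proved a trace exists: don't exclude it,
            # write a directed test (or tweak constraints) to hit it.
            report["needs_directed_test"].append(b)
        else:
            # Proven unreachable: excluding it is not a judgment call,
            # so the manager can sign off with confidence.
            report["safe_exclusion"].append(b)
    return report
```

The value of the split is exactly what Bailey describes: exclusions backed by a proof are no longer dependent on an overworked engineer’s judgment.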

Then at the chip level where the engineering team is trying to be more efficient, productive and effective, they need greater visibility into what is happening. Here, Mentor Graphics and others are providing advanced testbench automation that operates at the chip or SoC level to understand the system-level scenarios, which in turn help the engineer better understand the test specification that was defined. “Then we are working to combine that with a better understanding of what’s happening at a system level, with first efforts targeted to the interconnect and coherency areas along with the conditions,” Bailey noted.
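One way to picture spec-driven, system-level test generation is as enumeration of legal scenarios from a small declarative model, so orderings are generated and tracked rather than hand-written. The sketch below is a toy illustration under invented names (the DMA steps and their dependencies are assumptions, not any vendor’s actual format):

```python
import itertools

# Hypothetical scenario spec: each step maps to the steps that must
# complete before it within one system-level scenario.
STEPS = {
    "config_dma": [],
    "read_status": [],            # independent of the DMA chain
    "start_dma": ["config_dma"],
    "check_irq": ["start_dma"],
}

def legal_orderings(steps):
    """Enumerate every step ordering that respects the dependencies."""
    result = []
    for perm in itertools.permutations(steps):
        done = set()
        legal = True
        for step in perm:
            if any(dep not in done for dep in steps[step]):
                legal = False
                break
            done.add(step)
        if legal:
            result.append(perm)
    return result
```

Brute-force permutation only works for toy scenario sizes, but it shows the idea: the tool, not the engineer, derives every legal interleaving from the specification, which is what makes system-level coverage of the scenarios tractable.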

The new verification landscape is anything but straightforward. Instead, it combines the necessity for a deep understanding of what is effective verification with systematic approaches to verification and test, the intricacies of the connections between aspects of the system, and the best ways to verify interactions. And all of that needs to include an overall plan for tracking productivity and reuse. It’s anything but simple, but opportunities abound for the companies that get it right the first time.
