To maximize effectiveness, there are a few things to keep in mind when implementing a UVM methodology.
By Ann Steffora Mutschler
The advent of advanced verification methodologies such as the UVM and its predecessors, VMM and OVM, has changed the verification landscape in many ways. Design and verification teams used to worry about simulator performance (i.e., how fast the simulator runs a particular test case), but the introduction of constrained-random stimulus, functional coverage, and the associated tools and techniques has allowed engineering teams to measure overall verification productivity.
“Users have long understood that if you can create a new test that achieves your coverage goals in half the time of your previous test, you’ve effectively doubled the speed of your simulator. This puts much more control into the user’s hands,” noted Tom Fitzpatrick, verification evangelist at Mentor Graphics. “Regardless of the methodology, creating a verification environment is a non-trivial task, and these methodologies were developed in part to assist users by providing the infrastructure to support them in this effort.”
Where not to start
One of the first pieces of advice Janick Bergeron, verification fellow at Synopsys, gives to engineering teams is where not to start: Do not start with the base class specification because UVM is more than just the class library. “It’s more than just the source code that’s put out there. It has to be used the right way, the way that the base libraries were designed to be used. The same was true with VMM. It’s not because you use the UVM reporting mechanism that you suddenly are a UVM user. There’s more to it.”
He also tells users not to be discouraged if their initial impression of UVM is that it is very complex. “That is true to a certain degree. However, those complexities are there for a reason. It’s the fruit of 10 or 15 years of advanced verification projects by people who’ve been there before. The people who are involved in the design know what’s needed, know what needs to be invested today, and what is necessary to make your life easier in the long run.”
As for where to start, Open-Silicon has found three practices that work well when implementing UVM, according to senior engineer John Stiles. “The first is to build a company-wide base library that can be re-used from project to project. This base library will contain any company-specific extensions to the standard UVM library and provides consistency from project to project. Each of the major UVM classes should be extended. This extension allows company-specific changes, like changing the format of the logging information, and adding common features to the base classes. Even if there is not currently a desire to change the features of a class, it should still be extended to provide a placeholder for possible future features.”
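As a rough illustration of that first practice, a company-wide layer might look something like the following sketch. The “acme” prefix and class names are hypothetical, the classes deliberately add nothing yet, and the imports shown here are assumed by the later sketches as well.

import uvm_pkg::*;
`include "uvm_macros.svh"

// Company-wide extension of uvm_test. Empty today, but it gives every
// project a single place to hook in future features (logging format,
// common reporting policy, and so on).
class acme_test extends uvm_test;
  `uvm_component_utils(acme_test)
  function new(string name, uvm_component parent);
    super.new(name, parent);
  endfunction
endclass

// Repeat for the other major classes: env, agent, driver, monitor,
// scoreboard. Projects then extend acme_* instead of uvm_* directly.
class acme_env extends uvm_env;
  `uvm_component_utils(acme_env)
  function new(string name, uvm_component parent);
    super.new(name, parent);
  endfunction
endclass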
The second practice is to provide training to employees who are new to UVM, he explained. “This training can be done via an in-house developed class or by an external vendor. UVM is large and complicated, so it is important for new employees to start out on a solid footing. Besides training on the UVM methodology, training or documentation should be provided on any company-specific changes to the UVM methodology.”
Finally, Stiles stressed, spend time creating a complete architecture for the testbench, designed to fully support testing of all of the required features of the design, along with the needed sequences and coverage. “From the architecture, an implementation plan should be developed that includes a number of steps to get from a basic testbench to the complete testbench that implements the full testbench architecture. Some possible steps along the way include a basic skeleton testbench that has all of the needed UVCs and other components connected but no real functionality; a version of the testbench to support initial RTL bring-up that will drive stimulus but may not be self-checking; and the final, fully random testbench to support coverage closure and random simulation.”
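The skeleton step Stiles describes might be sketched as follows, assuming a hypothetical bus_agent UVC and bus_scoreboard (neither is part of UVM itself) and the acme_env layer from the earlier sketch. Everything is created and wired, but nothing drives stimulus yet.

// Skeleton environment: full topology instantiated and connected,
// no real functionality. Later steps add behavior, not structure.
class skeleton_env extends acme_env;
  `uvm_component_utils(skeleton_env)

  bus_agent      m_agent; // hypothetical UVC
  bus_scoreboard m_sb;    // checking comes later; stubbed for now

  function new(string name, uvm_component parent);
    super.new(name, parent);
  endfunction

  function void build_phase(uvm_phase phase);
    super.build_phase(phase);
    m_agent = bus_agent::type_id::create("m_agent", this);
    m_sb    = bus_scoreboard::type_id::create("m_sb", this);
  endfunction

  function void connect_phase(uvm_phase phase);
    // Wire the analysis path now so the topology is final from day one.
    m_agent.mon.ap.connect(m_sb.analysis_export);
  endfunction
endclass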
Or simply put, “Stay true to the methodology, measure your test efficiency, and automate your debug,” offered Adam Sherer, product management director at Cadence.
Benefits for subsystem verification
When it comes to IP, given the rise in subsystem use, a standardized methodology is key to leveraging reuse from IP vendors. Building a subsystem or SoC verification environment with multiple IPs can become cost-effective if the IP was validated with a UVM infrastructure, said Jack Browne, vice president of marketing at Sonics.
“UVM provides the necessary hooks to connect to IP-specific ports or interfaces,” Browne said. “Passive monitors, scoreboard components, and functional coverage modules can be re-used for SoC/subsystem verification. UVM verification infrastructure used to perform functional verification of the IP can be re-used at the subsystem level and also for SoC-level verification.”
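One common mechanism behind that reuse is the agent’s active/passive switch: at subsystem or SoC level, the embedded IP drives its own interfaces, so the IP-level agent is configured passive and only its monitor and downstream analysis components are built. A minimal sketch, again assuming a hypothetical bus_agent that honors the conventional is_active flag:

class soc_env extends acme_env;
  `uvm_component_utils(soc_env)

  bus_agent m_ip_agent; // same hypothetical UVC, re-used unchanged

  function new(string name, uvm_component parent);
    super.new(name, parent);
  endfunction

  function void build_phase(uvm_phase phase);
    super.build_phase(phase);
    // Passive: the agent builds its monitor but no driver/sequencer,
    // because the IP inside the subsystem generates its own traffic.
    // (Assumes the agent reads is_active with this config_db type.)
    uvm_config_db#(uvm_active_passive_enum)::set(
      this, "m_ip_agent", "is_active", UVM_PASSIVE);
    m_ip_agent = bus_agent::type_id::create("m_ip_agent", this);
  endfunction
endclass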
UVM perks
Fitzpatrick pointed out that the UVM has two considerable advantages over its predecessors that make it more effective in achieving verification closure. “The first is that, as a standard, it’s supported by all major vendors. Users now have the ability to write their testbenches once and use that single piece of code to evaluate different verification platforms and tools from different vendors instead of being locked into a particular vendor through its proprietary methodology. This forces vendors to invest in developing tools and technologies to improve the verification flow.”
He noted that this emphasis on co-opetition has always been at the heart of standards efforts.
“The second advantage UVM has is that, in developing UVM, we were able to learn from others’ mistakes,” he said. “Thus, the UVM includes a superset of features found in earlier methodologies, and we’ve been able to integrate them in a coherent way. Since all components in UVM have similar methods and use-models, it’s pretty straightforward to assemble a basic environment and, more importantly, be able to understand an environment or component that you may have acquired from elsewhere.”
That does mean there’s a lot of functionality in UVM, Fitzpatrick admits. “However, the beauty of UVM is that its modularity lets you adopt it incrementally, rather than having to learn the entire thing before you can even get started. Two of the key features of any methodology are the ability to do constrained-random stimulus and functional coverage. While these features have been in the SystemVerilog language for years, UVM provides the infrastructure to make them easier to take advantage of, in part, by supporting the concept of transaction-level modeling (TLM). TLM lets the verification engineer think about the activity of the system as ‘transactions,’ which is a much more natural way of thinking about things than ‘pin wiggles.’”
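For a concrete sense of what a “transaction” is, the sketch below shows a minimal sequence item for a hypothetical 32-bit bus. The fields and the address constraint are illustrative, and the later sketches reuse this class.

// One bus operation expressed at transaction level: what happened,
// not which pins toggled on which clock edges.
class bus_item extends uvm_sequence_item;
  rand bit [31:0] addr;
  rand bit [31:0] data;
  rand bit        is_write;

  // Keep random stimulus legal: assume the device occupies the low 4KB.
  constraint c_addr { addr inside {[32'h0 : 32'h0000_0FFF]}; }

  `uvm_object_utils_begin(bus_item)
    `uvm_field_int(addr,     UVM_ALL_ON)
    `uvm_field_int(data,     UVM_ALL_ON)
    `uvm_field_int(is_write, UVM_ALL_ON)
  `uvm_object_utils_end

  function new(string name = "bus_item");
    super.new(name);
  endfunction
endclass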
Where to start
For engineering teams with an existing environment that want to move to UVM, the first thing to try is adding functional coverage. This is accomplished by attaching a UVM monitor component to the existing bus and using the monitor to convert the pin-level activity into transactions, he explained. UVM allows the transactions to be sent to a coverage collector that records the activity seen on the bus so the effectiveness of the stimulus can be determined.
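A minimal sketch of that monitor-plus-coverage pattern, assuming the bus_item above and a hypothetical pin-level interface bus_if with clk, valid, addr, data, and we signals:

// The monitor watches pins and publishes transactions; it never drives.
class bus_monitor extends uvm_monitor;
  `uvm_component_utils(bus_monitor)

  virtual bus_if vif;               // hypothetical pin-level interface
  uvm_analysis_port #(bus_item) ap; // broadcasts reconstructed transactions

  function new(string name, uvm_component parent);
    super.new(name, parent);
    ap = new("ap", this);
  endfunction

  task run_phase(uvm_phase phase);
    bus_item tr;
    forever begin
      @(posedge vif.clk iff vif.valid);  // watch the pin wiggles...
      tr = bus_item::type_id::create("tr");
      tr.addr     = vif.addr;            // ...and package them
      tr.data     = vif.data;
      tr.is_write = vif.we;
      ap.write(tr);                      // publish to all subscribers
    end
  endtask
endclass

// Coverage collector subscribing to the monitor's analysis port.
class bus_coverage extends uvm_subscriber #(bus_item);
  `uvm_component_utils(bus_coverage)

  bus_item tr;

  covergroup cg;
    cp_kind : coverpoint tr.is_write;
    cp_addr : coverpoint tr.addr { bins low  = {[0 : 32'h7FF]};
                                   bins high = {[32'h800 : 32'hFFF]}; }
  endgroup

  function new(string name, uvm_component parent);
    super.new(name, parent);
    cg = new();
  endfunction

  function void write(bus_item t);
    tr = t;
    cg.sample(); // record what the stimulus actually exercised
  endfunction
endclass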
While users often realize through this objective measurement of their functional coverage that their stimulus isn’t as effective as they thought, it’s relatively straightforward to use UVM sequences to specify transaction-level constrained-random stimulus to drive simulation, Fitzpatrick said. “The flexibility of UVM lets you create many different sequences that can be run in your environment, and use your functional coverage to determine the most effective sequences. All of this can be done without having to delve too far into the use of object-oriented inheritance or the more advanced features of UVM like configuration and the factory.”
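A transaction-level sequence layered on the same bus_item might look like the following sketch, where the burst length and the write-only constraint are illustrative test intent:

class write_burst_seq extends uvm_sequence #(bus_item);
  `uvm_object_utils(write_burst_seq)

  // Randomized by the caller; defaults to 8 if the sequence isn't
  // randomized before being started.
  rand int unsigned len = 8;
  constraint c_len { len inside {[4:16]}; }

  function new(string name = "write_burst_seq");
    super.new(name);
  endfunction

  task body();
    repeat (len) begin
      req = bus_item::type_id::create("req");
      start_item(req);
      // Layer test intent on top of the item's own constraints.
      if (!req.randomize() with { is_write == 1; })
        `uvm_error("RAND", "randomize failed")
      finish_item(req);
    end
  endtask
endclass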
These features round out the verification environment, allowing for randomization in the structure of the testbench as well as in the data that gets generated; they can be thought of as another layer, added at the test level, on top of the existing UVM environment.
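As a taste of that extra layer, the sketch below uses the UVM factory from a test to swap a hypothetical error-injecting item in for the bus_item above, without touching any environment code:

// Derived item for error injection; the corruption itself is elided.
class bad_parity_item extends bus_item;
  `uvm_object_utils(bad_parity_item)
  function new(string name = "bad_parity_item");
    super.new(name);
  endfunction
  // e.g., corrupt a parity field in post_randomize() here
endclass

class error_test extends acme_test;
  `uvm_component_utils(error_test)
  function new(string name, uvm_component parent);
    super.new(name, parent);
  endfunction

  function void build_phase(uvm_phase phase);
    super.build_phase(phase);
    // Every bus_item the factory creates now becomes a bad_parity_item,
    // so existing sequences generate error traffic unmodified.
    bus_item::type_id::set_type_override(bad_parity_item::get_type());
  endfunction
endclass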
“A little bit of forethought in adopting the first few pieces of UVM will give you the foundation for adding this additional flexibility as you learn more about UVM,” he pointed out. “So, with UVM you get an industry standard library that simplifies the task of creating modular, reusable, transaction-level environments where you create constrained-random sequences of stimulus and measure their effectiveness using functional coverage. You write your code once, and you now have the ability to evaluate tools and technologies from multiple sources to take full advantage of your investment in developing the environment in the first place.”
Community is key
Cadence’s Sherer said that while the guiding principles put an engineering team on the path to more effective UVM verification, it’s the vibrant community that reinforces those principles.
He suggested that the best place to mingle with that community is the Accellera Systems Initiative’s UVMWorld.org and its forums, where industry leaders answer questions, ranging from basic how-tos to complex, task-specific ones, within hours. There are also UVM videos in separate commercial forums.
“So, follow the methodology, measure your efficiency, and automate your debug as a member of the community to maximize UVM effectiveness,” he concluded.
Resources:
www.accellera.org
www.uvmworld.org
www.verificationacademy.com