It’s time to move up in abstraction again as complexity overwhelms a key approach.
For the past decade or so, the Universal Verification Methodology (UVM) has been the de facto verification methodology supported by the entire EDA industry. But as chips become more heterogeneous, more complex, and significantly larger, UVM is running out of steam.
Consensus is building that some fundamental changes are required, moving tools up a level of abstraction and making them agnostic to the underlying architecture. Those tools need to be able to support the generation of constrained-random, graph-based, directed, or coverage-driven Portable Stimulus so that tests can be re-used across simulation, prototyping and emulation.
When a piece of code is written in C, C++ or any other high-level language, the user is not really concerned about the underlying architecture or instruction-set architecture (ISA) the program is finally going to run on. “Similarly, an architecture-agnostic test framework allows the stimulus to be developed at a higher level of abstraction so that the verification engineer can focus on the use case instead of worrying about the exact ISA semantics to use,” said Shubhodeep Roy Choudhury, CEO of Valtrix Systems.
Architecture agnosticism is particularly helpful when it comes to use-case verification, as most processor or SoC components, such as memory management units, pipelines and floating-point units, have well-defined use cases associated with them.
“In the case of CPU caches, for example, most of the interesting use cases from a verification point of view are quite standard — filling up the cache, creating volume evictions, etc. The traffic patterns to exercise these scenarios can be achieved easily by memory accesses guided by high-level algorithms. The goal of architecture agnostic verification is to allow development of those algorithms at a level of abstraction that allows maximum reuse of stimulus across different instruction set architectures,” Roy Choudhury explained.
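To make the idea concrete, a fill-then-evict traffic pattern can be expressed purely as address generation, with nothing ISA-specific in it. The sketch below is a hypothetical illustration in SystemVerilog, not code from Valtrix or any particular tool; the class name and cache-geometry parameters are assumptions, and only the layer that turns the generated addresses into actual loads and stores would be architecture-dependent.

```systemverilog
// Hypothetical, ISA-agnostic cache-thrash pattern expressed purely as address
// generation. The cache geometry is parameterized so the same algorithm can
// target any architecture; only the code that turns these addresses into
// actual loads and stores is architecture-specific.
class cache_thrash_gen;
  int unsigned line_bytes = 64;    // cache line size in bytes (assumed)
  int unsigned num_sets   = 1024;  // number of sets (assumed)
  int unsigned num_ways   = 8;     // associativity (assumed)

  // Build a list of byte addresses that first fills every way of every set,
  // then touches one extra conflicting line per set to force evictions.
  function void build(output longint unsigned addrs[$],
                      input  longint unsigned base = 64'h8000_0000);
    longint unsigned way_stride = line_bytes * num_sets;
    // Fill phase: one line in every set, for every way.
    for (int w = 0; w < num_ways; w++)
      for (int s = 0; s < num_sets; s++)
        addrs.push_back(base + w * way_stride + s * line_bytes);
    // Eviction phase: one more line per set than the cache can hold.
    for (int s = 0; s < num_sets; s++)
      addrs.push_back(base + num_ways * way_stride + s * line_bytes);
  endfunction
endclass
```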
Raising the level of abstraction crops up as a topic of discussion among verification engineers whenever complexity outstrips the capabilities of tools.
“Verification always tries to rise up to agnosticism, and for any new verification project it always starts in the weeds,” said Neil Hand, director of marketing for the Design Verification Technology Division at Mentor, a Siemens Business. “It then goes to higher and higher levels. An example of where it’s worked up until now is VIP for interfaces. No one knows how that interface is implemented. It’s completely irrelevant. It’s the same when it comes to the CPU. If it’s RISC-V, the discussion centers around compliance, which doesn’t care about the architecture implementation. It’s all fairly generic. The same is true for any SoC where there are many different processor architectures under the hood, but you verify them as they are. If you go one step higher, when you do algorithmic verification using hardware/software co-design, what are we trying to do? We’re running software on the underlying processor. We’re measuring performance. We’re measuring the input and the output, if it works — how it’s working under the hood becomes relatively irrelevant. Yes, you’ve got to have the models and you’ve got to build all this stuff, but the agnosticism is a natural place where verification wants to go, because when you get to that higher level, it’s the least amount of work for the widest possible market. So everything tries to get to that place. However, it takes time and energy and understanding to get to that place.”
Generally speaking, the major benefit of architecture-agnostic verification is reuse: stimulus and the algorithms behind it are developed once, at a higher level of abstraction, and then applied across different instruction-set architectures and across simulation, emulation and prototyping platforms.
One area that may be difficult is the conversion of high-level stimulus into the final instruction stream, which may not be optimal because features available in one architecture may be missing in another, Roy Choudhury noted.
Also, when verifying hardware, the industry-standard approach is to define functional tests and to look at the code coverage. “Looking at functional coverage, there are inevitably dependencies on the architecture that you are using,” said Roddy Urquhart, senior marketing director at Codasip. “If your test plan does not adequately test out your unique architecture, you are dismally failing to verify your design. You need to understand not only what is normal behavior for your design, but also what could break it. Code coverage, by contrast, is inevitably architecture-agnostic. You want to find every possible way to achieve your coverage goals. Therefore, to achieve high code coverage, it would be absolutely sensible to combine constrained random and coverage-driven methods. If there are parts of the code that are hard to cover with these methods, then specific directed tests are common sense.”
UVM’s role
UVM, meanwhile, is very good at data randomization.
“That’s one of the things that really started the concept of data randomization and the coverage associated with that,” said Bernie DeLay, group director, verification R&D at Synopsys. “Those go hand in hand. What is difficult to do with UVM is generating scenarios, such as the ability to have UVM sequences and control what order they’re going in, what control flow and all of that, which is difficult to do. Portable Stimulus is trying to address that difficulty. It’s so much easier in a graph-based language to describe control flow, such as what happens sequentially, what happens in parallel, what actions can you select from. It’s a more natural way of doing it, and with a lot less effort. If I was going to try to do that in UVM and I’m writing some pretty complex constraints, I’d probably end up doing some of it directed in order to achieve that same thing.”
But UVM is losing steam as complexity and heterogeneity increase. “It’s really difficult to write that variance and control flow,” DeLay said. “We’re seeing more and more of that. The number of I/Os and the associated peripherals on an SoC make that even more difficult in UVM. And then, stimulus and coverage go hand in hand. It’s the same there. Basically, if I tried to write coverage to get an idea of the scenarios I generated — we write coverage for data objects — there’s nothing in SystemVerilog coverage that lets you ask, ‘Did I have this control flow sequence? Did it actually happen?’ In something like PSS, either in the language or hopefully in the tools that are implementing it, getting coverage — either coverage that the user wrote or coverage automated from the tool to understand the interactions and activities in your test — is much easier.”
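As a hypothetical contrast, here is what “coverage for data objects” typically looks like in SystemVerilog (the class, field and bin names are invented). Covergroups handle value and cross coverage well, but there is no comparable built-in construct for asking whether a particular control-flow sequence of scenarios ever occurred.

```systemverilog
// Minimal sketch of SystemVerilog functional coverage on a data object.
// Covergroups sample values and their crosses; nothing built in asks
// "did scenario A run before scenario B?" - that kind of flow coverage
// has to be hand-rolled or expressed in a tool/PSS layer.
class bus_txn_cov;
  bit [31:0]   addr;
  int unsigned burst_len;

  covergroup cg;
    cp_addr  : coverpoint addr[31:28];              // coarse address regions
    cp_burst : coverpoint burst_len {
                 bins single  = {1};
                 bins short_b = {[2:4]};
                 bins long_b  = {[5:16]};
               }
    cx       : cross cp_addr, cp_burst;
  endgroup

  function new();
    cg = new();
  endfunction

  function void sample(bit [31:0] a, int unsigned len);
    addr = a; burst_len = len;
    cg.sample();
  endfunction
endclass
```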
Darko Tomusilovic, lead verification engineer at Vtool, noted that for years, most problems could be solved with UVM. This is no longer the case.
“We had a project in which UVM proved to be really not efficient because the design itself was split into many features that were not related to each other,” Tomusilovic said. “In the end, we decided to re-invent the whole thing and we simply kept using customized components per feature. So rather than relying on a common agent monitor/driver/scoreboard infrastructure, we simply had one agent for that feature, and that agent combined the stimulus generation as well as the monitoring and the tracker. What was very efficient there was that it was localized, and all the code was in one place. When you have a scoreboard that contains a lot of different logic, it can be quite confusing, quite difficult to read. And then you need multiple engineers maintaining the same file, so it becomes a mess in complex projects. Here, we basically split everything per feature in what we call the feature agent, and then we have one developer who is responsible for verifying a feature. They will maintain this agent, and that’s it. So even though it was completely against UVM methodology for us, it was really efficient and very useful, very easy to split tasks. While UVM is a good starting point guideline, there are so many places already where it simply does not fit and you need to extend it so much. For low-power state machine modeling and verification you need something else, and there it makes sense to call it a guideline. It’s a good reference. But in the end, every product will contain so much customization that I don’t really think of calling it a methodology.”
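A rough, hypothetical skeleton of that “feature agent” idea, written here as a single UVM component that owns stimulus, checking and tracking for one feature (the names and the timer example are invented, not Vtool’s actual code):

```systemverilog
// Hypothetical sketch of the "feature agent" pattern described above:
// one component owns stimulus, monitoring and tracking for a single feature,
// so all of that feature's code lives in one place under one owner.
`include "uvm_macros.svh"
import uvm_pkg::*;

class timer_feature_agent extends uvm_component;
  `uvm_component_utils(timer_feature_agent)

  function new(string name, uvm_component parent);
    super.new(name, parent);
  endfunction

  // Stimulus for this feature only.
  task drive_feature();
    // program the timer, start it, wait for expiry...
  endtask

  // Monitoring and checking for this feature only.
  task monitor_feature();
    // observe interrupts/registers and flag mismatches with `uvm_error
  endtask

  // Per-feature "tracker" log for debug.
  function void track(string msg);
    `uvm_info(get_type_name(), msg, UVM_MEDIUM)
  endfunction

  virtual task run_phase(uvm_phase phase);
    fork
      drive_feature();
      monitor_feature();
    join_none
  endtask
endclass
```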
Legacy code issues
Another challenge with UVM today is legacy code. When implementing a new verification project, how is legacy code incorporated?
This is a big issue for engineering teams today, especially if the legacy code lives in non-standard languages, Mentor’s Hand stressed. “The reason we embrace standards so enthusiastically is because it at least partially addresses this issue. If you have legacy code that is in a proprietary language, that limits your ability to bring that legacy into a new environment, or it limits that environment to a particular vendor, which is really not desirable. So, generally speaking, we try to drive standardization. That’s why we put the inFact language into Accellera for the Portable Stimulus effort. It’s why we did OVM into UVM. It’s why we gave our low-power language to UPF. You want to avoid that legacy, because right now, if you have legacy UVM you want to bring into a Portable Stimulus world, we can do that really easily and we can build that environment. If you have legacy Verilog or SystemVerilog that you want to bring in, we can do that really easily. For the majority of engineering teams, the only legacy issue I see from time to time is e. All of the other languages were standards-based and have a path. You can trace a path from Verilog to SystemVerilog to UVM to Portable Stimulus. There is a pathway that allows the levels of abstraction and you occasionally get little side paths. We had them in the past with Vera, we had them with e, we’ve had them with SuperLog, the Verilog subset. And for the most part, if you happen to bet on the wrong course, there is a cost of translation, there is a cost of moving people across.”
To Tomusilovic, the first challenge with legacy code is that it’s very hard to justify throwing it out. “They already threw it out once, and put it inside UVM. It’s very difficult to justify throwing out UVM to put in something new. The second challenge, even though UVM has been in place for almost a decade, is there isn’t a project that was done 100% according to UVM. As soon as you have something a bit out of the box, it makes things much more difficult.”
For example, take a simple SoC project. “If you have a processor that will execute some code, there is not really a standard methodology for how to preload this processor, how to follow its code, how to communicate with the rest of the testbench, and what to do immediately,” Tomusilovic said. “The core methodology basically breaks, because in every project you will have a bunch of different monitors and components to trace the messaging, to communicate with some kind of memory mailbox, and this is simply not standardized. And as soon as you have that case, it will basically break. Another situation in which the methodology breaks is the register modeling, where UVM does define a very good register model that works for very simple interfaces. But as soon as you have a bit more complex register interface that supports bursts, or which supports some more complex things such as low-power modeling, everything breaks. You need to re-invent your own methodology to add your own layer around UVM somehow, to figure out how to make it work. Everybody does this, and does it in a different way. I believe that in almost every project, there is such a case. And everybody, even though the starting point is the same, takes some different directions and makes their own version of UVM. Therefore, even though we started with a methodology, it’s very easy to diverge.”
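For the register-model point, the usual UVM extension hook is a custom adapter between register operations and bus transactions. The sketch below is a hypothetical example (it reuses the invented bus_txn item from the earlier sketch); once the bus supports bursts or low-power handshakes, this single-beat translation is exactly where teams start layering their own methodology on top.

```systemverilog
// Hypothetical sketch of the stock uvm_reg extension point: an adapter that
// translates register operations into bus transactions. Anything beyond
// single-beat accesses (bursts, low-power handshakes) tends to need further
// custom layering on top of this.
`include "uvm_macros.svh"
import uvm_pkg::*;

class my_bus_adapter extends uvm_reg_adapter;
  `uvm_object_utils(my_bus_adapter)

  function new(string name = "my_bus_adapter");
    super.new(name);
  endfunction

  virtual function uvm_sequence_item reg2bus(const ref uvm_reg_bus_op rw);
    bus_txn txn = bus_txn::type_id::create("txn");  // item sketched earlier
    txn.addr      = rw.addr;
    txn.data      = rw.data;
    txn.burst_len = 1;                    // single-beat only; bursts need more
    txn.is_write  = (rw.kind == UVM_WRITE);
    return txn;
  endfunction

  virtual function void bus2reg(uvm_sequence_item bus_item, ref uvm_reg_bus_op rw);
    bus_txn txn;
    if (!$cast(txn, bus_item))
      `uvm_fatal("ADAPT", "Unexpected item type")
    rw.kind   = txn.is_write ? UVM_WRITE : UVM_READ;
    rw.addr   = txn.addr;
    rw.data   = txn.data;
    rw.status = UVM_IS_OK;
  endfunction
endclass
```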
Still, very little is actually replaced. “You’re not throwing away your underlying UVM environment in a Portable Stimulus solution,” said DeLay. “You’re still probably going to want to do your main data randomization and things at the low level in UVM. This is really controlling the UVM sequences themselves and what order they’re happening. Some people think they’re going to throw all that UVM stuff away. That’s not what you’re trying to do here. You want to leverage that work you’ve already done in the environment, the checking, and the base sequences, and just use it better, more easily. As a structure, that’s where the portability comes into play, because you have that same type of structure if you go over into an embedded environment. Instead of a UVM component or UVM VIP, you’re just replacing it with a transactor or with the actual RTL that you’re controlling from the embedded side. What I want to re-use is that stuff that’s controlling how things happen, not the underlying stuff on it, and if I can re-use that then it’s going to reduce the effort going from one to the other. I don’t know how many people I’ve talked to that say, ‘I wrote this in SystemVerilog. Now I rewrote it in C when I went to the SoC level.’ And then if you talk to the DV guys in silicon, I don’t know why they do this but they tend to rewrite that C code one more time. Basically, you’re going to end up re-doing that same initialization sequence three times. It’s error-prone, and every time you end up introducing new bugs along the way.”
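One common way to avoid rewriting the same initialization sequence three times is to capture it once as data. The following is a minimal, hypothetical sketch (the addresses and values are invented); the same table could drive a UVM sequence in simulation and be emitted as a C header for SoC-level and post-silicon tests.

```systemverilog
// Hypothetical illustration of capturing an init sequence once as data,
// so one table can drive a UVM sequence in simulation and be exported as
// a C header for embedded tests, instead of being rewritten by hand.
package soc_init_pkg;
  typedef struct {
    bit [31:0] addr;
    bit [31:0] data;
  } init_write_t;

  // Register writes needed to bring the (hypothetical) subsystem out of reset.
  init_write_t init_table[] = '{
    '{32'h4000_0000, 32'h0000_0001},  // enable clocks
    '{32'h4000_0010, 32'h0000_00FF},  // release resets
    '{32'h4000_0020, 32'h0000_0003}   // configure operating mode
  };
endpackage
```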
Practically speaking, this is really about a new generation of verification tools.
Again, this leads to the next level of abstraction, just as the industry once moved from SystemVerilog to UVM. The aim now is to address the things that cannot be done easily in UVM, such as going from block to subsystem to system on an emulator or on a virtual platform. “With Portable Stimulus, while we’re not there yet, the aim is that the same scenario can work at every level, whether you’re running on a digital simulator, a virtual prototype, or on an emulator or a prototyping system. It all just works,” Hand said.
Toward better verification
Still, fundamental issues remain. “If I’m starting up a new verification project, the first step is actually breaking it down more from the perspective of how I want to verify, what’s the best way to verify a block or a number of blocks or a portion,” said DeLay. “If you start at the top level, Portable Stimulus may be one way to do something. For other things, maybe it’s formal. I won’t use Portable Stimulus to do things that should be done statically, so a top-level verification mindset to begin with is the first step in methodology. Methodology is more than just UVM. That’s a very specific methodology for that type of problem. Verification methodology starts at a much higher level.”
His suggestion is to break down everything you want to verify, or that needs to be better verified, then decide what’s the proper tool or sub-methodology to achieve that, because there are often various ways to achieve the same thing. “Formal is a good example,” he said. “For certain blocks, I would not recommend simulation. You’re really much better off doing your datapath things formally, so it starts at a much higher level from a verification methodology perspective in deciding what the proper tool is to do that particular path.”
Better verification starts with what you really need to verify, not trying to fit into UVM. “You need to build some environment, which will help you actually efficiently verify this chip, and then only in the second stage should you try to see if and how it fits standard UVM components. If UVM can help you, perfect. It’s a great tool. It can help a lot. But it can only start to be applied after you have a well-defined infrastructure,” Tomusilovic said.
If this is not defined correctly at the beginning, it can have a huge impact on the project and additional time will be required to fix that. “In some cases, you don’t have time and in some cases it’s even better to delete everything and start from scratch if it’s not planned correctly at the beginning,” observed Olivera Stojanovic, senior verification engineer at Vtool.
As people look to what’s next in verification, at the natural, higher levels of abstraction and agnosticism, Mentor’s Hand believes that using a standards-based approach is preferable to a proprietary solution. “If it’s not, do it with open eyes and know it may lock you into that solution or that you have to migrate later on. There are cases where people say, ‘This new technology is so awesome, I’m willing to take the risk.’ Right now, we’re in a reasonably comfortable place that the next level of abstraction that we’re looking at is Portable Stimulus. We’re looking at doing that, even for the existing things like VIP that are agnostic already. But if we can make it in Portable Stimulus, it means it runs everywhere.”
Conclusion
Still, nothing is guaranteed. Portable Stimulus may take a bit longer to be adopted than some might like, given the COVID-19 pandemic, which is slowing technology adoption.
“The general sense of uncertainty has meant that new technologies are not being adopted as quickly,” said Hand. “For engineering groups that were considering Portable Stimulus, they are saying they love the promise of it but they’re going to stick with UVM. They know what it means for the project schedules, and they know what that means for the work they’re doing.”
He likened the situation to the dot-com bust, when formal verification was trying to get off the ground. Because of the uncertainty of the global recession, customers chose to invest in maintaining staffs instead of investing in new technology. Formal verification subsequently took many more years to gain traction.
Whether that is truly the case with Portable Stimulus remains to be seen, but there is enough creativity and engineering ingenuity to chart the verification methodology path forward for the industry as a whole, and within discrete engineering organizations. Its evolution may just take a bit longer than expected.
I always thought UVM’s raison d’être was that constrained random testing sells a lot of simulation licenses. UVM sucks as a verification methodology for digital; formal and semiformal methods work a lot better, and it should all be correct-by-construction anyway. CR is only really useful for finding bugs in more analog areas like power control, and only if you use analog simulators rather than digital.
I never got the point of Portable Stimulus.
Randomization doesn’t need UVM. Just SystemVerilog classes.