
What Is Portable Stimulus?

The standard for verification intent modeling has a misleading name. It should be changed.


When Accellera first formed the Portable Stimulus Working Group and gave it that name, I was highly concerned. I expressed my frustration that the name, while fitting with what most people think verification is about, does not reflect the true nature of the standard being worked on. In short, it is not a language that defines stimulus, and the stimulus created by a tool that reads the standard is not portable. I was told that the name had been set and that was what it would be known as.

Since then, I have continued to warn people about the problems with the name and the confusion it will cause. At DVCon this year, there was a packed tutorial on the emerging standard. A lot of people realize that it is getting close and will likely have a significant impact on their verification methodology. However, many of the questions arose because people were confused about what the standard was doing, even after a large part of the tutorial had been given.

So what should it be called? Quite simply, the standard defines a model of verification intent. The best way to think about it is to consider the design flow. The model most people use is Register Transfer Level (RTL), and it is implemented in two languages, Verilog and VHDL. RTL should define a set of semantics, but unfortunately these were never defined in a mathematical manner. Thus, Verilog and VHDL each implement something that is close to, but not exactly, RTL. That RTL is a model, and a set of tools then works on that model to produce an implementation. The most obvious tool is synthesis, but it is far from the only tool that manipulates the model.
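To make the analogy concrete, here is a trivial piece of RTL. The module and signal names are mine, purely for illustration. All it states is what gets transferred into a register on each clock edge; how that becomes gates is left to synthesis and the other tools downstream.

```systemverilog
// Illustrative RTL: it describes register transfers, not gates.
// Synthesis, simulation, and other tools all consume this same model.
module accumulator #(parameter WIDTH = 8) (
  input  logic             clk,
  input  logic             rst_n,
  input  logic [WIDTH-1:0] din,
  output logic [WIDTH-1:0] acc
);
  always_ff @(posedge clk or negedge rst_n) begin
    if (!rst_n)
      acc <= '0;          // behavior described abstractly, not as logic cells
    else
      acc <= acc + din;   // the "register transfer" itself
  end
endmodule
```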

Now back to verification. What we are defining is a set of semantics for a Verification Intent Model (I will start calling it VIM), and from that model tools will be able to synthesize testbenches that could target any number of execution engines. The standard does not define the tools or anything about them, so vendors can compete on the quality of the testbenches they produce, and those can be optimized for simulation or emulation or any other verification target.

The model captures the intended behaviors of the design in such a way that complete testbenches, including stimulus and checkers, can be generated. The generated testbench could drive UVM models, could take the form of code that executes on the embedded processors, or could combine both. Results of the run can be annotated back onto the model for notions of system-level coverage. And the creation of those testbenches also includes notions of randomization, so nothing is lost in terms of methodology.
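As a purely hypothetical sketch of the UVM side of this, a tool might turn one scheduled action from the intent model into something like the sequence below. None of these class names come from the standard or from any real tool output; they are only there to show that the familiar machinery of constraints and randomization is still in play.

```systemverilog
// Hypothetical output a generator might produce for one action;
// every name here is illustrative.
import uvm_pkg::*;
`include "uvm_macros.svh"

class dma_xfer_item extends uvm_sequence_item;
  rand bit [31:0]   src_addr;
  rand bit [31:0]   dst_addr;
  rand int unsigned size;
  constraint size_c { size inside {[1:4096]}; }  // intent-level constraint carried through

  `uvm_object_utils(dma_xfer_item)
  function new(string name = "dma_xfer_item");
    super.new(name);
  endfunction
endclass

class dma_mem_to_mem_seq extends uvm_sequence #(dma_xfer_item);
  `uvm_object_utils(dma_mem_to_mem_seq)
  function new(string name = "dma_mem_to_mem_seq");
    super.new(name);
  endfunction

  task body();
    dma_xfer_item item;
    // One scheduled action from the model becomes one randomized transaction
    item = dma_xfer_item::type_id::create("item");
    start_item(item);
    if (!item.randomize()) `uvm_error("RAND", "randomization failed")
    finish_item(item);
  endtask
endclass
```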

It is also unfortunate that the committee has started to define two syntaxes to implement that model rather than concentrating on making the semantics clean. With clean semantics, vendors could also compete on syntax, each attempting to define the best language and flow. Users would still have assured model portability because the single semantic model would underlie all of them. However, EDA again thinks it can create languages better than the rest of the industry, and that is proving to be a distraction.

It is not too late to change the name. The standard has not been released yet, and it would be better to change the name now than to let the confusion it causes slow adoption. This is the biggest change that has happened to verification since the first verification languages were created, and it is the first time that we have not concentrated on stimulus. While stimulus is important, it is just one part of verification, and we really need to focus on verification as a whole. The Verification Intent Model will do that for us. I hope the Accellera board and the committee can bury their pride and do the right thing when they release the VIM standard.


