Abstract Verification

The industry lacks a methodology for effectively using abstraction in verification, but that may be changing.


Verification relies on a separation of concerns; otherwise the task has no end. We often apply that separation without thinking, but as an industry, we have never managed to fully define it such that it can become an accepted and trusted methodology. This becomes particularly true when we bring abstraction into the picture.

A virtual prototype is meant to be true to behavior, but there can be timing differences between it and a lower-level abstraction, such as RTL. Those timing differences arise from implementation details, such as the microarchitecture of a processor, its pipeline, and how things like interrupts are implemented. None of that invalidates the virtual prototype so long as the RTL does not add or remove any behaviors. If the RTL modifies behavior, the RTL actually has a bug. We can go further and say that every possible implementation of the RTL must conform to the behavioral model.
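The conformance relation described above can be sketched in code. This is a minimal, hypothetical illustration (the function names and pipeline depth are invented, not from any real tool): the behavioral model and the timed implementation must agree on observable results, while latency is treated as an implementation artifact that conformance checking ignores.

```python
# Hypothetical sketch: an untimed behavioral model and a timed
# "implementation" of the same operation. The conformance check
# compares only observable behavior, never timing.

def behavioral_add(a, b):
    """Virtual-prototype view: result is immediate, no notion of cycles."""
    return a + b

def timed_add(a, b, pipeline_depth=3):
    """RTL-like view: same result, but with a latency that depends on an
    implementation detail (here, an assumed pipeline depth)."""
    result = a + b
    latency = pipeline_depth  # cycles -- an implementation artifact
    return result, latency

def conforms(a, b):
    """The RTL conforms if it neither adds nor removes behavior;
    the latency is allowed to differ."""
    result, _latency = timed_add(a, b)
    return result == behavioral_add(a, b)

print(conforms(2, 3))  # True: same behavior despite timing differences
```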

In an idealized world, verification is meant to be independent of design, meaning that the testbench can verify any implementation of a design. In reality, there are many exceptions to this because doing everything in a black-box manner is not only more difficult but also more time-consuming. Portable Stimulus (PSS) is interesting because it takes a halfway position on this. The testbench knows about the datapaths that exist in the implementation, but not the specifics of them. It does not know their implementation and, in most cases, cannot even infer whether they utilize a shared communications structure, although it is possible to embed this knowledge into the model.
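That halfway position can be made concrete with a small sketch. The sketch below is illustrative only (the block names and scenario format are invented, and this is Python rather than the PSS language itself): the model declares which datapaths exist, and scenarios are derived purely from that declaration, with nothing revealing how the paths are implemented or whether they share a bus.

```python
# Hedged sketch of the "halfway" knowledge PSS gives a testbench:
# the model lists which datapaths exist, but says nothing about how
# they are implemented. All names here are invented for illustration.

# Abstract datapaths: (producer, consumer) pairs the testbench may exercise.
DATAPATHS = [
    ("camera", "dsp"),
    ("dsp", "memory"),
    ("memory", "display"),
]

def legal_scenarios():
    """Enumerate transfers purely from the declared datapaths.
    Nothing here depends on, or reveals, the implementation."""
    return [f"transfer {src} -> {dst}" for src, dst in DATAPATHS]

for scenario in legal_scenarios():
    print(scenario)
```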

There is a problem with utilizing a virtual prototype for verification. Because we cannot properly ascertain what is being verified, it becomes difficult to know what can be ticked off as having been accomplished by each run. Has the behavior of the software been verified? If timing changes, will it impact the act of verification? A test could well take a different path when timing changes the outcome of an event, and that is part of the problem. Verification relies on a test always verifying the same thing, and without that, closing coverage could become even more difficult.

There are two sides to this coin. With RTL simulation, every time we perform a simulation with a specific testbench, we get the same results. Even if there are two concurrent inputs, which in reality should result in a 50/50 chance of each being “seen” first, the simulator will always see the same one first. This is somewhat problematic in that it means the simulator selects one of several possible paths and masks the rest. We can argue that if this were important, it would result in a coverage hole, and someone would find out how to adjust the testbench to allow the other orders to be selected.
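The determinism described above comes from how event-driven simulators break ties. The following is a minimal sketch, not any real simulator kernel: two events at the same timestamp are ordered by a secondary sequence number, so the same testbench always observes the same winner, even though in silicon either could arrive first.

```python
# Minimal sketch of why an event-driven simulator is deterministic:
# equal-time events are tie-broken by insertion order, so repeated
# runs always observe the same ordering. Illustrative only.

import heapq

def run(events):
    """events: list of (time, name). Returns names in simulated order."""
    queue = []
    for seq, (t, name) in enumerate(events):
        # seq is the tie-breaker: equal-time events keep insertion order.
        heapq.heappush(queue, (t, seq, name))
    order = []
    while queue:
        _t, _seq, name = heapq.heappop(queue)
        order.append(name)
    return order

# Two "concurrent" inputs at t=5: in silicon either could win,
# but the simulator always reports the same order.
print(run([(5, "req_a"), (5, "req_b")]))  # ['req_a', 'req_b'] every run
```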

How do we get around the virtual prototype dilemma? This is where the industry has failed to come up with an answer. The industry falls back on a statistical approach by saying that you should rerun a certain percentage of the tests that were run on the virtual prototype on an emulator or prototyping system. It is unlikely that a simulator would be useful in this regard because any reasonable test for a virtual prototype would probably take an excessive time on a simulator.

Portable Stimulus may provide the answer for us. When a test is created by the test synthesis engine, it knows all of the dependencies between tasks and how and when external stimulus is going to be applied. It takes all of the absolute timing out of the testbench, which is what creates the problem. Thus, it doesn’t matter if a task takes a little longer when run on an implementation, or even in real silicon; the model and the manner in which the tests are synthesized mean that they will always provide the same coverage in terms of the testbench model. It is possible that implementation-level coverage may change because of local timing differences, but the coverage of the graph as defined by the PSS model would remain the same.
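A small sketch can show why graph coverage is immune to timing. This is an invented example, not PSS syntax or any real synthesis engine: tests are synthesized purely from task dependencies, and coverage is computed from which graph nodes were exercised, so task durations never enter the calculation.

```python
# Hedged sketch: coverage measured on the scenario graph depends only
# on which tasks ran and their dependencies, never on how long each
# task took. The graph and names are invented for illustration.

DEPS = {"load": [], "process": ["load"], "store": ["process"]}

def synthesize_test(deps):
    """Order tasks so every task follows its dependencies (Kahn-style)."""
    done, order = set(), []
    while len(order) < len(deps):
        for task, pre in deps.items():
            if task not in done and all(p in done for p in pre):
                order.append(task)
                done.add(task)
    return order

def graph_coverage(executed, deps):
    """Fraction of graph nodes exercised; timing never enters."""
    return len(set(executed) & set(deps)) / len(deps)

test = synthesize_test(DEPS)
# Whether "process" takes 10 cycles or 10,000, the covered graph is identical.
print(graph_coverage(test, DEPS))  # 1.0
```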

It is unfortunate that the Accellera committee working on PSS has not yet managed to standardize the PSS coverage model, instead relying on extensions to the existing notions of implementation coverage that exist in SystemVerilog. Having a graph-based model would enable verification on a virtual prototype to become a well-defined part of the flow and reduce the amount of repetition that would be required across the abstractions. None of this removes the need for independent block-level verification, nor the notions of SystemVerilog-type coverage, but it would pave the way to a meaningful role for abstraction in the verification flow.

I will admit that I am far from a Portable Stimulus expert, so I may be wrong on this. I would love to hear other people’s opinions.



1 comment

Gaurav Jalan says:

Hi Brian,

While PSS tries to address part of the problem, I believe the deeper root cause also lies in transforming the following statement to the real world: “In an idealized world, verification is meant to be independent of design, meaning that the testbench can verify any implementation of a design.” For that to happen, the document reflecting the design intent needs to be complete. In reality, we miss it! Another issue is effective usage of a standard like UVM. We talk about long simulation times, but there is no data on how much simulation time is burdened by incorrect usage of UVM. It is not the design or the stimulus but the testbench that is adding to the woes. Unfortunately, for verification, abstraction across the industry seems to be distant.
