The Meaning Of Verification

A change in design philosophy means that verification has to adapt, but it is not going to be an easy path to get there.


When I ask the question “Why do we do verification?” there are generally two types of responses. One sees the glass as half empty, the other as half full. It depends on how you look at the problem and whether you see verification as a positive or a negative operation.

The negative answer is that we do verification to find bugs. This relies on the mechanical process of creating vectors in the expectation that they unveil errors made by a designer. Managers may track bugs per week and use this as a measure of how stable a design has become. Metrics can be created to show how much of the design has been covered, but those metrics usually concentrate on coverage of the implementation, so there is a danger that missing functionality goes undetected.

The positive answer is that we do verification to provide confidence that a design operates as intended. Bugs found are a consequence of that operation. Confidence has always been a lot harder to measure. It generally relies on being able to define what a system should do, and while this sounds like a model of the hardware, it is both more and less than that.

It is more because it defines intent that may be a combination of both hardware and software. It says that a system should be capable of doing XYZ. It is less because it does not define how it should do it, only that it should be capable of doing it. In many systems there may be multiple ways in which that function can be performed, and it may be important to verify either one way or all of them. Few metrics exist for this today because the ability to define such a model is only now emerging, but it is an important aspect of the Accellera Portable Stimulus work.
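As a loose illustration of the idea, consider a toy "intent model" in which a capability is defined by what it should accomplish, with several alternative realizations per step. The names here (`IntentModel`, the action strings) are invented for this sketch and are not Accellera Portable Stimulus syntax; the point is only that once intent is modeled this way, a metric such as "fraction of distinct realizations exercised" becomes definable.

```python
# Toy sketch of intent-level coverage; all names are illustrative, not PSS.
from itertools import product

class IntentModel:
    """A capability is a sequence of steps; each step has alternative realizations."""
    def __init__(self, name, steps):
        self.name = name      # e.g. "move data from memory to device"
        self.steps = steps    # list of lists of alternative actions

    def all_paths(self):
        """Enumerate every distinct way the capability can be performed."""
        return [tuple(p) for p in product(*self.steps)]

    def path_coverage(self, exercised):
        """Fraction of distinct realizations a test suite has exercised."""
        paths = set(self.all_paths())
        return len(paths & set(exercised)) / len(paths)

# Data can move via a CPU copy or either DMA channel, and completion can be
# signaled by interrupt or polling: 3 * 2 = 6 distinct paths to verify.
move = IntentModel("mem-to-dev", [["cpu_copy", "dma_ch0", "dma_ch1"],
                                  ["irq_done", "poll_done"]])
print(len(move.all_paths()))                            # 6
print(move.path_coverage([("cpu_copy", "irq_done"),
                          ("dma_ch0", "poll_done")]))   # 2 of 6 exercised
```

Whether "one way" or "all ways" must be covered then becomes an explicit verification-plan decision rather than an accident of the stimulus.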

Non-functional requirements are also becoming more important these days. Performance has always been an issue, but few aspects of existing verification solutions make it easy to check. It would be nice to be able to say that throughput should be X bytes per second with a deviation of Y. When a situation causes it to move outside that range, an error should be raised, and at the end of a run the distribution of results should be collated and reported.

Power is another of these soft requirements, because it is easy to miss what may be a flag for excess power consumption. Emulators actually have an element of this built in for power analysis: they keep track of toggle counts or other aggregated activity so that a map of activity over time can be produced. Simulators have never considered this type of statistic worth recording and reporting.
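The bookkeeping involved is simple, which is part of the author's point. A sketch of the idea, with invented names and a toy trace format (not any emulator's actual API), could look like this:

```python
# Illustrative activity-map builder: count signal transitions per time
# window, the raw material for a power-over-time profile. Names are invented.
from collections import defaultdict

def activity_map(traces, window):
    """traces: {signal: [(time, value), ...]} value-change events.
    Returns {window_index: total toggle count across all signals}."""
    toggles = defaultdict(int)
    for sig, events in traces.items():
        prev = None
        for t, v in events:
            if prev is not None and v != prev:   # a transition occurred
                toggles[t // window] += 1
            prev = v
    return dict(toggles)

traces = {"clk":  [(0, 0), (5, 1), (10, 0), (15, 1)],
          "data": [(0, 0), (12, 1)]}
print(activity_map(traces, window=10))   # {0: 1, 1: 3}
```

A spike in one window relative to its neighbors is exactly the kind of flag for excess power consumption the text describes.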

Safety and security are adding to the role of verification, and these will also require new types of confidence builders. Systems that mandate them today rely on extensive paperwork and analysis.

The reality is that all verification has an element of both the positive and negative, but historically the negative aspects have been more dominant. In my opinion, this is because verification was more contained. While never complete, it was possible to think of approaching the asymptote of the problem and it was possible to say that we did the best we could in the time and money available.

Today, I don’t think we can get anywhere close to that point on the curve, and thus being able to prove that a system does what is necessary is becoming more important. Systems are also becoming differentiated by taking a more custom design approach. When you buy a processor core from Arm, you expect it to work and do not expect to have to do any verification related to it. The same goes for your communications systems and the other IP blocks you get from third parties.

But that leads to commoditization of products once you can’t keep economically moving to smaller geometries. If you take on more hardware risk by designing more of the system yourself, you have to perform verification more effectively, and that means knowing that the system can perform the necessary functions, not just that no bugs have been found in it.

This is a very big change in mindset, and the industry will not make the transition quickly, but eventually it has to. Those who master it first will have a large cost advantage.


Dan G Ganousis says:

“When you buy a processor core from Arm, you expect it to work and do not expect to have to do any verification related to that.” That statement is true but what the media hasn’t looked closely at is the ability that RISC-V affords to modify a processor. That sounds really wonderful UNTIL you realize – Hey! now verification is OUR responsibility. That epiphany is causing MANY RISC-V fans to re-think if they really want to own the responsibility to verify a processor.
