Verification And Validation Brothers

Is it fair to call verification and validation brothers? Doug Amos tries to make the case, but I believe he doesn't go far enough.


At DVCon this year, Doug Amos took the stage for the lunch presentation sponsored by Mentor, a Siemens Business. For those of you who were there but decided to skip the lunch, expecting the traditional forced sales pitch, you made a mistake. Amos is one of those rare presenters who knows how to weave humor, teaching and marketing into a single presentation while keeping the separation clean, so the listener always knows which listening mode they are in. Too bad that he is about to retire.

Amos's presentation covered the differences between verification and validation, and explained why many of the additional tasks the industry now faces will rely on validation technologies far more than in the past. He looked at power, security and safety as three emerging areas and discussed how growing amounts of software are making all of this much more complicated.

During his presentation, he introduced various well-known brothers, such as the Ewings, the Royal family and the Jackson brothers. He humorously tied each of them to tasks or capabilities needed in either the flow or the tools. For example, he described the Jacksons as multiple engines working in close harmony, or Harry and William as the formal brothers.

There was one strong message that wove throughout the presentation. "The biggest elephant that could crush us all is the massive software content," said Amos. "Integrating hardware and software is tough. There is more software going into our systems. It is not just the hardware dependent software that has to be integrated, it is the entire stack. And this happens late in the project when the marketing guys are breathing down your neck to see if you are finished yet."

Amos noted that there are few standards to help with these tasks and that each company must work it out for themselves today. The common factor in all of the flows is FPGA prototyping because this is the only platform that has enough speed and accuracy to run a lot of software.

But let us step back and look at how Amos defined verification and validation.

Fig. 1: Verification and Validation. Source: Doug Amos, Mentor, DVCon 2018

Amos described it this way. "When you buy a shirt, you can verify several things about it. Has it got the right number of sleeves, is it the right size, is it the right color, does it have all of the buttons? These are all things that you can verify. Validation is more like asking if it fits. Can I drive in it comfortably? I do that by modeling a car and I 'do TV driving'. Does the color match my eyes, can I afford it, will my date be impressed by it – these are important things."

But the reason one list can be verified and the other only validated comes down to models. If we had a model of the body, we could easily verify whether the shirt would fit and how much tolerance it allows for maneuvering while driving. How could we model these things for verification?

Verification and validation are not just brothers: I believe they are identical twins separated during their childhood by a tragic accident. That accident was constrained random stimulus generation. While constrained random allowed aspects of verification to become automated and transformed the generation of directed testing into a machine-driven methodology, it defied the notion that verification is the act of comparing two models.
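The notion that verification is the act of comparing two models can be made concrete with a toy sketch. The following Python example is purely illustrative (a hypothetical saturating adder, not any real design or verification library): a golden reference model and an independent "design under test" are compared exhaustively, which is exactly what a model of the body would let us do for the shirt.

```python
# Hypothetical sketch: verification as comparing two models.
# Neither function comes from a real design; both are illustrative.

def reference_saturating_add(a, b, max_val=255):
    """Golden reference model: what the design is *supposed* to do."""
    return min(a + b, max_val)

def dut_saturating_add(a, b, max_val=255):
    """'Design under test': an independent implementation of the same spec."""
    s = (a + b) & 0x1FF          # 9-bit adder result, wide enough for 255+255
    return max_val if s > max_val else s

# Verification: exhaustively compare the two models over the input space.
mismatches = [(a, b) for a in range(256) for b in range(256)
              if reference_saturating_add(a, b) != dut_saturating_add(a, b)]

print(f"mismatches found: {len(mismatches)}")
```

With two independent models in hand, a mismatch on any input is a verification failure; with only one model, the best we can do is validate behavior against expectations.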

With constrained random, the second model became a mishmash of overlapping models, many of which were only a proxy for what they were trying to express. That is why we have no model of the body against which we can ascertain if the shirt will fit. Thankfully, we may be able to at least partially correct this problem. In the presentation, Amos quickly flipped past a slide that talked about Portable Stimulus. He presented it, as many others do, as a technology that allows tests to be ported from one platform to another. While this is important, it misses several key points.

First – Portable Stimulus will define a hardware/software abstraction layer that enables the verification environment to talk to a register description, a driver, or any layer of the software stack in a way that minimizes the user effort required. While I hear that this capability may not make it into the first version of the standard, it would be a great loss if it does not.

Second – Portable Stimulus defines what the product is actually meant to do. In the shirt example, it will define what "fit" means, because that is the primary objective of system-level verification. We assume that the buttons have been verified at the block level and that the structural assembly has been completed successfully. Now we ask: is it fit for purpose? That is both verification and validation.

Third – Constrained random has to be the most inefficient methodology ever defined when it comes to computer resources. As systems become more complex, a lot of time is spent working out what the testbench is doing and why. Portable Stimulus fixes many of those problems. Every test that it generates does something useful. Given the magnitude of the validation task, this is important. We cannot afford to be running tests that randomly wiggle unconnected things deep inside the design when we need to find out if the system is capable of performing tasks A and B at the same time and within a specified time.
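The inefficiency can be illustrated with a minimal, hypothetical Python sketch (all names here are invented for illustration and come from no real verification library). Constrained-random generation produces transactions that are individually legal, yet only a tiny fraction of whole tests ever stumble into a system-level scenario of interest, such as a write followed by a read of the same address:

```python
import random

random.seed(0)  # deterministic for the sake of the example

def gen_constrained_random(n):
    """Constrained-random stimulus: legal but undirected bus transactions.
    Constraints: word-aligned 16-bit addresses, legal operation kinds."""
    txns = []
    for _ in range(n):
        addr = random.randrange(0, 1 << 16) & ~0x3   # constraint: word-aligned
        kind = random.choice(["read", "write"])      # constraint: legal opcodes
        txns.append((kind, addr))
    return txns

def hits_scenario(txns):
    """Target system-level scenario: a read that follows a write
    to the same address within one test."""
    written = set()
    for kind, addr in txns:
        if kind == "write":
            written.add(addr)
        elif addr in written:
            return True
    return False

# Run many constrained-random tests and count how many exercise the scenario.
trials = [gen_constrained_random(20) for _ in range(1000)]
useful = sum(hits_scenario(t) for t in trials)
print(f"{useful} of 1000 constrained-random tests hit the scenario")
```

Every transaction satisfies its constraints, but almost no test lands on the write-then-read scenario, so nearly all of the simulation cycles are spent wiggling things that do not answer the question being asked. A scenario-level description, in the spirit of Portable Stimulus, would generate only tests that hit the target by construction.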

We are getting close to the time when we can start to define what the standards are for validation, and how verification and validation play together in helping each other out (also see Merging Verification and Validation). I am not saying that Amos is wrong, but he doesn't go far enough. Validation and verification are brothers, and one may well be larger than the other, but they are tied together much more closely than that – or at least they will be, once the industry starts to head along the right path again.
