Merging Verification With Validation

Are these really separate tasks, or just a limitation of tools and flows?

Verification and validation are two important steps in the creation of electronic systems, and over time their roles, and the way they play together, have been changing. In fact, today there is a major opportunity to rethink this aspect of the flow, which could mean the end of them as separate tasks for many of the chips being created.

As with many things in this industry, however, squeezing a task out of one part of the flow may just make it pop up in a different place, making it someone else’s responsibility.

First, we must define the terms. “If we define verification as the pre-silicon checking of an ASIC’s functionality against design intent, and validation as the post-silicon checking of a system’s functionality against market requirements, then we can understand why verification and validation have been treated as different tasks in the past,” says Michael Thompson, verification architect for Oski Technology. “Both technical and organizational reasons exist for this.”

Let’s start with verification. “Verification is a finite logical problem,” says Doug Amos, product marketing manager for Mentor, a Siemens Business. “We are asking the question, ‘What is the chance that my hardware has bugs?’ Modern coverage-driven techniques can give us a logical metric-based answer to that question, with the aim of getting as close to ‘no chance’ as possible. Our agony in verification is that we never reach ‘no chance,’ so we are left to decide when we are ‘close enough,’ and the tradeoff between conscience and pragmatism begins.”
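
As a rough illustration of what a metric-based answer looks like, the sketch below enumerates the conditions a test plan intends to hit, samples random stimulus, and reduces the result to a single coverage percentage. It is plain Python rather than any particular verification tool, and the ALU-style coverage model and all names are invented for illustration.

```python
# Minimal, tool-agnostic sketch of coverage-driven verification: enumerate
# the conditions you intend to hit, record which ones the stimulus actually
# exercised, and report a single metric that says how close you are to
# "close enough." Everything here is illustrative.
import random
from itertools import product

# Functional coverage model: every opcode crossed with every operand sign.
OPCODES = ["ADD", "SUB", "MUL"]
SIGNS = ["pos", "neg", "zero"]
coverage_bins = {bin_: 0 for bin_ in product(OPCODES, SIGNS)}

def sign(x: int) -> str:
    return "zero" if x == 0 else ("pos" if x > 0 else "neg")

def run_random_test(num_ops: int) -> None:
    """Drive random stimulus and record which coverage bins were hit."""
    for _ in range(num_ops):
        op = random.choice(OPCODES)
        operand = random.randint(-4, 4)
        coverage_bins[(op, sign(operand))] += 1
        # ...drive the design under test and check its response here...

run_random_test(num_ops=50)
hit = sum(1 for count in coverage_bins.values() if count > 0)
print(f"functional coverage: {100.0 * hit / len(coverage_bins):.1f}%")
```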

The “never” comes about because of engine limitations. “For multiple generations of ASIC devices, simulation has been a primary verification tool,” explains Thompson. “Simulation methodologies have not been up to the task of scaling their stimulus generation, response prediction and checking to a system-level environment, or the other way around.”

Additional engines became necessary. “Different types of design may require different verification strategies, tools and testing environments,” says Zibi Zalewski, general manager for Aldec’s Hardware Division. “Big SoC designs require complicated flows and multiple tools, including virtual prototype, simulator, emulator and prototyping board, ideally integrated together and scalable, with test reusability for different stages of verification.”

As a result, the need for validation has increased. So what is validation? “Validation is not so objective and logical,” warns Amos. “The ideal validation environment might involve building a series of progressively better versions of the design and exposing each to an infinite number of monkeys until one is sure that it is perfectly fit for purpose, and nothing can be broken—accidentally or deliberately. That’s the ideal. But in the same way that the ‘no chance’ verification ideal is never reached, neither is complete validation. This is partly owing to cost and time, those two omnipresent barriers to perfection in any project, but also largely because some aspects of validation success are measured against subjective criteria, not logical ones.”

A lot of progress has been made in both engines and methodologies. “People designed IP themselves, and then IP became the object of reuse. And then the sub-system becomes the object of reuse, which is an assembly of IP blocks and software – and which is also IP,” explains Frank Schirrmeister, senior group director of product management for the System & Verification Group at Cadence. “Complexity was a driver for that. Something similar is happening with respect to validation, specifically when it comes to the definition of intent. Intent used to be defined at the IP level. It moved up to the sub-system level. The definition of intent and the definition of requirements grows with complexity. Verification and validation are merging, or at least getting closer together, where the chip straddles the system and the board. But while it is doing that, the intent as you get toward systems of systems is much more complex to define.”


Fig. 1: Verification and validation come together. Source: Semiconductor Engineering.

Over time, what was validation has become verification. “What was typically a validation task will become part of the verification plan,” says Larry Melling, product management director in the System & Verification Group of Cadence. “It will shift left and we will be able to define what we need to do before we are willing to sign off.”

More sophistication is brought into validation until it becomes synonymous with verification. “In the past we were connecting everything up in an emulator or FPGA-prototype, loading the software, switching it on to see if it works,” says Schirrmeister. “That was the extent of validation. Now we are adding the smarts and defining the scenarios and defining the cases we need to validate. Is that validation or verification of the intent that had been specified? Where does verification stop and validation start?”

Portable Stimulus
The recent attention being given to Portable Stimulus is driving the integration of these two disciplines. “Portable stimulus is a powerful concept, aiming to allow chip design teams to formulate their verification and validation intent once, and then use that single specification at any stage of the development cycle,” says Rupert Baines, CEO for UltraSoC. “All of the EDA companies are embracing the idea of a verification/validation model that scales end-to-end across the development process, from design through simulation, emulation, and into silicon and system validation. Synopsys terms this the ‘verification continuum,’ but everyone acknowledges the need for it and is working to fix it.”

Tom Anderson, technical marketing consultant for OneSpin Solutions, takes this one step further. “A portable stimulus tool can, from a single high-level model, generate tests tuned for each verification and validation platform. These tests include the code to run on the embedded processors and, when needed, access the chip inputs and outputs via the available mechanisms for each platform. This makes it possible to check verified functionality on a validation platform, decreasing the amount of time needed to get production software running. It also allows bugs found during validation, or even during bring-up of fabricated chips, to be reproduced in simulation for easier debug.”
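
To make that retargeting idea concrete, here is a small sketch of how a single abstract scenario might be rendered either as testbench-level register sequences or as bare-metal C for an embedded processor. It is written in Python rather than the Accellera PSS language, and the DMA scenario, register names and emit functions are invented for illustration.

```python
# One abstract scenario, two targets: this is a sketch of the retargeting
# concept behind portable stimulus, not a real tool flow. Register names
# are symbolic placeholders.
SCENARIO = [
    ("write", "DMA_SRC", 0x1000),
    ("write", "DMA_DST", 0x2000),
    ("write", "DMA_CTRL", 0x1),      # kick off the transfer
    ("check", "DMA_STATUS", 0x1),    # expect 'done'
]

def emit_testbench_sequence(scenario):
    """Render the scenario as testbench-level register sequence calls."""
    lines = []
    for action, reg, value in scenario:
        if action == "write":
            lines.append(f"reg_write({reg}, {hex(value)});")
        else:
            lines.append(f"reg_check({reg}, {hex(value)});")
    return "\n".join(lines)

def emit_bare_metal_c(scenario):
    """Render the same scenario as C code running on the embedded core."""
    lines = []
    for action, reg, value in scenario:
        if action == "write":
            lines.append(f"*(volatile uint32_t *){reg} = {hex(value)};")
        else:
            lines.append(f"if (*(volatile uint32_t *){reg} != {hex(value)}) fail();")
    return "\n".join(lines)

print(emit_testbench_sequence(SCENARIO))
print(emit_bare_metal_c(SCENARIO))
```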

The act of writing something down can make a difference. “Portable Stimulus means you have a definition of intent that is written down in an executable form and is repeatable,” says Schirrmeister. “That makes it verification. Just like the written spec used to be the reference for the RTL and you could verify that with a testbench, now you are transforming what used to be validation in the past at the SoC level and it becomes verification. Now validation moves to a higher-level where I am putting them together at the multi-chip level.”

This provides a rich validation suite that can be executed pre-silicon. “A lot of validation suites had classically been proof of life,” says Melling. “You go through and address all of the components and make sure you can talk to them, read and write their registers. With Portable Stimulus, we can run coherency tests and power management tests, and crosses of these activities. Now, when we do get first silicon, you have greatly increased the chance of success.”
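
The toy model below hints at what such a cross might look like: a coherent write from one agent, a power-management event on another, and a check that the data is still visible afterwards. The two-agent System class is a stand-in written for this sketch, not a real platform API.

```python
# A toy cross of coherency traffic with a power-management event.
# This behavioral model stands in for a real multi-agent SoC.
class System:
    def __init__(self):
        self.mem = {}
        self.cluster_on = [True, True]

    def write(self, agent, addr, data):
        assert self.cluster_on[agent], "agent is power-gated"
        self.mem[addr] = data          # coherent write, visible to all agents

    def read(self, agent, addr):
        assert self.cluster_on[agent], "agent is power-gated"
        return self.mem.get(addr)

    def power(self, agent, on):
        self.cluster_on[agent] = on    # model cluster power down/up

sys_model = System()
sys_model.write(agent=0, addr=0x80, data=0xCAFE)   # agent 0 writes
sys_model.power(agent=1, on=False)                 # power-cycle agent 1
sys_model.power(agent=1, on=True)
assert sys_model.read(agent=1, addr=0x80) == 0xCAFE, "coherency lost across power cycle"
print("coherency/power cross passed")
```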

Engine extensions
One of the drivers for Portable Stimulus is the range of engines now available for verification, some of which were only used for validation in the past. “FPGA-based prototyping and emulation straddle each other,” says Schirrmeister. “They were used for verification and validation but there is a gray area where they overlap. We are approaching an age where they go hand in hand and where they become connected. If you look at the DNA of validation and compare that to the DNA of verification, they are very similar. But they require different speed and different debug capabilities.”

“There has been an explosion in emulation and FPGA prototyping because the software takes too long,” says Mike Gianfagna, vice president of marketing at eSilicon. “This affects bring-up, which is basically a two-phase approach. One part of that is verifying, ‘Does it wiggle?’ If it does, can you run it in scan-chain mode and run it at speed? After that, you have a horrendously long validation cycle.”

Verification and validation are coming together with some aspects of test, as well. “When bringing in the real silicon, it comes together with what people do in the test domain,” says Schirrmeister. “Think about Design to Test – later in the lab they become tests that you run in National Instruments LabVIEW. When you bring that together with portable stimulus, it brings validation/test together with verification.”

Not everyone is willing to go this far. “Verification and Validation are not about what engines are available, but rather they are all about what tasks need to be done,” says Amos. “Engines such as emulation can indeed accurately run software, but not all of it owing to a lack of execution speed. Even FPGA prototypes, the fastest pre-silicon engine, may lack that required speed for full stack operation. Often hybrid techniques are used; exiling some of the system functionality into a transaction level approximation, such as an Arm Fast Model. In this way, an emulator can run enough of the software to do some serious hardware-software co-validation. Taken to an extreme, the whole system can be modelled at the transaction level, creating a virtual prototype, which is not just a pre-silicon but also a pre-RTL approximation as well; in fact, an excellent environment in which to validate user interfaces and aesthetics.”

Reaching into software
It is when the subject of software is introduced that it starts to get more complex. “Consider a software stack, with the lowest levels such as the boot code or the Board Support Package being most closely dependent upon the hardware functionality—and the upper levels of the stack, such as the application and user space, being completely divorced from it,” says Amos. “Typically, verification requires just enough of the software stack in order to exercise the hardware, whereas validation needs all of it—the full chip, the whole stack.”

While the Portable Stimulus committee has been working on this problem, not everything needed has made it into the standard yet. “One capability specified in the requirements document is the need for a Hardware/Software Interface (HSI) abstraction,” says the CEO of Breker. “It is not in the latest draft, but it needs to be before this problem can be fully solved.”

“HSI isn’t in there, but that is not a limiter to the 1.0 spec,” says Melling. “There is a facility to plug in whatever drivers and sequences you have. HSI is about what automation we need to make the job of creating those drivers easier, and to have portability in terms of describing it once and having UVM sequences or C-code drivers or whatever is needed for the testing you want to do. We are not stalled from doing testing at those levels; we just have to plug in a sequence or a driver that enables it. With 1.0 we are focused on the validation of hardware, and not the validation of the hardware/software system.”

Beyond functionality
Unfortunately, life is messy. “Eventually, all systems have to deal with the analog world, where other unpredictable and possibly malicious systems (and people) may break even the most heavily verified design,” says Amos. “Verification alone is not enough. The more complete and accurate one’s validation scenarios, the less likely one’s design is going to be floored during the first round.”

Also, verification may just be too clean and predictable. “Oftentimes, the real issues — whether they are show-stopping bugs or more subtle problems that reduce performance or increase power consumption by just a few percentage points — will only surface in real silicon,” says UltraSoC’s Baines. “They show up with actual code running at full speed with real world interfaces.”

Amos notes that the biggest drivers for validation today are safety and security.

“Functional safety requirements are part of the system specification, including how the end product must respond to random errors, such as an alpha particle flipping a memory bit,” says Anderson. “A formal fault injection tool can analyze the design to determine which faults will not affect operation and are therefore safe, and which can propagate and cause trouble. These faults must be detected, and either be corrected or raise an alarm so that the system can take appropriate remedial action. Thus, a critical aspect of validation can be formally checked during the verification phase, in parallel with simulation.”
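
In spirit, that analysis asks, for each injected fault, whether any input can ever make the faulty design diverge from the good one. The sketch below does this exhaustively for a toy majority voter with an invented spare node; real formal tools work on the RTL itself, but the safe-versus-dangerous classification follows the same idea.

```python
# Classify stuck-at faults as safe (never observable) or dangerous
# (can propagate to an output). The voter circuit and node names are
# invented for illustration.
from itertools import product

def voter(a, b, c, fault=None):
    """2-of-3 majority voter; 'fault' forces one internal node to a value."""
    nodes = {
        "ab": a & b,
        "bc": b & c,
        "ca": c & a,
        "spare": a ^ b,                # debug-only node, never drives the output
    }
    if fault:
        nodes[fault[0]] = fault[1]     # inject the stuck-at fault
    return nodes["ab"] | nodes["bc"] | nodes["ca"]

def classify(fault):
    """Dangerous if some input vector makes the faulty output differ."""
    for a, b, c in product([0, 1], repeat=3):
        if voter(a, b, c) != voter(a, b, c, fault=fault):
            return f"dangerous: propagates for input {(a, b, c)}"
    return "safe: never observable"

for node in ["ab", "bc", "ca", "spare"]:
    for stuck in [0, 1]:
        print(f"{node} stuck-at-{stuck}: {classify((node, stuck))}")
```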

Facilitating the future
“It is often said that bringing verification and validation together will reduce the overall time to market of the system, if not the ASIC,” says Oski’s Thompson. “If we assume that is true, then it would seem that Portable Stimulus may add value if it can facilitate porting stimulus from the system level down to the ASIC level and perhaps to the block level. Note that this is the opposite direction from what most people assume. In order for this to happen, the industry needs to find a way to get all system components ready before the ASIC verification team gets their hands on the RTL code.”

One emerging approach is to incorporate embedded monitoring into hardware, which would enable greater visibility into the operation of silicon. The associated analytics systems would provide valuable insights that allow engineers to fine-tune performance and understand complex hardware/software interactions. “The ability to collect and utilize data from field trials and even in-life operation provides a powerful feedback loop,” says Baines. “This ties the whole development flow together, from top-to-bottom and end-to-end, not only making the systems and software engineering tasks easier, but also providing actionable insights back to the silicon team. That allows them to fix bugs in the silicon or IP itself, and to performance-enhance or cost-reduce future generations of chips.”

Some people choose to look out even farther into the future. “The question we’re starting to ask is whether any or all of that can be shortened with deep learning?” asks eSilicon’s Gianfagna. “This would be a good tool for these kinds of issues. Deep learning takes the general problem, which is that more and more system designers need silicon to make their dream come true, and then they can map algorithms to silicon.”

What is clear is that certain aspects of validation are being consumed by the verification team. Will this lead to the elimination of the validation team? That is far less likely. What is more probable is that it will free the validation team up to concentrate on problems associated with safety and security, and to take on increasingly large aspects of integration.



3 comments

Dustin Aldridge says:

I certainly do not get general agreement on these definitions, and they are not from the semiconductor industry, but they work for me to separate the terms.
Verification: Processes used to provide objective evidence that a product or system has the potential to satisfy the requirements for a specific intended use and application.
Qualification: Processes used to provide objective evidence that a product or system satisfies the requirements for a specific intended use and application, including sources of variation.
Validation: A formal declaration with legal weight that the product, design, process, etc., satisfies the requirements for a specific intended use and application including variation.

Brian Bailey says:

Thanks Dustin. In the past I have used the IEEE definitions, but I think you are right that we are going through a period of change in what these definitions mean, especially in the light of safety and security requirements. It would appear that qualification has replaced what used to be closer to validation and validation has taken the previous definition a lot further. I would be interested to hear other opinions about this.

Peter Salmon says:

In the late 1970s I worked at Intel on a video game chip. We were running out of time to test this chip using the usual procedures, including the use of an expensive Sentry 600 test system. So I set up a probe station to exercise the chip by playing the video game. I confirmed all of the important modes of the chip and also identified an error. I fixed the error by cutting a couple of traces under a microscope, using the tungsten probe tip, and adding a few gates of external TTL logic to reinject the signal. Then I verified the fix by playing the game again.
This stimulated my thinking about validation of complex systems. I filed US Patent 7,505,862 to cover a test system comprising a powerful test chip embedded on the system board. It also covered a quick way to create system level test vectors. To this day, I think the method is worth considering for system level testing, where time to market is critical.
