Knowledge Center

Functional Verification

Functional verification is used to determine if a design, or unit of a design, conforms to its specification.

Description

The cost to create a new design is very large. In the earliest days of chip design it was possible to get a design right the first time, but size and complexity soon grew to the point where it became very likely an engineer would make some mistakes. To locate those mistakes, we have to exercise the design and check that it responds in the correct way. This process is an act of verification. When a problem is found, we move into a debug phase, where we come to understand what caused the problem and decide how to fix the design. Then we return to verification until we gain enough confidence that no more bugs are left in the design.

Today, system complexity is such that it is not possible to exercise all aspects of a design, and thus we have to be very selective in the verification that is performed. It is now common for more time, people, and expense to be applied to verification than to the design itself, and even with this large expenditure, most designs are first fabricated with several bugs still in them. With any luck these bugs do not completely stop the design from working, and workarounds can be found such that a first release of the product is possible.

In the early days of verification, things were very informal. Verification normally involved exercising a model and inspecting the waveforms produced. The designer would decide whether they were right or wrong and, if right, move on to the next experiment. This is no longer possible, both because of the sheer number of tests and because it is too difficult to rerun every experiment manually to see whether anything has changed. Today, verification has become much more structured.

Fundamentally, verification is the comparison of two models. The argument goes that if those two models were derived independently of each other, and they functionally match, there is a good chance that both are right. So what do we mean by independently? We assume that two different people each read the specification and wrote their model without directly discussing it with each other. Those two people are generally from the design team and the verification team, respectively.

Why should the models be written without “directly discussing it”? If there is a problem with the specification, such as an ambiguity in the way it is defined, you want that problem to become apparent during the verification process. You do not want the way each person writes their model to be influenced by that discussion, unless it directly results in a clarification of the specification. If they find a problem in the specification, discussing it and correcting the specification so that the ambiguity is removed is fine; in fact, that is itself an act of verification, namely an informal review of the specification.

There are two primary ways in which the models are compared and these are generally called static and dynamic verification methods.

Dynamic verification is the most common approach and uses a simulator, emulator, or prototype. These methods exercise the model by sending sample data into it and checking the outputs to see what the model did. If we send in enough input data, confidence grows that the model will always do the right thing. The input data stream, usually called stimulus, is constructed so that it takes the model into various parts of the code, and we compare the output data, usually called the response, between the two models. When a difference is found, it means the design model is incorrect, the verification model is incorrect, or, as we have already implied, there is a problem with the specification.

Each set of data sent into the models is called a test, and the comparison between the two models is done by a checker. The stimulus can be constructed to take the design into a specific area of functionality, normally called a directed test, or it can be based on randomized data streams. We do not just send in arbitrary random data, which is unlikely to fully exercise the model. Instead, the randomization is controlled, and this is usually referred to as pseudo-random, constrained-random, or directed-random testing.
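The stimulus/checker flow described above can be sketched in a few lines. This is a minimal, hypothetical illustration in Python rather than a hardware verification language such as SystemVerilog: the 8-bit saturating adder, the bias values, and all function names are invented for this example. The two models stand in for the independently written design and verification models, the generator produces constrained-random stimulus biased toward a corner case, and the checker compares the responses.

```python
import random

def reference_model(a, b):
    """Verification (golden) model: 8-bit add that saturates at 255."""
    return min(a + b, 255)

def design_model(a, b):
    """Design under test: an independently written implementation."""
    total = (a + b) & 0x1FF          # 9-bit intermediate result
    return 255 if total > 255 else total

def constrained_random_stimulus(n, seed=0):
    """Pseudo-random stimulus: constrained to legal 8-bit inputs and
    biased toward the saturation boundary (a hypothetical constraint)."""
    rng = random.Random(seed)        # seeded, so tests are reproducible
    for _ in range(n):
        if rng.random() < 0.3:       # bias toward near-overflow corner cases
            yield rng.randint(200, 255), rng.randint(200, 255)
        else:
            yield rng.randint(0, 255), rng.randint(0, 255)

def run_tests(n=1000):
    """Checker: compare the two models on every stimulus item."""
    failures = []
    for a, b in constrained_random_stimulus(n):
        expected = reference_model(a, b)
        actual = design_model(a, b)
        if actual != expected:
            failures.append((a, b, expected, actual))
    return failures

print(f"{len(run_tests())} mismatches found")   # prints: 0 mismatches found
```

A mismatch would mean one of the two models (or the specification itself) is wrong; which one is wrong is exactly what the debug phase then has to determine.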

Static verification, often called formal verification, is a mathematical proof that the two models are identical under all conditions. No stimulus is necessary, because all possible stimuli are considered. The second model (the verification model) is usually constructed in a different way, using what are called properties. A property defines a behavior that the design must exhibit; alternatively, it may define something that must never happen.

An additional description, required by both verification methodologies, is a set of constraints that limits verification to a legal set of states, inputs or activities.
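The property and constraint ideas can be sketched as follows. This is again a hypothetical Python illustration, not a real formal tool flow: the 4-bit counter, its enable input, and the property names are invented. Because the state and input spaces are finite and small, checking every combination exhaustively plays the role of a proof over all possible stimuli, and the `legal_states` function plays the role of a constraint restricting verification to legal states.

```python
def counter_next(state, enable):
    """Next-state function of a hypothetical 4-bit counter that wraps at 15."""
    return (state + 1) % 16 if enable else state

def legal_states():
    """Constraint: verification is limited to the legal 4-bit state space."""
    return range(16)

def check_properties():
    """Check two properties over ALL legal states and ALL inputs:
    1. the counter never leaves the legal state space (must always hold);
    2. with enable low, the state never changes (a 'must never happen'
       property: no spurious counting)."""
    for state in legal_states():          # every state, not a sample
        for enable in (False, True):      # every input value
            nxt = counter_next(state, enable)
            assert 0 <= nxt <= 15, "state escaped the legal range"
            if not enable:
                assert nxt == state, "state changed while disabled"
    return True

print(check_properties())   # exhaustive over the whole input space
```

Real formal tools reason symbolically rather than by enumeration, which is what lets them handle state spaces far too large to walk through one state at a time.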

Verification is an attempt to maximize the quality of the design in the most effective manner, and the ways in which this is done will differ between companies and sometimes between individual products. For example, the quality level required for a cheap child’s toy is far lower than that required for an implantable medical device. This means there is no one right way to do verification, and many people have called it an art rather than a science.


Multimedia

Heterogeneous Computing Verification
Planning Out Verification
Tech Talk: Improving Verification
Tech Talk: Faster Simulation
Tech Talk: Verification
Tech Talk: eFPGA Verification
Tech Talk: SoC Protocol Debug
Tech Talk: Power Emulation


Related Technologies