Functional Verification

The cost of creating a new design is large; very large. In the very early days of chip design it was possible to get a design right the first time, but design size and complexity soon grew to the point where it became very likely that an engineer would make mistakes. To locate those mistakes we have to exercise the design and see whether it responds in the correct way. This is an act of verification. When a problem is found, we move into a debug phase in which we work out what caused the problem and decide how to fix the design. We then go back to verification until we gain enough confidence that no more bugs are left in the design. Today, system complexity is such that it is not possible to exercise every aspect of a design, so we have to be very selective about the verification that is performed. It is not unusual today for more time, people, and money to be spent on verification than on the design itself, and even with this large expenditure, most designs are first fabricated with several bugs still in them. With luck, these bugs do not completely stop the design from working, and workarounds can be found so that a first release of the product is possible.

In the early days of verification, things were very informal. Verification normally involved exercising a model and looking at the waveforms produced. The designer would decide whether the results were right or wrong and, if right, move on to the next experiment. This is no longer possible because of the number of tests required and because it is too difficult to rerun the experiments to see whether anything has changed. Today, verification has become much more structured.

Fundamentally, verification is the comparison of two models. The argument goes that if those two models were derived independently of each other and they functionally match, there is a good chance that they are both right. So what do we mean by independently? We assume that two different people each read the specification and wrote their model without directly discussing it with each other. Those two people generally come from the design team and the verification team.

Why should the models be written without “directly discussing it”? If there is a problem with the specification, such as an ambiguity in the way something is defined, you want that problem to become apparent during the verification process. You do not want the way each person writes their model to be influenced by such a discussion, unless it directly results in a clarification of the specification. If they find a problem in the specification, discuss it, and correct the specification so that the ambiguity is removed, that is fine. In fact, that is itself an act of verification: an informal review of the specification.

There are two primary ways in which the models are compared and these are generally called static and dynamic verification methods.

Dynamic verification is the most common approach and utilizes a simulator, emulator, or prototype. These methods exercise the model by sending sample data into it and checking the outputs to see what the model did. If we send in enough input data, confidence grows that the model will always do the right thing. The input data stream, usually called the stimulus, is constructed so that it takes the model into different parts of the code, and we compare the output data, usually called the response, between the two models. When a difference is found, it means either that the design model is incorrect, that the verification model is incorrect, or, as we have already implied, that there is a problem with the specification. Each set of data sent into the models is called a test. The comparison between the two models is performed by a checker. The stimulus can be constructed to take the design into a specific area of functionality, normally called a directed test, or it can be based on randomized data streams. We do not just send in any random data, since that is unlikely to fully exercise the model; instead the randomization is controlled, and this is usually referred to as pseudo-random, constrained-random, or directed-random testing.
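As a rough sketch of this flow, assuming a trivial example design (a saturating 8-bit adder), the following shows a reference model, a design model, constrained-random stimulus, and a checker. All names here are hypothetical stand-ins for a real design and testbench.

```python
import random

def reference_add(a, b):
    """Verification model: 8-bit saturating add, written from the spec."""
    return min(a + b, 255)

def design_add(a, b):
    """Design model under test, written independently from the same spec."""
    total = a + b
    return 255 if total > 255 else total

def constrained_random_test(num_tests=1000, seed=1):
    """Constrained-random stimulus plus a checker comparing responses.
    The constraint keeps operands in the legal 8-bit range, and the
    randomization is biased toward the saturation corner case."""
    rng = random.Random(seed)
    for _ in range(num_tests):
        if rng.random() < 0.3:                       # bias: exercise values near overflow
            a, b = rng.randint(200, 255), rng.randint(200, 255)
        else:                                        # otherwise any legal 8-bit operand
            a, b = rng.randint(0, 255), rng.randint(0, 255)
        expected = reference_add(a, b)               # response of the verification model
        actual = design_add(a, b)                    # response of the design model
        assert actual == expected, f"mismatch for a={a}, b={b}: got {actual}, expected {expected}"
    print(f"{num_tests} tests passed")

constrained_random_test()
```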

Static verification, often called formal verification, constructs a mathematical proof that the two models match under all conditions. No stimulus is necessary, because all possible stimuli are considered. The second model (the verification model) is usually constructed in a different way, using what are called properties. A property defines a behavior that must be exhibited by the design; alternatively, it may define something that must never happen.

An additional description, required by both verification methodologies, is a set of constraints that limits verification to a legal set of states, inputs or activities.
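As a rough sketch of the idea, the following exhaustively explores every reachable state of a hypothetical two-way arbiter under every legal input and checks a safety property on each; for a finite model this amounts to a proof, and the constraint here is simply that each request input is 0 or 1. This is an illustration, not a real formal tool.

```python
from itertools import product

def next_state(last_grant, req0, req1):
    """Hypothetical round-robin arbiter: remembers who was granted last."""
    if req0 and req1:
        return 1 - last_grant      # alternate when both request
    if req0:
        return 0
    if req1:
        return 1
    return last_grant              # no request: state unchanged

def grants(last_grant, req0, req1):
    """Grant outputs for the current cycle."""
    winner = next_state(last_grant, req0, req1)
    return (req0 and winner == 0), (req1 and winner == 1)

def check_mutual_exclusion():
    """Safety property: grant0 and grant1 are never asserted together.
    Constraint: each request input may legally be 0 or 1.
    Exhaustive search over all reachable states and all legal inputs."""
    reachable, frontier = {0}, [0]
    while frontier:
        state = frontier.pop()
        for req0, req1 in product((0, 1), repeat=2):
            g0, g1 = grants(state, req0, req1)
            assert not (g0 and g1), "property violated: both grants asserted"
            nxt = next_state(state, req0, req1)
            if nxt not in reachable:
                reachable.add(nxt)
                frontier.append(nxt)
    print("property holds on all", len(reachable), "reachable states")

check_mutual_exclusion()
```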

Verification is an attempt to maximize the quality of the design in the most effective manner, and the way this is done will differ for each company and sometimes for each individual product. For example, the quality level required for a cheap child’s toy is lower than that required for an implantable medical device. This means there is no single right way to do verification, and many people have called it an art rather than a science.

Formal Verification

There are several types of formal methods used to verify a design.

The first is equivalence checking. This takes two designs, which may be at the same or different levels of abstraction, and finds functional differences between them. Equivalence checking can be further broken down into two principal domains: combinatorial equivalence checking, often just called equivalence checking, and sequential equivalence checking.

Combinatorial equivalence checking is used to show that a design after logic synthesis is functionally the same as the input RTL description. It is also used to show that no functionality has been affected when test logic is inserted or other gate-level transformations are made. It works by comparing the combinatorial logic between corresponding registers in the two descriptions.
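As a toy illustration of the idea, the following compares an “RTL-style” and a “gate-style” description of a 3-input majority function over every input pattern. The function names are hypothetical, and production equivalence checkers use canonical forms or SAT techniques rather than brute-force enumeration, which is feasible here only because the input space is tiny.

```python
from itertools import product

def rtl_majority(a, b, c):
    """'RTL' view of a 3-input majority function."""
    return (a + b + c) >= 2

def gate_majority(a, b, c):
    """'Gate-level' view after a hypothetical synthesis step."""
    return (a and b) or (a and c) or (b and c)

def combinational_equivalence(f, g, num_inputs):
    """Exhaustively compare two combinational functions over all input patterns."""
    for pattern in product((False, True), repeat=num_inputs):
        if bool(f(*pattern)) != bool(g(*pattern)):
            return False, pattern      # counterexample found
    return True, None

equivalent, counterexample = combinational_equivalence(rtl_majority, gate_majority, 3)
print("equivalent" if equivalent else f"mismatch at {counterexample}")
```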

Sequential equivalence checking is an emerging technology that can compare two designs with fundamentally different timing. It enables an untimed or partially timed model to be compared against an RTL model, or two RTL models to be compared when one has undergone retiming or another transformation intended to improve power or other design qualities. This is a necessary technology for the mass adoption of high-level synthesis.
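As a toy illustration of the concept, the following compares a hypothetical untimed reference against a one-cycle pipelined version of the same computation, aligning outputs for the pipeline latency and enumerating every short input sequence. Real sequential equivalence checkers prove this formally rather than by enumeration; the models and names here are invented for illustration.

```python
from itertools import product

def untimed_model(inputs):
    """Untimed reference: one result per input, available immediately."""
    return [x * 3 for x in inputs]

def pipelined_model(inputs):
    """'RTL-like' model with a one-cycle pipeline register: the result for
    each input appears one cycle later."""
    reg, outputs = None, []
    for x in inputs:
        outputs.append(reg)   # output this cycle is last cycle's computation
        reg = x * 3
    outputs.append(reg)       # flush the pipeline
    return outputs

def sequentially_equivalent(max_len=5, alphabet=range(4)):
    """Exhaustively compare both models over every input sequence up to
    max_len, aligning the pipelined output by its one-cycle latency.
    Feasible only because the input space is tiny."""
    for length in range(1, max_len + 1):
        for seq in product(alphabet, repeat=length):
            ref = untimed_model(seq)
            imp = pipelined_model(seq)[1:]   # drop the initial, undefined cycle
            if ref != imp:
                return False
    return True

print("equivalent under latency alignment:", sequentially_equivalent())
```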

The other main branch of formal verification is property checking. A property defines a behavior that must be present in a design, or a behavior that must not be possible. A third option, called a fairness condition, ensures that each of several competing options is not treated unfairly, in arbitration for example.

In some cases it may be difficult to prove a property, and instead of a full proof, a bounded proof is all that is possible. A bounded proof may say that the behavior is, or is not, possible within a defined number of clock cycles, from a specific starting state, or when certain constraints are applied to the system.
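A rough sketch of a bounded check follows, using a hypothetical 4-bit counter: the property “the count never reaches 12” is checked over every possible input sequence for a fixed number of cycles from a given starting state. Holding within the bound says nothing about behavior beyond it, which is exactly the limitation of a bounded proof.

```python
def step(count, enable):
    """Hypothetical 4-bit counter: increments when enabled, wraps at 16."""
    return (count + 1) % 16 if enable else count

def bounded_check(start_state, depth):
    """Bounded proof sketch: check that the counter never reaches 12 within
    `depth` clock cycles of the given starting state, over every possible
    input sequence."""
    states = {start_state}
    for cycle in range(depth):
        next_states = set()
        for count in states:
            for enable in (0, 1):          # both legal input values each cycle
                new = step(count, enable)
                if new == 12:
                    return False, cycle + 1
                next_states.add(new)
        states = next_states
    return True, depth

ok, cycles = bounded_check(start_state=0, depth=8)
print("holds within bound" if ok else f"violated at cycle {cycles}")
```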

Multimedia

Which Verification Engine When
Formal Signoff
Heterogeneous Computing Verification
Formal Datapath Verification
Planning Out Verification