Testbench

Software used to functionally verify a design

Description

The term testbench comes from the days where electronics was tested on a bench using various pieces of test equipment, such as signal generators, oscilloscopes, logic analyzers etc. While some of this is still done for system level verification and high-speed verification, almost all functional verification is now performed in a virtual world using simulators.
The design being tested is usually referred to as the design-under-test (DUT). Many people in the industry do not like using the term test because of confusion with the notion of manufacturing test, where a device is tested to see if it was manufactured correctly. The act of verification is one of attempting to find out whether a design will perform according to a specification, and those people would therefore prefer to refer to it as the design-under-verification (DUV). Accordingly, the testbench is called a verification environment. All of these terms are in common usage.

Minimum testbench components and directed test

The minimum requirements of a verification environment are to inject stimulus into the design and collect results. In the simplest case, the inputs and outputs are held in files and all processing is performed outside of the verification environment. When the stimulus has been created ahead of time it is generally referred to as directed test. That is, each test is directed towards the verification of defined functionality. The results are checked against a golden set of vectors known to be the correct output.
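
To make this concrete, below is a minimal sketch in Python of a directed test flow. The design is represented by a trivial stand-in function and the stimulus and golden vectors are invented for the example; a real flow would apply the stimulus to a simulated design and read the vectors from files.

```python
# A minimal sketch of a directed test: pre-written stimulus is applied to the
# design and each result is compared against a golden reference vector.

def dut(a, b):
    # Hypothetical design-under-test: a simple adder used as a placeholder.
    return a + b

stimulus = [(1, 2), (3, 4), (7, 8)]   # pre-written directed stimulus
golden   = [3, 7, 15]                 # expected ("golden") outputs

def run_directed_test(stimulus, golden):
    failures = 0
    for inputs, expected in zip(stimulus, golden):
        result = dut(*inputs)
        if result != expected:
            failures += 1
            print(f"MISMATCH: inputs={inputs} got={result} expected={expected}")
    print(f"{len(stimulus)} vectors applied, {failures} mismatches")
    return failures == 0

if __name__ == "__main__":
    run_directed_test(stimulus, golden)
```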

Directed test is tedious to create and costly to maintain. Every time a change is made to the design, the golden output is likely to change. This in turn means that the new output has to be manually scanned to see if it is also correct. As designs have become larger and more complex, creating directed tests has become an almost impossible task and reaching closure has become very difficult.

Coverage and closure
Perhaps the most important question that a verification team has to ask is: when am I done? While this may sound like a simple question, it is almost impossible to answer precisely. First, a modern system has an almost infinite number of states that it can be in, and visiting each of those states in a simulator, which may run many orders of magnitude slower than real time, would take a lifetime. At the same time, economics dictates that a product that has been verified exhaustively is unlikely to be cost effective and is also probably late to market. Thus, completeness of verification has to be balanced against cost.

Coverage is a way to measure progress towards a verification goal, and there are many coverage models to choose from. It is almost impossible to reach 100% coverage for many of those models, so closure is the process of deciding whether sufficient coverage has been reached, whether the initial goals were reasonable, and whether the cases not covered are acceptable. The objective is to minimize the risk that a defect still exists in the design that may cause problems in the final product.
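
As an illustration, the following Python sketch shows one very simple functional coverage model. The bin names and value ranges are assumptions made purely for the example; in practice coverage models are written in the verification language or defined by the tool being used.

```python
# A minimal sketch of functional coverage: values observed during simulation
# are sampled into bins, and coverage is the fraction of bins hit at least once.

class CoverGroup:
    def __init__(self, bins):
        self.bins = bins                              # bin name -> predicate
        self.hits = {name: 0 for name in bins}

    def sample(self, value):
        for name, predicate in self.bins.items():
            if predicate(value):
                self.hits[name] += 1

    def coverage(self):
        covered = sum(1 for name in self.hits if self.hits[name] > 0)
        return 100.0 * covered / len(self.bins)

# Illustrative bins for a hypothetical burst-length signal.
burst_len = CoverGroup({
    "single": lambda v: v == 1,
    "short":  lambda v: 2 <= v <= 4,
    "long":   lambda v: v > 4,
})

for observed in [1, 3, 2, 1]:          # values seen during a test run
    burst_len.sample(observed)

print(f"coverage = {burst_len.coverage():.1f}%")   # "long" bin never hit
```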

Checking, Assertions and Constrained Random
It has already been mentioned that performing a check against a golden model is problematic. While showing that two things are identical is straightforward, analyzing differences to see if they are important is more difficult. In addition, directed testing began to be replaced by constrained random test pattern generation. This has the ability to generate large vector sets, but it is unknown ahead of time what any particular test is targeting. This significantly changed the role of the verification environment. The checker now has to verify that any output created, given a random set of inputs, is correct, and the only way to do this is with a second model of the intended functionality. This second model is usually at a higher level of abstraction than the design because it does not have to define how something is to be achieved, only what the result should be. Sometimes that model may come from an earlier stage of the design process, although care has to be taken with this, and other times it is written by the verification team.
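
The sketch below illustrates the idea in Python. The saturating adder standing in for the design, the constraint, and the reference model are all assumptions for the example; in a real environment the reference model would be a more abstract description of intent and the constraints would be far richer.

```python
import random

# A minimal sketch of constrained random stimulus checked against a reference
# model rather than against pre-computed golden vectors.

def dut_saturating_add(a, b):
    # Stand-in for the design: an 8-bit saturating adder.
    return min(a + b, 255)

def reference_model(a, b):
    # Higher-level model of intent: defines *what* the result should be,
    # not how the hardware computes it.
    return min(a + b, 255)

random.seed(1)
for _ in range(1000):
    # Constraint: operands are legal 8-bit values, biased towards corner cases.
    a = random.choice([0, 255, random.randrange(256)])
    b = random.choice([0, 255, random.randrange(256)])
    assert dut_saturating_add(a, b) == reference_model(a, b), (a, b)

print("1000 constrained random vectors checked against the reference model")
```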

The model is often split into several parts. The checker may be divided into a checker and a scoreboard. The role of the scoreboard is, for example, to keep track of packets moving through a system. Given that a packet was injected into the system, the abstract model may not know exactly when it should come out, only that it should come out eventually and what changes could legitimately be made to it. Thus the packet is put onto the scoreboard when it is injected, and the checker looks for and removes the corresponding packet when it comes out of the system. Any packet left on the scoreboard at the end of simulation could be a dropped packet. This example shows how the scoreboard and the checker work together to verify the system outputs.
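
A minimal Python sketch of that interaction is shown below. The packet fields and the checks performed are assumptions made purely for illustration.

```python
# A minimal sketch of a scoreboard working with a checker: packets are added
# when they enter the DUT, matched and removed when they leave, and anything
# left over at the end of simulation is flagged as possibly dropped.

class Scoreboard:
    def __init__(self):
        self.expected = {}                       # packet id -> injected packet

    def add(self, packet):
        self.expected[packet["id"]] = packet     # packet entered the DUT

    def check(self, packet):
        # Called by the checker when a packet leaves the DUT.
        injected = self.expected.pop(packet["id"], None)
        if injected is None:
            print(f"ERROR: unexpected packet {packet['id']}")
        elif packet["payload"] != injected["payload"]:
            print(f"ERROR: payload corrupted for packet {packet['id']}")

    def report(self):
        for pid in self.expected:
            print(f"ERROR: packet {pid} never came out (possibly dropped)")

sb = Scoreboard()
sb.add({"id": 1, "payload": "abc"})
sb.add({"id": 2, "payload": "def"})
sb.check({"id": 1, "payload": "abc"})            # packet 1 observed at the output
sb.report()                                      # packet 2 flagged as dropped
```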

Another way to verify functionality is with assertions. These define rules that must be obeyed. For example, at a road intersection, no two lanes of traffic are allowed to be green at any point in time. If that situation ever occurs during a simulation, an error is reported.
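
A simple per-cycle check of that rule might look like the following Python sketch, where the traced signal values are invented samples standing in for simulator output; in practice such rules are usually written as assertions in a hardware verification language.

```python
# A minimal sketch of an assertion evaluated every cycle: no two lanes of the
# intersection may be green at the same time.

def assert_mutually_exclusive_green(lights, cycle):
    greens = [lane for lane, colour in lights.items() if colour == "green"]
    assert len(greens) <= 1, f"cycle {cycle}: lanes {greens} are green together"

# Made-up trace of lane colours over three cycles.
trace = [
    {"north_south": "green", "east_west": "red"},
    {"north_south": "red",   "east_west": "green"},
    {"north_south": "red",   "east_west": "red"},
]

for cycle, lights in enumerate(trace):
    assert_mutually_exclusive_green(lights, cycle)

print("assertion held on every cycle of the trace")
```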

Raising Abstraction
The verification environment, as already stated, does not need to know how something will be done in the design, only what it should do. This means that many aspects of the verification environment can be modeled at a higher level of abstraction. The most common method is to use transaction-level modeling. As an example of a transaction, a processor may make a read or write request. That request is a simple transaction. Within the design, that bus cycle will be a complex sequence of signal changes on a number of wires that places the address of the read or write request onto the bus, orchestrates an interaction with the memory or peripheral, and results in data either being written or read. A processor bus is often an interface between the design and the verification environment, and so an abstraction shifter needs to be placed between them. Because of examples like this, the abstraction shifter is often called a Bus Functional Model (BFM), even though it may perform a similar function on any interface.
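
The following Python sketch illustrates the idea of a BFM, with an invented bus protocol and signal names; it shows only that a single read or write transaction from the test fans out into several pin-level operations on the design.

```python
# A minimal sketch of a bus functional model (BFM): one transaction from the
# verification environment is expanded into a sequence of pin-level operations.
# The signal names and protocol steps are illustrative assumptions.

class SimpleBusBFM:
    def __init__(self, drive_pin):
        self.drive = drive_pin            # callback that wiggles DUT pins

    def write(self, address, data):
        # One write transaction becomes several pin-level steps.
        self.drive("addr", address)
        self.drive("wdata", data)
        self.drive("write_en", 1)
        self.drive("write_en", 0)

    def read(self, address):
        self.drive("addr", address)
        self.drive("read_en", 1)
        self.drive("read_en", 0)

def print_pin(name, value):
    print(f"pin {name} <= {value:#x}" if isinstance(value, int) else value)

bfm = SimpleBusBFM(print_pin)
bfm.write(0x40, 0xDEADBEEF)              # the test issues a transaction...
bfm.read(0x40)                           # ...the BFM handles the pin detail
```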

