Formal Verification 101

Personally, I prefer a simple story that doesn’t make my eyes water, so here’s my “101” version of the Formal Verification Space (where no one can hear you scream)…


By Clive “Max” Maxfield

The first time I came into contact with the concepts of a digital hardware description language (HDL) and digital logic simulation, I intuitively understood how it all “worked.” The idea that the statements in the modeling language executed concurrently just seemed to make sense.

By comparison, trying to wrap my brain around formal verification has always made my head hurt. I remember one year at DAC making a point to visit all of the EDA vendors with a formal verification presence and ask them to explain things like the difference between an “attribute” and a “property”.

They were all kind enough to do so. The only problem was that they all said completely different things and defined terms in completely different ways.

Different Formal Flavors

As a tidbit of trivia, formal verification isn’t as new as many people believe. Although many of us weren’t aware of it at the time, large computer and chip companies like IBM, Intel, and Motorola began developing and using formal verification tools internally in the mid-1980s.

For quite some time, a lot of folks considered the term “formal verification” as being synonymous with “equivalency checking,” where an equivalency checker is a tool that uses formal (rigorous mathematical) techniques to compare two different representations of a design – say an RTL description with a gate-level netlist – to determine whether or not they have the same input-to-output functionality.

In fact, equivalency checking may be considered one sub-class of formal verification. Another sub-class is “model checking,” which refers to techniques used to explore the state-space of a system to test whether or not certain attributes are true. Those attributes are specified in the form of assertions/properties. (Definitions of terms such as “property” and “assertion” are presented later in this article.)

Another category of formal verification known as “automated reasoning” uses logic to prove – much like a formal mathematical proof – that an implementation meets an associated specification. But that’s outside the scope of these discussions. For the purposes of this article, we shall focus on different aspects of the model-checking domain.

Why is Formal Cool?

It can be difficult to wrap one’s brain around formal verification, so let’s take this step-by-step. Suppose we have a digital chip design that consists of a number of functional blocks. Let’s further assume that we have an RTL description of each block. Before we connect them all together, how are we going to verify each block to ensure that it performs the task we require of it?

One way would be to create a testbench comprising a sequence of signals we wish to apply to the block’s inputs, along with the responses we expect to appear on the block’s outputs. We would then use a digital logic simulator to apply our stimulus to the virtual (RTL) representation of our block to see if it functions as planned.
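As a rough sketch of what such a directed testbench might look like in SystemVerilog — the block name, ports, and expected response here are all invented for illustration:

```systemverilog
// Hypothetical directed testbench for an imaginary block "my_block".
module tb;
  logic clk = 0, a, b;
  logic y;

  my_block dut (.clk(clk), .a(a), .b(b), .y(y));

  always #5 clk = ~clk;   // free-running clock, toggling every 5 time units

  initial begin
    // Apply the stimulus sequence we thought of...
    a = 0; b = 0; @(posedge clk);
    a = 1; b = 0; @(posedge clk);
    // ...and compare against the response we expect.
    if (y !== 1'b1) $error("Expected y == 1 after asserting a");
    // Any corner case we did NOT think to drive goes untested.
    $finish;
  end
endmodule
```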

Although useful, this approach has several issues we need to consider. In the case of the testbench (which we have to create), this will only verify the cases we thought to test. What about those slippery little “corner cases” we didn’t even think about? (In this context, a general definition of a “corner case” or “corner condition” would be a hard-to-exercise or hard-to-reach functional condition associated with the design.) Even worse, when we gather multiple blocks together, we have to discard our original block-level testbenches and create a new one from the ground up.

As an alternative, we could associate one or more assertions/properties with our block. These assertions/properties may be associated with signals at the interface to the block and/or with signals and registers internal to the block.

A very simple assertion/property might be along the lines of, “Signals A and B should never be active (low) at the same time.” But these statements can also extend to extremely complex transaction-level constructs, such as “When a PCI write command is received, then a memory write command of type xxxx must be issued within 5 to 36 clock cycles.” And yet another form of assertion relates to the assumptions a designer makes when implementing a block, such as “I assume that I will never receive three writes in a row.”
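The three styles just described might be sketched in SystemVerilog Assertions (SVA) roughly as follows. All signal names are placeholders, and in practice these statements would live inside the block’s module or a bound checker module:

```systemverilog
// 1. Mutual exclusion: active-low A and B never active at the same time.
no_ab: assert property (@(posedge clk) !(a_n == 1'b0 && b_n == 1'b0));

// 2. Transaction-level: a PCI write must be followed by a memory write
//    within 5 to 36 clock cycles (pci_wr/mem_wr are stand-in signals).
pci_to_mem: assert property (@(posedge clk) pci_wr |-> ##[5:36] mem_wr);

// 3. Designer assumption: never three writes in a row. "assume" tells a
//    formal tool to constrain the inputs accordingly, while a simulator
//    simply checks that the environment honors the assumption.
no_triple_wr: assume property (@(posedge clk) wr ##1 wr |-> ##1 !wr);
```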

One really important concept here is that of verification reuse. Once you’ve associated a bunch of assertions/properties with a block, they remain valid when that block is combined with other blocks to form a cluster of blocks or an entire system. For example, those assertions/properties that relate to implementation assumptions will ensure that these assumptions remain true or they will issue appropriate error messages.

Static vs. Dynamic

For the purpose of these discussions let’s stick with a simple assertion/property such as, “Signals A and B should never be active (low) at the same time.” One term you will hear a lot of is assertion-based verification (ABV). This comes in two flavors: static formal verification and dynamic formal verification.

In the case of static formal verification, an appropriate tool reads in the functional description of the design (typically at the RTL level of abstraction) and then exhaustively analyzes the logic to ensure that this particular condition can never occur (or that it definitely will occur, depending on what you are trying to do).

By comparison, in the case of dynamic formal verification, an appropriately augmented logic simulator (or post-simulation analyzer and debugger) will flag a warning if this particular condition ever does occur (or if it doesn’t occur – again, this depends on what you are trying to do).
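To make the distinction concrete, here is a hedged SVA sketch showing how one and the same property can serve both flavors (signal names are assumptions, with A and B active-low as in the example):

```systemverilog
property no_ab_overlap;
  @(posedge clk) !(a_n == 1'b0 && b_n == 1'b0);  // A, B never both active
endproperty

// Dynamic flavor: a simulator evaluates this on every clock edge and
// fires the $error action only if a violation actually occurs in the run.
ab_check: assert property (no_ab_overlap)
  else $error("A and B active simultaneously at %0t", $time);

// Static flavor: a formal tool reads the very same assert and tries to
// prove it can never fail for ANY input sequence. A "cover" directive is
// the companion for the "it definitely should occur" cases:
ab_reach: cover property (@(posedge clk) a_n == 1'b0);
```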

One problem with static formal verification is that it can typically be used for only small portions of the design, because the state space to be analyzed grows exponentially with design size and property complexity, and one can quickly run into a “state-space explosion.” By comparison, logic simulators can cover a lot of ground, but they require stimuli and they don’t cover every possible case.

In order to address these issues, some solutions combine both techniques. For example, they may use simulation to reach a corner condition and then automatically pause the simulator and invoke a static formal verification engine to exhaustively evaluate that corner condition. Once the corner condition has been evaluated, control will automatically be returned to the simulator, which will then proceed on its merry way.

As a simple example of where this might be applicable, consider a first-in first-out (FIFO) memory. Its “Full” and “Empty” states may be regarded as corner cases. Reaching the “Full” state will require a lot of clock cycles, which is best achieved using simulation. But exhaustively evaluating the assertions/properties associated with this corner case – such as the fact that it should not be possible to write any more data while the FIFO is full – is best achieved using static techniques.
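The FIFO corner-case checks described above could be sketched along these lines; the signal names (full, empty, wr_en, rd_en) are assumptions about the FIFO’s interface:

```systemverilog
// Writes must never be accepted while the FIFO is full...
no_wr_when_full:  assert property (@(posedge clk) full  |-> !wr_en);

// ...and reads must never be accepted while it is empty.
no_rd_when_empty: assert property (@(posedge clk) empty |-> !rd_en);

// "cover" directives mark the corner cases themselves, so a simulation
// run can report whether Full/Empty were ever actually reached.
reached_full:  cover property (@(posedge clk) full);
reached_empty: cover property (@(posedge clk) empty);
```

In the combined flow, simulation does the long slog of filling the FIFO to reach `reached_full`, and the static engine then exhaustively proves `no_wr_when_full` from that state.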

Assertions vs. Properties

It should be noted that the following was gleaned from my talking with lots of folks and then desperately trying to rationalize the discrepancies between the tales they told.

The term “property” comes from the model-checking domain and refers to a specific functional behavior of the design that you want to formally verify. For example, “after a request, we expect a grant within 10 clock cycles.”
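That request/grant example might look like this as an SVA property (req and gnt are placeholder names):

```systemverilog
// Every request must be answered by a grant within 10 clock cycles.
property req_gets_gnt;
  @(posedge clk) req |-> ##[1:10] gnt;
endproperty

gnt_check: assert property (req_gets_gnt)
  else $error("Request not granted within 10 clock cycles");
```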

By comparison, the term “assertion” stems from the simulation domain and refers to a specific functional behavior of the design that you want to monitor during simulation, flagging a violation if that assertion “fires.”

Today, with the use of formal tools and simulation tools in unified environments and methodologies, the terms “property” and “assertion” tend to be used interchangeably – that is, a property is an assertion and vice versa.

In general, we understand an assertion/property to be a statement about a specific attribute associated with the design that is expected to be true. Thus, assertions/properties can be used as checkers/monitors, or as targets of formal proofs, and they are usually used to identify/trap undesirable behavior.

Formal Languages and Stuff

This is where things could start to get really confusing if we’re not careful. We’ll start with something called Vera, which began life with work done at Sun Microsystems in the early 1990s. It was provided to Systems Science Corp. somewhere around the mid-1990s, which was in turn acquired by Synopsys in 1998.

Vera became an entire verification environment that encapsulates testbench features and assertion-based capabilities, and Synopsys originally promoted it as a standalone product, with integration into the Synopsys logic simulator. Sometime later, due to popular demand, Synopsys opened things up for third-party use by making OpenVera and OpenVera Assertions (OVA) available.

Somewhere around this time, SystemVerilog was equipped with its first pass at an “assert” statement. Meanwhile, due to the increasing interest in formal verification technology, an Accellera standards committee started to look around for a formal verification language it could adopt as an industry standard. A number of languages were evaluated, including OVA, but in 2002 the committee eventually opted for the Sugar language from IBM.

Just to add to the fun, Synopsys then decided to donate OVA to the Accellera committee in charge of SystemVerilog (this was a different committee than the one evaluating formal property languages). Now pay attention because this bit’s confusing. Although Synopsys had donated OVA to Accellera, the folks at Synopsys also kept using OVA themselves. Meanwhile, like most standards committees, Accellera can’t accept a donation like OVA “as-is” because that would give the donating vendor a competitive advantage. Thus, they modified the OVA syntax and changed some of the underlying fundamentals to come up with SystemVerilog Assertions (SVA).

Yet another Accellera committee ended up in charge of something called the Open Verification Library (OVL), which refers to a library of assertion/property models available in both VHDL and Verilog 2K1.

So now we have the assert statements in VHDL and SystemVerilog, OVL (the library of models), OVA (the Synopsys-specific assertion language), SVA (the SystemVerilog version of OVA), and the Property Specification Language (PSL), which is the Accellera version of IBM’s Sugar language. It’s also probably worth noting that the latest versions of SVA and PSL are syntactically consistent in many respects.

Are we having fun yet?

The advantage of PSL is that it has a life of its own, in that it can be used independently of the languages used to represent the functionality of the design itself. The disadvantage is that it doesn’t look like anything design engineers are already familiar with, such as VHDL, Verilog, or C/C++ – but then, neither do OVA or SVA. There is some talk of spawning various flavors of PSL, such as a VHDL PSL, a Verilog PSL, a SystemC PSL, and so forth. The syntax would differ between these flavors so as to match the target language, but their semantics would be identical.

As usual, it’s difficult to wrap one’s brain around all of this, but perhaps Figure 1 will provide a useful visualization.

Figure 1. Trying to put everything into context and perspective.


It’s important to note that this illustration just reflects one view of the world and not everyone will agree with it. I flatter myself that some folks will consider this to be a brilliant summation of an incredibly confusing situation. On the other hand, others will doubtless regard it as being a gross simplification.

Who Are All the Players?

This is where things really start to get fun, because I’m sure to leave someone out and then they will get cross with me and I’ll be struck off their Christmas card list.

Of course, it practically goes without saying that the “big boys” in this arena are Synopsys, Mentor Graphics, and Cadence. Also, there used to be a positive plethora of independents, but many of these seem to have either been acquired or to have fallen by the wayside. Of the independents that are still standing, the ones that spring to mind are Calypto, Jasper, Real Intent, and OneSpin.
