Fixing Functional Coverage

Challenges persist in catching all the bugs, utilizing assertions, and expanding coverage to the entire system.


Constrained random test pattern generation entered the scene a couple of decades ago as a better way to spend time and resources for the creation of stimulus.

Stimulus definition had become an arduous task: defining the patterns necessary to exercise designs of increasing size. It was successfully argued that writing models and letting a computer generate the stimulus was a better use of valuable resources than crafting patterns by hand. This created another problem: what did a generated test actually verify? The answer came in the form of functional coverage.

But functional coverage is far from perfect. A coverage point is an indicator that a certain piece of logic has been exercised, but it does not directly correspond to functionality. “We are not really measuring functional coverage,” says Dave Kelf, marketing director at OneSpin Solutions. “The ideal functional coverage metric would be derived from a measurement against a comprehensive specification of a design’s operation.”

Providing this metric is difficult, both due to a lack of formalization in a specification and a disconnect between the high-level spec and the lower level of verification measurement capability. “The gap between the spec and the coverage model is partially filled with the verification plan,” says Michael Horn, verification technologist at Synopsys. “This provides the linkage to the spec and captures the features that you care about that are then linked to coverage.”

The verification team has to translate the functional aspects that they want to verify into a minimal set of these indicators, called a coverage model. Specify too many and the verification task becomes too large; specify too few, or the wrong ones, and the verification effort may be incomplete. “There is no silver bullet that we can use to replace good engineering practices with cohesive automation,” says John Brennan, product marketing director for vManager at Cadence. “We are dealing with a problem of infinite dimensions, and when you try to apply automation it is not a practical solution.”
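To make that tradeoff concrete, here is a minimal sketch of a SystemVerilog coverage model for a hypothetical packet interface; the signal names, widths, and bins are invented for illustration. Each coverpoint is one of the indicators described above, and each cross multiplies the bin counts:

```systemverilog
module pkt_monitor (
  input logic        clk,
  input logic        pkt_valid,
  input logic [10:0] pkt_len,
  input logic [1:0]  pkt_type,
  input logic [3:0]  port_id
);
  // Hypothetical coverage model; sampled on every valid packet.
  covergroup pkt_cg @(posedge clk iff pkt_valid);
    cp_len : coverpoint pkt_len {
      bins small  = {[1:64]};
      bins medium = {[65:512]};
      bins large  = {[513:1518]};
    }
    cp_type : coverpoint pkt_type;   // one automatic bin per type value
    cp_port : coverpoint port_id;    // one automatic bin per port
    // Crossing multiplies bin counts: 3 x 4 = 12 bins here, which is
    // easy to close, but a few wide crosses quickly become impractical.
    len_x_type : cross cp_len, cp_type;
  endgroup

  pkt_cg cg_inst = new();  // instantiate so sampling actually occurs
endmodule
```

Three length bins crossed with four type values is manageable; crossing several wide coverpoints at once is where closure becomes intractable.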

Adds Pranav Ashar, chief technology officer for Real Intent: “The link between chip failure and the coverage metrics in use today is tenuous to put it mildly. Bugs are missed, and excessive as well as potentially gratuitous simulation is done in the current paradigm. A better approach would be to analyze failure modes and base coverage metrics on parameters that more directly relate to the failure modes.”

Harry Foster, chief methodologist at Mentor Graphics, believes the coverage model we currently have is problematic. “We see many people creating coverage models that measure things that are not really relevant or important, and then they have problems with closure. They create coverage points and start crossing them until it blows up. There are three challenges:
1) Defining what we need to measure. People have to start thinking from the spec and what needs to be covered, rather than just creating coverage points. We cannot automate this. It requires thinking.
2) Modeling it. Taking what it is that I want to measure and creating the appropriate model. Again, this is a manual process.
3) Closing coverage. This is primarily a debug effort. There have been breakthroughs in this area, including formal technologies that can identify unreachable areas and provide hints.”

But the industry does have alternatives. “There is a coverage model that includes notions of concurrency and sequentiality,” explains Rebecca Lipon, senior product marketing manager for the functional verification product line at Synopsys. “They are called assertions. They are underutilized because they seem to be very difficult for people to write.”

Many in the industry agree. “The easiest way to formalize a specification is with the use of assertions,” says Kelf. “Of course, the more complex the assertion, the harder it is to get it right and the more resources it consumes to execute the assertion. We have to break down the spec into simpler assertions, and this is the job of the test plan and emerging verification management tools.”
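As a sketch of the kind of simple, spec-level assertion Kelf describes, a one-line handshake rule (“every request is granted within eight cycles”) can be written as a concurrent SystemVerilog assertion; the module and signal names here are assumptions for illustration:

```systemverilog
module handshake_sva (input logic clk, rst_n, req, gnt);
  // Hypothetical spec rule: every request is granted within 1 to 8 cycles.
  property p_req_gets_gnt;
    @(posedge clk) disable iff (!rst_n)
      req |-> ##[1:8] gnt;
  endproperty

  a_req_gets_gnt : assert property (p_req_gets_gnt)
    else $error("req was not granted within 8 cycles");

  // Covering the sequence itself records that the scenario actually
  // occurred, giving the temporal coverage Lipon describes.
  c_req_then_gnt : cover property (@(posedge clk) req ##[1:8] gnt);
endmodule
```

The property encodes both sequentiality (grant follows request) and a time bound, which a sampled coverage point cannot express directly.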

Specifying coverage using assertions has other benefits including the use of formal methods. “An assert property written to verify the functional behavior of the design can be used in several ways,” says Jin Zhang, senior director of marketing for Oski Technology. “First, formal analysis can automatically detect dead code or unreachable states in the design so that verification engineers don’t have to try in vain to reach those coverage points. Second, an interesting cover property that represents certain functional behavior in the design can be analyzed to see if the scenario is reachable. The result of this analysis is input vectors that can be used in a simulator. And third, formal tools can be used to prove whether a property holds in the design or fails.”
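A minimal sketch of the second and third uses Zhang lists, assuming a hypothetical FIFO that exposes its occupancy count to the checker:

```systemverilog
module fifo_formal_targets #(parameter int DEPTH = 8) (
  input logic                   clk,
  input logic                   rst_n,
  input logic                   push,
  input logic [$clog2(DEPTH):0] count  // occupancy of the assumed FIFO
);
  // Reachability target: a push arriving while the FIFO is full.
  // If formal proves it unreachable, no simulation time is wasted on it;
  // if reachable, the tool's witness trace supplies simulation vectors.
  c_push_when_full : cover property (
    @(posedge clk) disable iff (!rst_n) push && (count == DEPTH));

  // Safety property a formal tool can attempt to prove outright:
  // occupancy never exceeds the configured depth.
  a_no_overflow : assert property (
    @(posedge clk) disable iff (!rst_n) count <= DEPTH);
endmodule
```

Unreachable cover points found this way can be excluded from the coverage model rather than chased with constrained random seeds.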

Additional problems can be found with the existing coverage model once we start considering the system level. “What we have in place today works well at the IP level,” says Foster. “At the system level it totally falls apart. We need completely different ways to think about it.”

He’s not alone in seeing that. “System-level testing is not the same thing,” says Brennan. “This is directly about the functionality that is being delivered and is related to use cases.” Use cases appear to be one area where much of the industry is focusing for system-level functional coverage, but the system level contains other areas that have to be considered.

Lipon points to one area where new metrics are required. “At the system level there are interrelated components that affect performance, and metrics for this need to be encapsulated in a higher level of abstraction.”

Foster adds several others. “We have a functional layer, clocking layer, power layer, security, software… Each layer requires new metrics.”

Kelf provides a familiar way that this can be achieved. “Many of these functions can be targeted using assertions, possibly operating at higher levels of abstraction such as a system graph specification.”

Adds Ashar: “Targeting failure modes in interfaces, security and similarly well-defined functionality would be a good start. To be sure, good verification engineers do apply such an approach within their testbenches, but such an approach has not percolated up to the tool level. The challenge is to develop EDA tools that can implement such an approach, combining function-type specific deep semantic analysis with formal methods and simulation so that the coverage and debug information can have deeper meaning.”

Foster says the problem has to be solved for the system level. “And in the process of solving that problem we will learn about more efficient ways in which we could do certain things, even for IPs, and we will be able to take what we have learned and apply it to that process as well.”

One area of active improvement is the way coverage data is collected, stored, and analyzed. “The ability to move tests across platforms is a necessary step for this,” says Lipon. “It should not matter if you run things on an emulator, acceleration box, simulation, formal, or static analysis. All of the data has to come to the same place to be visualized and analyzed.” A new Accellera working group is actively looking at the test portability problem, and the Unified Coverage Interoperability Standard (UCIS) addresses making coverage data interoperable across tools.

“Acceleration typically uses a UVM-style testbench and is an extension of simulation with the same notions of coverage,” explains Brennan. “When you move to emulation, coverage has not traditionally been applied. The coverage here would be different because it involves software, and this is closer to system-level coverage.”

Finding solutions takes industry cooperation, but within EDA that is sometimes not as forthcoming as it should be. “In the world of software development there is a lot of open source and cooperative development,” explains Lipon, “but in EDA we could achieve more if we worked closer together as an industry. It is surprising, perhaps because we are so very competitive, that in the hardware space we don’t talk with one another more and don’t share enough.”


