Is Your IP-Verification Environment Trying To Kill You?

Death is not an option for design teams, but neither is insufficient verification coverage.


I was watching an old episode of The Office the other night. It was the one where a GPS guided the lead characters into a lake. While we’ve all fallen victim to a GPS gone bad, most of us are fortunate enough not to trust technology blindly enough to drive into a lake (or, in my case, onto the tarmac at Ft. Lauderdale International). Yet it’s surprisingly easy to find real-life parallels where we do. IP-level verification is one example.

One of the most common measures of verification quality today is coverage. Code coverage, for example, is typically considered a signoff criterion. Referencing the GPS example above, 100% code coverage could be considered the destination and the test plan the turn-by-turn directions. Unlike the lake incident, engineers don’t necessarily get instant feedback when their verification progress is sinking to the bottom, especially when the problem lies in the test environment itself.

This is particularly true with advanced methodologies like UVM (the Universal Verification Methodology). UVM environments are built to update their checkers automatically to support constrained-random testing. Sometimes, when engineers make an innocent tweak to their test environment, something breaks. It’s akin to driving the car into a river. Only rather than flagging the error immediately, the UVM environment updates the checkers as if the broken environment were working perfectly, and the tests pass. It’s like handing the driver scuba gear so they can continue to sit behind the wheel of their car, six feet under water, while the current slowly steers their project toward a waterfall. The good news is that the error will eventually be found. The bad news is that it may not be for a week or two (when the body floats to the surface four miles downriver).
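To make the failure mode concrete, here is a minimal Python sketch (a toy, not UVM code; generate_payload, reference_model and dut are invented names) of a self-checking loop whose expected values are predicted from the same broken stimulus path as the design, so a transactor that suddenly produces empty payloads still yields a green regression:

import random

def generate_payload(bug=False):
    # Stand-in for a transactor: random payload, or always empty after an "innocent tweak".
    return [] if bug else [random.randint(0, 255) for _ in range(8)]

def dut(payload):
    # Stand-in for the design under test (a simple pass-through here).
    return list(payload)

def reference_model(payload):
    # Predictor used by the scoreboard, fed by the same (broken) stimulus.
    return list(payload)

failures = 0
for _ in range(1000):
    stimulus = generate_payload(bug=True)           # broken transactor
    if dut(stimulus) != reference_model(stimulus):  # scoreboard-style check
        failures += 1

print("failures:", failures)  # 0: every test "passes", yet nothing was exercised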

Such scenarios may seem rare, but they are actually quite common. Recent customer examples include accidentally putting an entire block into bypass mode; changing randomization such that the DUT is over-constrained; updating a transactor model such that the payload is always empty; and modifying a testbench such that an entire family of stimuli (CPU opcodes) is excluded. In each case, though the environment was broken, the checkers adapted automatically, resulting in a passing regression. On average, these errors take a week to detect, and sometimes several. During that week (or two), little or no verification progress is made.

Fortunately, there is a solution that can detect such disasters when they occur. By combining automated assertion generation with functional coverage tracking during block-level and IP-level verification (Figure 1), engineers can track coverage, including changes in coverage, in real time. Armed with this data, engineers can immediately detect a loss of coverage. As an added bonus, when things are working correctly, the same real-time data can guide engineers toward their ultimate goal of 100% coverage even faster.

Figure 1: BugScope IP-Development Flow with Progressive

Making this work requires that the assertion generation be comprehensive. That means it has to be white-box based (looking internally throughout the block/IP) and automatically generated. Automation is required because no engineer today has enough time (or desire) to write enough assertions to effectively capture design intent. That’s why most manually written assertions today focus on small subsets of the design (interfaces, state machines, etc.).
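As a rough illustration of the principle (not the tool’s actual algorithm), the Python sketch below mines simple same-cycle implications from a recorded trace of internal signals and keeps only those that hold everywhere; the signal names and trace values are invented for the example:

from itertools import combinations

def mine_candidate_invariants(trace):
    # Propose same-cycle implications "a=1 implies b=1" over internal (white-box)
    # signals and keep only those that hold in every cycle of the trace.
    signals = sorted(trace[0])
    invariants = set()
    for a, b in combinations(signals, 2):
        for lhs, rhs in ((a, b), (b, a)):
            hits = [cycle for cycle in trace if cycle[lhs] == 1]
            if hits and all(cycle[rhs] == 1 for cycle in hits):  # skip vacuous properties
                invariants.add(f"{lhs} |-> {rhs}")               # SVA-flavoured notation
    return invariants

trace = [  # toy trace of internal signals captured from a passing regression
    {"req": 1, "gnt": 1, "busy": 1},
    {"req": 0, "gnt": 0, "busy": 1},
    {"req": 1, "gnt": 1, "busy": 0},
    {"req": 0, "gnt": 1, "busy": 0},
]
print(sorted(mine_candidate_invariants(trace)))  # ['req |-> gnt']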

The other key technology is automated real-time tracking. With automated assertion generation, coverage tracking becomes a matter of generating new assertions each time a regression is run and immediately flagging what changed in a bar chart or dashboard. This “progressive” approach makes error detection as simple as glancing at a chart (Figure 2), followed by a quick scan (10 to 20 minutes) of the properties when something looks suspicious, such as a loss of coverage. In addition to catastrophic events, such technology can detect a lack of functional-coverage progress, alerting engineers that they may need to adjust randomization or add targeted tests to their regression.

Figure 2: BugScope Progressive Dashboard
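A minimal sketch of the bookkeeping behind such a dashboard, assuming the generated properties from each regression can be exported as plain strings (the property names below are invented):

def compare_regressions(previous, current):
    # Flag properties that held in the last regression but have disappeared in this
    # one: the "loss of coverage" signature worth a 10-to-20-minute review.
    lost, gained = previous - current, current - previous
    print(f"properties: {len(previous)} -> {len(current)} (+{len(gained)}, -{len(lost)})")
    for prop in sorted(lost):
        print("  LOST:", prop)
    return bool(lost)

last_run = {"req |-> gnt", "fifo_full |-> !wr_en", "opcode inside {ADD, SUB, MUL}"}
this_run = {"req |-> gnt", "fifo_full |-> !wr_en"}
if compare_regressions(last_run, this_run):
    print("WARNING: coverage dropped; inspect the environment before trusting this regression")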

With an IP-Development flow that includes real-time coverage tracking, verification engineers can focus more on getting to RTL signoff on schedule, and less on passing their scuba certification.


