The Growing Confidence Gap In Verification

Increasing complexity, interactions with software, physical effects, and the integration of third-party IP are all conspiring to make verification even tougher.


By Ed Sperling
It’s no surprise that verification is getting more difficult at each new process node. What’s less obvious is just how deep into organizations the job of verifying SoCs and ASICs now extends.

Functional verification used to be a well-defined job at the back end of the design flow. It has evolved into a multi-dimensional, multi-group challenge, beginning at the earliest stages of the design process and reaching well beyond the actual manufacturing of an IC. Moreover, the challenge has become so enormous, from block to system level, that maintaining the same level of confidence that a chip will work reliably when it reaches tapeout is almost impossible.

“What’s changed is that we’re dealing with SoCs instead of custom ASICs, where there was not as much third-party IP,” said Jason Pecor, technical solutions manager at Open-Silicon. “Now we’ve got processor cores, peripherals and third-party IP blocks. That makes it more of an industry challenge. We need to know what we can expect from IP vendors, because that really impacts how we do verification.”

He said there is far more interplay between different groups at Open-Silicon than ever before, and within those groups there is at least some level of verification going on. In addition, there are more verification engineers throughout the entire design process than several years ago.

Quality time
There is no definitive figure for the percentage of time spent verifying a chip these days. While estimates typically range from 50% to 80%, the actual time spent is much fuzzier. For example, is transaction-level modeling part of verification, or part of the front-end design? And does early software prototyping count as silicon validation if some of the information is fed back into the design flow?

The primary goal isn't necessarily to reduce the percentage of time spent on verification. It's to cut out unnecessary cycles and operations so that more time can be spent where it matters: improving coverage and reliability, and running extra tests where there might be problems. One of the biggest improvements lately has come in debugging, in part by raising the level of abstraction and in part by understanding how some bugs affect the effectiveness of the debugging tools.

“The big problem we see in functional verification is that there is a class of bugs that cannot be demonstrated on an RTL model,” said Harry Foster, principal engineer for design verification and test at Mentor Graphics. “You may never test them in RTL simulation, and while you can find them at the gate level that’s too late.”

Mentor’s approach has been to add formal technology under the hood of its verification platform to identify where compute cycles are being wasted and remove them from the rest of the verification process. The idea is to shield verification engineers from writing assertions, a skill many engineers find difficult to learn and apply even though it is the best way to really drill down on bugs. Engineers still reap the benefits of assertions, however, and the verification process keeps moving at maximum speed: bugs discovered by the formal tools can be dealt with separately rather than slowing down the overall verification platform. In effect, the formal tools are creating automated exclusions.
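
In practice such assertions are written in languages like SystemVerilog (SVA). As a rough, language-neutral sketch of the same idea, the temporal property "every request is granted within a fixed number of cycles" can be checked over a recorded signal trace in Python. The signal names and latency bound here are hypothetical, purely for illustration.

```python
# A minimal sketch of an assertion-style check. In a real flow this would
# be an SVA property evaluated every clock; here the same temporal rule --
# "every request is granted within MAX_LATENCY cycles" -- is checked over
# a recorded trace. Signal names and the bound are assumptions.

MAX_LATENCY = 4  # cycles allowed between req and gnt (hypothetical spec)

def check_req_gnt(trace, max_latency=MAX_LATENCY):
    """trace: list of (req, gnt) booleans, one tuple per clock cycle.
    Returns the cycle numbers of requests that violated the property."""
    failures = []
    pending = None  # cycle at which an unserviced request was seen
    for cycle, (req, gnt) in enumerate(trace):
        if pending is not None and gnt:
            pending = None            # request serviced in time
        if pending is not None and cycle - pending > max_latency:
            failures.append(pending)  # property violated
            pending = None
        if req and pending is None:
            pending = cycle
    return failures

# Passing trace: request at cycle 0, grant at cycle 2.
ok = [(True, False), (False, False), (False, True)]
# Failing trace: request at cycle 0, never granted.
bad = [(True, False)] + [(False, False)] * 6

assert check_req_gnt(ok) == []
assert check_req_gnt(bad) == [0]
```

The point of automating this kind of check formally, rather than asking every engineer to write it by hand, is exactly what Mentor describes: the property is stated once and the tool decides where cycles are worth spending.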

“One of the challenges is that 94% of designs have asynchronous clock domains, and that gets worse as you integrate more IP,” said Foster. “They may show up once a day, or once a week, or maybe once a month. And some parts of the design simply are not reachable at all. We talked with one company that spent three weeks trying to weed out unreachable code from the verification process and they had to manually write exclusions.”
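
What an "automated exclusion" computes can be sketched as a reachability analysis over a design's state graph: states that can never be reached from reset are excluded from the coverage target rather than burning simulation cycles, which is the analysis that company spent three weeks doing by hand. The FSM below is hypothetical.

```python
# A toy sketch of automated exclusion generation: breadth-first search
# from the reset state finds everything reachable; the rest becomes the
# exclusion list for coverage closure. The FSM itself is made up.
from collections import deque

def reachable_states(transitions, reset_state):
    """transitions: dict mapping state -> iterable of successor states."""
    seen = {reset_state}
    queue = deque([reset_state])
    while queue:
        state = queue.popleft()
        for nxt in transitions.get(state, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

fsm = {
    "IDLE":  ["RUN"],
    "RUN":   ["DONE", "IDLE"],
    "DONE":  ["IDLE"],
    "DEBUG": ["IDLE"],   # legacy state with no incoming arc from reset
}

reachable = reachable_states(fsm, "IDLE")
exclusions = set(fsm) - reachable   # coverage bins to exclude automatically

assert exclusions == {"DEBUG"}
```

Real formal tools work on RTL and prove unreachability exhaustively, but the output is the same in spirit: a machine-generated exclusion list instead of weeks of manual triage.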

Another way of dealing with that is faster root-cause analysis and automated bug detection. Synopsys has approached it this way, essentially raising the level of abstraction to find bugs and then doing a deep dive afterward.

“Right now debug takes between 35% and 50% of the whole verification cycle,” said Michael Sanie, director of verification product marketing at Synopsys. “But the challenge is that no two companies approach it the same way. In the same city there are two networking companies. One told me that they don’t believe in FPGA prototyping and do everything with constrained random testing. Another told me they get things stable and then put it onto an FPGA prototype. These are totally different approaches, and our view is that this is becoming so complex that you need an arsenal. There is no right solution. You need everything at your disposal.”
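
For readers unfamiliar with the constrained random approach Sanie mentions, the core idea can be sketched in a few lines: stimuli are drawn at random, but only within legal constraints, and the run is seeded so any failure can be reproduced exactly. The packet fields and constraints below are hypothetical.

```python
# A minimal sketch of constrained random testing: random stimulus drawn
# within legal constraints, with a fixed seed so failures reproduce.
# Field names and constraint values are assumptions for illustration.
import random

def random_packet(rng):
    length = rng.randint(4, 64)           # constraint: legal payload sizes
    kind = rng.choice(["read", "write"])  # constraint: supported opcodes
    addr = rng.randrange(0, 2**16, 4)     # constraint: word-aligned address
    return {"kind": kind, "addr": addr,
            "payload": [rng.randrange(256) for _ in range(length)]}

def generate(seed, count):
    rng = random.Random(seed)             # seeded: rerun reproduces a bug exactly
    return [random_packet(rng) for _ in range(count)]

pkts = generate(seed=42, count=100)
assert all(4 <= len(p["payload"]) <= 64 for p in pkts)
assert all(p["addr"] % 4 == 0 for p in pkts)
assert generate(42, 100) == generate(42, 100)  # reproducible
```

In real testbenches the constraints live in the verification language and a constraint solver picks the values, but the trade-off is the same one the two networking companies are making: breadth of random coverage in simulation versus raw speed on an FPGA prototype.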

Hierarchy of needs
Part of what makes verification so difficult is that it is no longer a single job. It’s many jobs under a broad heading.

“When you look at what we verify, it’s a combination of hardware, software, hardware and software together—basically the system integration—and performance, which in the past we did with performance analysis tools,” said Frank Schirrmeister, group director for product marketing for system development at Cadence. “The challenge is that you need to be careful what engine you use for what scope. On the larger scope you are not going to just verify a component itself. You need context. And in that context, something like performance validation becomes very challenging.”

Schirrmeister noted that confidence levels in coverage models are getting shakier than they used to be. At 20nm, he said, the real issue is how to get to a sufficient confidence level more quickly. “You will need to be able to combine engines to do that, with more subcomponents verified and more automation. It’s not going to stop things, but it will get harder—and it may take longer to get to a sufficient confidence level. That’s one of the reasons emulation is so hot right now. You need to execute more cycles.”

This is showing up at all levels of the verification process. Complexity is fueling the need for more verification engineers everywhere, and the more tools in their arsenal, the better.

“We’re dealing with a sea of interfaces, which are creating a lot of asynchronous communications,” said Pranav Ashar, chief technology officer at Real Intent. “This is all about the interaction of functionality and timing. A lot of the respins we’re encountering are being caused by asynchronous clock domain crossing issues. A second area where we’re seeing problems is with power optimization—turning power on and off. The interaction between blocks is key.”
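
The standard fix for the single-bit crossings Ashar describes is a two-flop synchronizer, and its behavior can be sketched abstractly: the first flop may go metastable when the asynchronous input changes near a clock edge, and the second flop gives it a cycle to settle so downstream logic only ever sees a clean 0 or 1, possibly delayed. This is a behavioral illustration, not RTL, and the metastability probability is invented.

```python
# A behavioral sketch of a two-flop CDC synchronizer. When the async
# input toggles near a destination clock edge, the first flop may go
# metastable ("X"); it resolves to a random 0/1 within a cycle, and the
# second flop hides that from downstream logic. Probabilities are made up.
import random

METASTABLE = "X"

def two_flop_sync(async_samples, rng):
    """async_samples: async input value seen at each dest-clock edge.
    Returns the synchronized output observed per cycle."""
    ff1, ff2 = 0, 0
    out = []
    prev = async_samples[0]
    for sample in async_samples:
        new_ff1 = sample
        if sample != prev and rng.random() < 0.5:
            new_ff1 = METASTABLE      # edge landed near the clock: may go metastable
        if ff1 == METASTABLE:
            ff1 = rng.choice([0, 1])  # metastability resolves within one cycle
        ff2, ff1 = ff1, new_ff1       # shift through the two flops
        out.append(ff2)
        prev = sample
    return out

out = two_flop_sync([0, 0, 1, 1, 1, 1], random.Random(7))
assert METASTABLE not in out          # downstream never sees a bad level
assert out[-1] == 1                   # the new value does arrive, just late
```

The danger with an unsynchronized crossing is exactly the case this structure hides: downstream logic sampling the first flop while it is still metastable, which is the class of bug that escapes RTL simulation and causes the respins Ashar mentions.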

This is made all the more challenging, of course, by the increased amount of IP being used. One approach is to use verification IP, which partly explains Cadence’s purchase of Denali. While verification IP used to be given away for free, companies are now willing to pay for it because it solves an important problem and saves them engineering resources.

But the interaction of IP, software drivers and the hardware also has to be optimized to levels far greater than ever before in order to improve battery life and performance.

“There’s far greater interplay between drivers and components than ever before,” said Ashar. “Over time, you have to catch the hardware issues earlier, and to make the integration of components more effective the EDA tools providers have to work more closely with the IP providers.”

In many cases, the level of verification required is application-dependent. A smart phone, for example, typically goes out the door with lots of bugs and gets fixed with software updates. A missile guidance system, in contrast, goes out the door with as few bugs as possible because there is virtually no possibility of in-field updates.

Nevertheless, the key is to get a product out the door on time, as bug-free as necessary, and at the lowest NRE cost. That gets more difficult as more components need to be considered, and both the training required and the size of verification teams have increased at most major design houses.

“We’ve seen big improvements in areas like emulation, VIP and debug,” said Synopsys’ Sanie. “But the big companies also now have twice as many verification engineers as design engineers.”

And still the complexity and challenges in verification continue to grow. Better tools, better coverage metrics and better training all help—and the advances here have been huge—but no one really knows for sure if the semiconductor design industry is truly making progress, just keeping pace, or slipping under a rising tide of complexity.
