Adventures In Verification

There are plenty of tools, but knowing how and when to use them, and what pieces of a design are critical, isn’t so simple.


By Ed Sperling
Design complexity can be almost bit-mapped with verification complexity. There are so many things that need to be verified in a design these days that full coverage has become almost possible to guarantee.

That has created a market for tools to help with the verification process—formal, functional, physical—and different methodologies for using those tools. But deciding which tools and methodologies to apply, and at what point in the design flow, is becoming a serious problem. Software, in particular, goes out the door with the understanding that it may never be fully verified and debugged, which is why there are so many “critical” updates. Even Apple, which is one of the most vertically integrated companies on the planet, regularly releases software updates to fix hardware issues such as power and communication.

Hardware debugging is no less complex, and in many cases that carries over to the software. There are so many IP blocks, possible data paths and power islands that divide and conquer strategies no longer are sufficient. Verification has to start at higher levels of abstraction, work down into the guts of an IC, and then be reflected back up to the highest level to understand what’s going on across the entire system. So how do you choose the right tool at the right time?

“You’re not thinking about formal or subsystem or constrained random,” said Harry Foster, chief verification scientist at Mentor Graphics. “It’s really important to identify what needs to be verified. If you do it right, everything falls into place.”

At least that’s the goal, and experience counts for a lot in verification. Seasoned verification teams are much more effective than inexperienced ones, both in terms of solving issues and doing it quickly. They also understand what needs to be verified and what needs to be validated.

“There’s a lot of confusion about validation versus verification,” said Foster. “Validation involves building the right product. Verification is about building the product right. Validation is the customer requirement. Verification is where you deal with corner cases.”

Choosing a methodology
Where you start in verification is a matter of debate. Some say the starting point is all the way at the architectural level. Others say it’s at the block or IP level. Still others say it starts with the tools and works backward. But no matter which avenue verification engineering teams take—and there is no clear right answer—it does require a methodology. That has to be chosen, as well.

Two of the main methodological approaches are hierarchical and flat. And just to make matters more confusing, they’re not mutually exclusive. Pranav Ashar, CTO of Real Intent, favors a hierarchical methodology—but with links back to the block level.

“If you use a bottom-up approach, you leave information behind,” said Ashar. “If you abstract it out, it can lead to false positives and negatives. They might get reported at a high level, but there is less correlation with the signal and system constraints, which leads to noise. The same is true in formal and static. Tools need to be scalable, because errors can be made if the analysis is not done at the full-chip level. But analysis at the full-chip level does not preclude finding bugs at the block level. You can marry the two of these. There is a middle level.”

When pinned down on this issue, almost all experts agree: One size does not fit all. One verification flow doesn’t work everywhere. And perhaps even more important, everyone has their own philosophy about what works best.

“If you had unlimited time and money you’d use everything available to you,” said Michael Sanie, director of verification marketing at Synopsys. “But in certain segments, schedule is king. You may have three months to verify an SoC and you need to have confidence in your coverage. That job is different than if you have enough time. Ten years ago designs were processor bound, and up to five years ago they were relatively simple. It was possible to get 100% coverage. Now your coverage may be only 95% or as low as 80%, depending on what you’re verifying.”

Application is critical in this area. For example, you wouldn’t want an LTE modem failing, but you probably have lots of tolerance for a dropped call—particularly if it can be addressed later in the verification process through software. That affects methodology, as well. In some SoCs it may be more essential to verify subsystems and blocks that are considered critical than an entire system, while in others a more system-level approach to verification may be essential.

Different levels of coverage
“The block where you can’t have any bugs you have to worry about at the very beginning of the design process,” said Sanie. “But if you’re looking for 90% coverage, what do you use? That depends on the type of design, too. Is it software-driven? With an FPGA, you can see the actual software running on the chip. With simulation today, you can drive it with your own tests or a randomized testbench and then measure how much of the design has been covered. But 90% coverage requires a lot of simulation. There are ways to improve that. One is verification planning. A second is blocks with metrics and randomization using constrained random. And with IP, you first pick a vendor you can trust and run your own tests on it.”
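Sanie’s description of a randomized testbench measured against coverage metrics can be illustrated with a short sketch. The plain-Python example below is a simplified illustration, not any vendor’s flow: the bus transaction fields, the coverage bins, and the 90% goal are all assumed for the sake of the example. It generates constrained-random stimulus, tallies which functional coverage bins get hit, and reports coverage against the goal.

```python
import random

# Hypothetical bus interface (assumed for illustration): legal opcodes
# and an address range restricted to the block under test.
OPCODES = ["READ", "WRITE", "BURST_READ", "BURST_WRITE"]
ADDR_LO, ADDR_HI = 0x0000, 0x0FFF

# Functional coverage model: one bin per (opcode, address region) pair.
REGIONS = ["low", "mid", "high"]
coverage_bins = {(op, region): 0 for op in OPCODES for region in REGIONS}

def region_of(addr):
    """Map an address into a coarse coverage region."""
    span = (ADDR_HI - ADDR_LO + 1) // 3
    if addr < ADDR_LO + span:
        return "low"
    if addr < ADDR_LO + 2 * span:
        return "mid"
    return "high"

def random_transaction():
    """Constrained-random stimulus: only legal opcodes, word-aligned addresses."""
    return {
        "op": random.choice(OPCODES),
        "addr": random.randrange(ADDR_LO, ADDR_HI + 1, 4),
    }

def run_tests(num_txns, goal=0.90):
    """Generate random transactions and report functional coverage against a goal."""
    for _ in range(num_txns):
        txn = random_transaction()
        # In a real flow the transaction would be driven into the DUT and
        # checked against a reference model here; this sketch only samples coverage.
        coverage_bins[(txn["op"], region_of(txn["addr"]))] += 1

    hit = sum(1 for count in coverage_bins.values() if count > 0)
    coverage = hit / len(coverage_bins)
    print(f"Functional coverage: {coverage:.0%} ({hit}/{len(coverage_bins)} bins hit)")
    print("Coverage goal met." if coverage >= goal else
          "Coverage goal missed: add stimulus or directed tests.")

if __name__ == "__main__":
    run_tests(num_txns=200)
```

In a real environment the same idea is expressed in constrained-random sequences and coverage metrics driven against the actual design under test; the point of the sketch is only to show how “how much of the design has been covered” becomes a measurable number that a schedule-bound team can track.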

But even choosing the right level of coverage requires the right approach. That involves another set of decisions by the verification team.

“If a block is too difficult to verify using simulation, you may be better off defining the features that need to be verified,” said Mentor’s Foster. “But if you haven’t thought that through ahead of time, it’s double the work. That’s typically what we see going on among our customers. It’s more difficult with software and hardware teams and with SoCs, because a lot of companies do not have expertise in systems. When it comes to power, you may need two separate teams, one for functional verification and one for physical verification. We’re starting to see the physical world meeting the functional world. In the 1980s, timing and functionality were done at the same time. In the 1990s, they were separated. And now, with power, they’re coming back together.”

The future
Verification still commands the lion’s share of the NRE in a design. Some vendors point to numbers as high as 80% of total NRE spent on verification. How much of that can be shaved off by considering verification up front is not well understood, but almost everyone agrees it can’t hurt.

Pain always has been a driver in the IC design business. When it takes too much effort or too much time, both of which can be translated into dollars, chipmakers are far more willing to spend money on new tools and take the time to learn to use them effectively. At least on the verification side, the market for new and better tools doesn’t seem to be in danger of shrinking anytime soon. At 14nm, with double patterning, finFETs and a host of complex routing issues and physical effects, verification will be even tougher. And in stacked die it will be tougher still, because two known good die put together may no longer be functional.

But along with that, most EDA vendors believe that education will be required to understand the tradeoffs between doing verification at the block level, the system level, and in hardware and software. Design teams also need to understand what tools will work best in what situations, when to use what methodologies, and when to think like system engineers versus design engineers or software engineers. Just as no one methodology or tool fits all designs, no single classification will fit all projects.

“There is no one-size-fits-all solution,” concluded Frank Schirrmeister, group director for product marketing of the System Development Suite at Cadence. “You need to attack it from all angles. At the bottom, the IP pieces need to be cleaner. That’s a given. Then there are never enough cycles to execute verification. That’s why we’re working on different or smarter ways to execute. Even though formal is great, you’ll never rely on that alone because you want to see the thing do something. That’s why the hardware-assisted verification market is so popular. And then from the top, it’s more and more difficult to define the cases in which the device is supposed to work. And then you have to do something virtual in RTL, you put it together, and then you have the chip. Everything has to be overlaid.”


