The Growing Verification Challenge

Keeping up with complexity is forcing companies to make some unusual shifts in their coverage and methodology. Some pieces are still missing.


As complexity continues to mount in designing SoCs, so does the challenge of verifying them within the same time window and using the same compute and engineering resources.

Chipmakers aren’t always successful at this. In many cases they have to put more engineers on verification and debug at the tail end of a design to get it out the door on or close to schedule. Often that also requires new tools, more hardware-assisted approaches, and more investment for the same return.

The latest thinking is that improving the efficiency of the overall verification effort requires fundamental changes to methodology throughout the design cycle, including:

  • Better use of existing resources;
  • Bigger, pre-integrated and pre-tested IP building blocks;
  • A better understanding of where there are holes, where they are likely to show up, and how to plug them.

Getting granular with resources
Perhaps the most pronounced shift is the idea that you can accomplish more with the same tools by consciously doing less with them. That may sound counterintuitive when the goal is complete coverage, but the reality is that no single verification tool can find every bug. Understanding the limitations of each tool and focusing it on what it can find with legal stimulus means fewer cycles and less time to get that job done, leaving more time to find the remaining bugs with other methods, such as formal verification.

“The best way to leverage technology is to integrate it all,” said Michael Sanie, senior director of verification marketing at Synopsys. “If you can connect static to formal, you add different levels of verification together. But you also need a methodology on top of it all. Once you do that you can find which coverage is not reachable using legal methods and not go after those. It takes a few weeks to get to 85% coverage, a few more weeks to get to 95% coverage, and it takes months to get to anything beyond that.”
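To make that arithmetic concrete, the sketch below shows the effect of excluding unreachable coverage from the target. The bin names, hit counts, and the flag marking a bin as proven unreachable are hypothetical stand-ins for whatever a particular flow’s coverage and static/formal reports actually provide.

```c
#include <stdio.h>
#include <stdbool.h>

/* Hypothetical coverage bin: a hit count from simulation plus a flag set
 * when static/formal analysis proves the bin cannot be reached with legal
 * stimulus. Real tools export this in their own report formats. */
typedef struct {
    const char *name;
    unsigned hits;
    bool proven_unreachable;
} cov_bin_t;

int main(void)
{
    /* Illustrative data only. */
    cov_bin_t bins[] = {
        { "fifo_full_and_empty", 0,  true  },  /* impossible state      */
        { "retry_count_max",     0,  false },  /* real hole, needs work */
        { "burst_len_16",        42, false },
        { "pwr_domain3_wakeup",  7,  false },
    };
    size_t n = sizeof bins / sizeof bins[0];

    unsigned covered = 0, reachable = 0;
    for (size_t i = 0; i < n; i++) {
        if (bins[i].proven_unreachable)
            continue;                      /* drop from the denominator */
        reachable++;
        if (bins[i].hits > 0)
            covered++;
    }

    printf("raw coverage:      %u of %zu bins\n", covered, n);
    printf("adjusted coverage: %u of %u reachable bins\n", covered, reachable);
    return 0;
}
```

The bins that remain reachable but unhit are where the leftover simulation cycles and formal effort actually belong.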

Even with those extra months of work, bugs may not be found using traditional approaches. “People are still doing gate-level simulation like they were 25 years ago, but we’re now dealing with 200 million or more gates, 100 or more power domains, 50 clocks, more interface protocols and more software,” Sanie said. “Each of those is an additional challenge and it requires new technologies.”

Divide, conquer and reassemble
That complexity isn’t just a verification challenge. Designing and developing all of those pieces is a mammoth task. There’s no added value in developing USB and PCIe interfaces, for example, and it’s time consuming. The result has been an explosion in third-party IP, as well as an increased reuse of internally developed IP. But the way to differentiate is sometimes by tweaking that IP, or by putting it together in an unusual way. That makes verification much more challenging, and has led to increasing interest in integrated IP blocks or subsystems, usually with memory, I/O and a processor.

“The pain points are the subsystem and SoC level,” said Steve Bailey, director of emerging technologies for Mentor Graphics’ design verification technology group. “The challenge is that you need to do blocks within the context of where they will be integrated. That’s one of the reasons for software-driven verification.”

Software-driven verification is a hot topic inside of verification circles these days. Mentor, Synopsys and Cadence are each pushing their own version of it, the standout feature being reusable testbenches that can be run in C throughout the design flow, from RTL all the way to silicon.

“With this approach we’re seeing a change in how people operate,” said Bailey. “One company created a vice president of engineering to unify verification and validation. It’s still impossible to do functional verification before you have an FPGA prototype or you actually get silicon back, but you can do more and more up front, and more and more on the functional verification side using this approach. We’re seeing an increase in productivity overall, but you do need a better methodology to make it all work.”
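The portability is easiest to see in the test code itself. Below is a minimal sketch of such a reusable C test; the DMA register map and the reg_read32/reg_write32 access layer are hypothetical, standing in for whatever abstraction a given flow supplies to route the same calls to a bus functional model in simulation, to an emulator, or to real memory-mapped hardware.

```c
#include <stdint.h>

/* Hypothetical register map for a DMA block under test. */
#define DMA_BASE    0x40001000u
#define DMA_SRC     (DMA_BASE + 0x00u)
#define DMA_DST     (DMA_BASE + 0x04u)
#define DMA_LEN     (DMA_BASE + 0x08u)
#define DMA_CTRL    (DMA_BASE + 0x0Cu)
#define DMA_STATUS  (DMA_BASE + 0x10u)
#define DMA_START   0x1u
#define DMA_DONE    0x1u

/* Access layer: on silicon these are plain volatile pointer accesses;
 * in RTL simulation or emulation the same calls can be routed to a bus
 * functional model instead. Only this layer changes between targets. */
static uint32_t reg_read32(uint32_t addr)
{
    return *(volatile uint32_t *)(uintptr_t)addr;
}

static void reg_write32(uint32_t addr, uint32_t data)
{
    *(volatile uint32_t *)(uintptr_t)addr = data;
}

/* One reusable test: program a transfer and poll for completion. */
int dma_smoke_test(void)
{
    reg_write32(DMA_SRC,  0x20000000u);   /* source buffer (example)      */
    reg_write32(DMA_DST,  0x20004000u);   /* destination buffer (example) */
    reg_write32(DMA_LEN,  256u);          /* bytes to move                */
    reg_write32(DMA_CTRL, DMA_START);     /* kick off the transfer        */

    while ((reg_read32(DMA_STATUS) & DMA_DONE) == 0)
        ;                                 /* a real test would bound this */

    return 0;                             /* pass */
}
```

Because only the implementation of the access layer changes between targets, the test itself can move from RTL simulation to emulation to first silicon without being rewritten.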

Creating a better methodology is a common theme in verification these days. But that methodology also needs to be tweaked for every new design.

“This is a very tough problem,” said Frank Schirrmeister, group director for product marketing of the System Development Suite at Cadence. “Methodology changes from project to project. In the past it was somewhat mechanical. If you did all the necessary steps the design was done. Now, the content of what you’re trying to verify influences the methodology, and you need to build a methodology at every stage and at the beginning, because the way you code things early on impacts whether you can use that data later in the design.”

Sequencing
One concept gaining attention in creating an efficient methodology is sequencing—verifying pieces of the design at the right time and in the right order. IP can be verified even before it is sold or re-used internally by a chipmaker. The same is true for memories and processors. But once all of those pieces are integrated into an SoC, the verification becomes much more complicated because it has to take into account the context, the interconnects and increasingly even the physical effects such as power, heat and ESD.

“There are three main steps that need to be considered,” said Pranav Ashar, chief technology officer at Real Intent. “One involves the basic assembly of the IP, so you make sure it’s all implemented correctly, and once you put it together you initialize it using a scheme that’s coherent and efficient. Once that’s in place, there are a number of layers of complexity that are added on, such as asynchronous clock domain crossing. Complex IP is a sea of asynchronous interfaces. So the second step is verifying the complexity to put things together into the SoC. That also requires specific tools for power management.”
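A coherent and efficient initialization scheme usually comes down to a strictly ordered bring-up sequence, and that ordering is exactly what the verification flow has to confirm is respected. The fragment below sketches such a sequence in C; the clock- and reset-controller registers, names, and addresses are illustrative only.

```c
#include <stdint.h>

/* Hypothetical clock- and reset-controller registers. */
#define CRU_BASE        0x40000000u
#define CRU_PLL_CTRL    (CRU_BASE + 0x00u)
#define CRU_PLL_STATUS  (CRU_BASE + 0x04u)
#define CRU_CLK_ENABLE  (CRU_BASE + 0x08u)
#define CRU_RESET_CLR   (CRU_BASE + 0x0Cu)

#define PLL_ENABLE      0x1u
#define PLL_LOCKED      0x1u
#define PERIPH_CLKS     0x3Fu   /* clock gates for six peripherals */
#define PERIPH_RESETS   0x3Fu   /* matching reset-release bits     */

static inline void wr32(uint32_t a, uint32_t d)
{
    *(volatile uint32_t *)(uintptr_t)a = d;
}

static inline uint32_t rd32(uint32_t a)
{
    return *(volatile uint32_t *)(uintptr_t)a;
}

/* Ordered bring-up: each step is a precondition for the next, which is
 * exactly what reset and initialization verification has to check. */
void soc_bringup(void)
{
    wr32(CRU_PLL_CTRL, PLL_ENABLE);                /* 1. start the PLL       */
    while ((rd32(CRU_PLL_STATUS) & PLL_LOCKED) == 0)
        ;                                          /* 2. wait for lock       */
    wr32(CRU_CLK_ENABLE, PERIPH_CLKS);             /* 3. ungate clocks       */
    wr32(CRU_RESET_CLR, PERIPH_RESETS);            /* 4. then release resets */
    /* 5. per-block configuration happens only after this point. */
}
```

Get the ordering wrong, and the asynchronous interfaces and power-management logic layered on top have nothing stable to build on.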

The next step is system-level verification. While all of these steps used to be done by one verification team, the latest trend—particularly in large, complex designs—is to segment or sequence that work so that designs can run through simulators and emulators more efficiently.

Aberrations, missing pieces, and future directions
All of this sounds like it should improve things, and it does. But there is no perfect system, no silver bullet, and there probably never will be. For one thing, even though there is a push to add subsystems into SoCs to save on integration work, verifying those subsystems in context isn’t so easy.

While the behavior of an individual block is well understood, a complex subsystem is basically a mini-SoC, with all the quirks of an SoC. Verifying it in context is every bit as difficult as verifying any other part of the chip.

“The presence of a microprocessor, which many subsystems have, means that part of the traffic behaves like a CPU with bursts and latency,” said Drew Wingard, chief technology officer at Sonics. “You should be able to do static timing analysis of a subsystem. But we don’t have a good vocabulary for this yet. All we can do is guarantee it will work in these bounds. And with a standard cell library, for one cell you have at least 20 different uses.”

Wingard noted that even subsystems themselves are not always well verified. “If you buy it from Tensilica, they’re on the hook for doing the verification and they’re going to provide the necessary service for you and other customers because they have economies of scale. But with so many more system companies doing ASIC cells again, they’re essentially building subsystems and handing them to the ASIC vendors to integrate into chips. The tables have changed, and the economics are not there to make the subsystem because they may not have a customer for those subsystems again in the future.”

Pieces are missing on the front end of the design process, as well. “We can do what-if analysis with C models, but there is no established path to RTL yet,” said Real Intent’s Ashar. “And if you do verification and you find bugs in RTL it’s hard to map it back to the C model and give debug information at that level.”

In addition, these missing pieces are spilling over in some unusual ways. Mike Gianfagna, vice president of marketing at eSilicon, points to what he terms a “phantom verification problem” with IP and IP subsystems. “How do you ever get to a level of confidence that what you think is right will actually work? This is supposed to be about decreasing risk, but there’s a big hole in the current environment. How do you build chips and know you’ve made the right choices? A lot of times you’re making decisions based on previous decisions, and then you’re betting the company on those decisions.”

Taken as a whole, these issues spell opportunity for EDA vendors, and efforts are under way by startups as well as the large companies to bridge these gaps and make it easier to define methodologies. But solving these problems is a reflection of what’s happening on the chips themselves. It’s all getting more difficult.


