Last of three parts: The myriad effects of low power, the differences between OVM and VMM, and establishing good coverage models.
By Ed Sperling
Low-Power Engineering sat down to discuss rising complexity and its effects on verification with Barry Pangrle, solutions architect for low power design and verification at Mentor Graphics; Tom Borgstrom, director of solutions marketing at Synopsys; Lauro Rizzatti, vice president of worldwide marketing at EVE; and Prakash Narain, president and CEO of Real Intent. What follows are excerpts of that conversation.
LPE: With multiple power islands and states, how do you ensure your coverage model is complete?
Borgstrom: The overall verification challenge explodes when you get into some of these low-power design styles. It’s not just coverage. It’s also catching the bugs in the first place. Our surveys show more than half of companies are using some sort of low-power design technique. A couple of years ago we typically saw one or two power islands; today five or six is more common, and we’ve seen designs with up to 30 different islands. In the past you could get by with ad hoc techniques. With even five or six power domains, you need some automation in there, whether it’s UPF-driven flows or multi-voltage simulation. This isn’t just analog-like verification, either. It’s functional verification that the design actually works.
Pangrle: This is a trend that will continue, too. The amount of circuitry you can put on a chip keeps scaling, and people are using it to put on more processors and more cores. AMD came out with its first Opteron in 2003, and now it’s up to six cores. It doesn’t look like AMD and Intel are about to stop adding more cores. These chips are going into servers, and often some of those cores are idle. They’re all being measured by how much power they consume in idle mode. At that point, each core becomes a candidate for a power island so you can throttle it back or shut it down. The number of islands is just going to continue to scale. Having that process in place from the beginning, and using a format like UPF to capture how you’re going to partition the design and to carry that information through the design flow, opens it up for the tools to see what’s stored there and what the power intent is. That allows you to develop tests around it so you can make sure you’re verifying those different cases and different modes. But the reality is you won’t have all those states operating at the same time, so you can specify which modes are allowed at any one time.
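To make the coverage question concrete, here is a minimal SystemVerilog sketch of the kind of power-state coverage model Pangrle describes. The domain names, the three-state model, and the particular legal-mode rule (the memory controller must stay on while any core island is active) are assumptions for illustration, not drawn from any specific design.

```systemverilog
// Hypothetical power-state coverage model for a design with three power
// islands. States, names, and the legal-mode rule are illustrative only.
typedef enum {PD_ON, PD_RETENTION, PD_OFF} pd_state_e;

class power_state_coverage;
  pd_state_e cpu_state;   // e.g., a CPU core island
  pd_state_e gpu_state;   // e.g., a graphics island
  pd_state_e mem_state;   // e.g., a memory-controller island

  covergroup power_modes_cg;
    cp_cpu : coverpoint cpu_state;
    cp_gpu : coverpoint gpu_state;
    cp_mem : coverpoint mem_state;

    // Cross the island states and mark combinations the power intent
    // disallows as illegal. Assumed rule: the memory controller may not be
    // off while either core island is fully on.
    cx_modes : cross cp_cpu, cp_gpu, cp_mem {
      illegal_bins mem_off_while_active =
        binsof(cp_mem) intersect {PD_OFF} &&
        (binsof(cp_cpu) intersect {PD_ON} || binsof(cp_gpu) intersect {PD_ON});
    }
  endgroup

  function new();
    power_modes_cg = new();
  endfunction

  // Call on every observed power-mode change.
  function void sample_mode(pd_state_e cpu, pd_state_e gpu, pd_state_e mem);
    cpu_state = cpu;
    gpu_state = gpu;
    mem_state = mem;
    power_modes_cg.sample();
  endfunction
endclass
```

Crossing the island states and then pruning the disallowed combinations makes it possible to measure coverage against exactly the modes the power intent permits, rather than against every theoretical combination.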
Borgstrom: This also requires a shift in methodology and in how people go about verifying these low-power designs. What are the best practices in architecting a verification environment for low-power designs? You need to make sure you verify the power management unit and all the power transitions. One of the things we’ve done in collaboration with ARM and Renesas is write a book on low-power verification techniques called “VMM for Low Power.”
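Borgstrom’s point about verifying the power management unit and the transitions themselves can be illustrated with assertions. The sketch below is only illustrative: the signal names (pwr_on, iso_en, ret_save) and the single-cycle timing are invented for the example, and in practice such checks would be tied to the control signals described in the UPF.

```systemverilog
// Hypothetical power-transition checks for one domain. Signal names and the
// one-cycle timing are assumptions; real checks would bind to the control
// signals the UPF describes.
module pmu_transition_checks (
  input logic clk,
  input logic rst_n,
  input logic pwr_on,    // domain power-switch enable
  input logic iso_en,    // isolation enable for the domain's outputs
  input logic ret_save   // retention-save pulse for the domain's registers
);

  // Isolation must already be enabled on the cycle power is removed.
  a_isolate_before_shutdown : assert property (
    @(posedge clk) disable iff (!rst_n) $fell(pwr_on) |-> iso_en)
    else $error("Power removed before isolation was enabled");

  // A retention save must occur on, or one cycle before, power removal
  // (simplified timing for the sketch).
  a_save_before_shutdown : assert property (
    @(posedge clk) disable iff (!rst_n) $fell(pwr_on) |-> ret_save || $past(ret_save))
    else $error("Power removed without a retention save");

  // Isolation must not be released while the domain is still powered down.
  a_no_release_while_off : assert property (
    @(posedge clk) disable iff (!rst_n) $fell(iso_en) |-> pwr_on)
    else $error("Isolation released while the domain was off");

endmodule
```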
Rizzatti: The road map from Intel shows that by 2011 it will have 4 billion gates and 128 cores.
LPE: That’s the Larrabee chip, right?
Pangrle: Yes. And Nvidia has 512 cores. It’s 16 streaming processors, each with 32 cores.
LPE: But with low power, all of this has to be designed up front. Does verification need to be considered up front, as well?
Narain: Architects have to consider performance, timing and power.
LPE: But it’s also a business case that it has to come out the door on time, right? Do we need verification IP?
Borgstrom: More and more, verification is becoming a limiting factor on the scale and scope of the most complex designs. We’ve talked with customers that have scaled back the functionality of their designs and rejected design changes because of the impact those changes would have on verification schedules.
LPE: You mentioned VMM, and there’s been a lot of talk about how that stacks up against OVM. Does it matter which methodology verification engineers use?
Pangrle: Open standards matter. Mentor has donated technology for UCIS (the Unified Coverage Interoperability Standard), and everyone has access to the UCDB (Unified Coverage Database) work we have put together to help track coverage information. Having that kind of format in the industry, where you have the freedom to go from one vendor to another, helps speed the adoption of these new technologies. If you’re using tools from only one vendor, you’re at risk, because if anything happens to that vendor you’re stuck. If you have a choice, you’ve got options down the road.
Borgstrom: The debate sometimes gets a bit tiresome. The industry seems to love a controversy. The VMM came out in 2005, so it’s been in production for five years. It has more than 500 successful tapeouts, lots of companies using it, and it has evolved and expanded since we first launched it. We’ve had quite a lot of interaction with our customers around methodology as we develop and enhance this. One thing we’ve heard is that customers want a single industry standard methodology that’s driven by an open standards body so they can have interoperable verification environments. When we first released VMM it was the first open methodology with a specification. We published details on the library and the methodology. We then open sourced the library and the applications built on top of it.
Pangrle: Sometimes you guys have nasty terms when you download your software, though. There are strings attached.
Borgstrom: It’s a standard Apache 2.0 license. There are no strings attached.
Pangrle: The .lib parsers supposedly are open, but the license language says that if any dispute ever arises between the two companies, you immediately lose access.
Borgstrom: I thought we were talking about verification here. I’m not the right person to talk to about .lib. But VMM is available under Apache 2.0. In any event, there are two methodologies that have gotten attention. The cry I hear is from users who want one standard methodology so they can get on with innovating.
Pangrle: Does that mean Synopsys is going to support OVM?
Borgstrom: We support developing an open industry standard driven by an industry organization like Accellera. There has been great work done by the Accellera subcommittee. The next step will be to come up with a common base class library that will go a long way toward bringing unity and progress here.
LPE: What do the other participants think about this?
Narain: We’re neutral. We don’t have a stake here.
Rizzatti: We run a survey each year, and one of the questions is VMM vs. OVM. VMM is ahead of OVM in respondents’ checkmarks, but not by much.
LPE: What’s the next big challenge for verification? Is it complexity, is it integration?
Narain: It’s the cost. And verification is such a multidimensional problem today that no one way is going to solve it. The only way to deal with this is to break it up into its pieces and to have the most cost-effective solution for each piece. One of the biggest problems today is simulation. We’re trying to throw everything into the simulation cauldron. We have to find a way around simulation. That’s where formal technology becomes important. But formal is an engine. If you don’t package it properly it’s useless.
Pangrle: It really is a cost-driven process in the end. People are trying to figure out how to get chips out the door with the least expense and in the least amount of time. It really is more than just a point-tool solution. Tools that can automatically look at the testbenches and vectors you’re running for verification can get you the same coverage with a fraction of the tests. Rather than just looking at how fast two simulators are, if you look at the whole system you can cut down the number of tools and get better-quality results.
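One simple way the test-ranking idea Pangrle alludes to can work is a greedy pass over per-test coverage data: repeatedly pick the test that hits the most not-yet-covered bins and stop when nothing new is added. The sketch below is a hypothetical illustration; the test names and bin vectors are made up, and a commercial tool would read this data from a coverage database rather than hard-code it.

```systemverilog
// Illustrative-only greedy test ranking by coverage. Test names and bin
// vectors are made up; a real tool reads per-test coverage from a database.
module test_ranking_demo;

  localparam int NBINS = 6;

  typedef struct {
    string          name;
    bit [NBINS-1:0] bins;   // bit i set => this test hits coverage bin i
  } test_cov_t;

  // Repeatedly pick the unused test that adds the most new coverage.
  function automatic void rank_tests(input test_cov_t tests[],
                                     output string order[$]);
    bit [NBINS-1:0] covered = '0;
    bit used[] = new[tests.size()];
    int best, best_gain, gain;

    repeat (tests.size()) begin
      best = -1;
      best_gain = 0;
      foreach (tests[t]) begin
        gain = $countones(tests[t].bins & ~covered);
        if (!used[t] && gain > best_gain) begin
          best = t;
          best_gain = gain;
        end
      end
      if (best == -1) break;           // no remaining test adds coverage
      used[best] = 1;
      covered |= tests[best].bins;
      order.push_back(tests[best].name);
    end
  endfunction

  initial begin
    // Three hypothetical tests over six coverage bins.
    test_cov_t tests[] = '{'{"smoke",      6'b000011},
                           '{"power_walk", 6'b001110},
                           '{"stress",     6'b111100}};
    string order[$];
    rank_tests(tests, order);
    foreach (order[i]) $display("rank %0d: %s", i, order[i]);
  end

endmodule
```

With these made-up vectors the function picks "stress" first (four new bins), then "smoke" (two more), and never selects "power_walk," which by then adds nothing new.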
Rizzatti: Part of it is cost, but part of it is saving respins and not being late to market. If you’re two months late you can lose one-third of your revenue. That’s hundreds of millions of dollars.
Borgstrom: Two of the biggest drivers for verification are cost and software. They’re related, and they’re being driven by the complexity of devices today. What’s important is that there are different types of verification done at different phases. Whether it’s algorithm analysis or transaction-level modeling or RTL simulation or analog/mixed signal simulation or hardware-software validation on a virtual prototype—all of those have to work together in a flow. Being able to successfully transition from one phase to the next and making sure all the tools work together is really important.