Experts At The Table: Low-Power Verification

Second of three parts: The impact of IP on power verification; finding the right abstraction level for the right portion of verification; more checks to do; technological limitations on speed.


Low-Power Engineering sat down to discuss the problems of identifying and verifying power issues with Barry Pangrle, solutions architect for low-power design at Mentor Graphics; Krishna Balachandran, director of low-power verification marketing at Synopsys; Kalar Rajendiran, senior director of marketing at eSilicon; Will Ruby, senior director of technical sales and support at Apache Design; and Lauro Rizzatti, general manager of EVE-USA. What follows are excerpts of that conversation.

LPE: How does IP affect verification from a power perspective?
Pangrle: In the old days you’d see processor cores quoting watts per megahertz. When you’re operating at a single clock frequency and voltage the whole time, and there is no other state, that’s at least a reasonable approximation. Now look at the levels of complexity above that, with multicore designs that may be operating at different voltage levels or different clock frequencies. Watts per megahertz goes right out the window. There is a challenge in how you get that kind of information. If I’m a designer working on an SoC, I’m trying to figure out which is my lowest-energy core to put in there, and at what voltage and frequency I want to run it. How do you make these tradeoffs if you don’t have any data to work with? And then, once you have this, how do you hook it all together? You need to give that to the customer who’s going to use all this IP. That can be third-party IP from outside, or IP developed in-house that you want to re-use from project to project. How do you keep that information so the next designer who picks it up knows how to use it? That’s complicated further by the fact that a lot of the control for these SoCs is moving out of hardware and into software. From a verification standpoint you need some environment where you can bring in the software and get an idea. Often what’s going to determine the power is the software running on top of it. With phones we see instances where you download a software update and your battery life gets a lot better, or it gets worse.
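
To make the tradeoff Pangrle describes concrete, here is a back-of-the-envelope sketch, with every number invented, of how the energy needed to finish a fixed workload changes across voltage/frequency operating points. A single watts-per-megahertz figure cannot capture the interaction of voltage, runtime and leakage.

```python
# Hypothetical illustration of per-operating-point energy for a fixed task.
# The capacitance, leakage and cycle counts are made up for the sketch;
# real values come from library and IP characterization data.

CYCLES = 2_000_000_000      # cycles needed to finish the task (assumed)
C_EFF  = 1.0e-9             # effective switched capacitance per cycle, farads (assumed)
I_LEAK = 0.05               # leakage current in amps, roughly scaling with voltage (assumed)

operating_points = [        # (label, voltage in V, frequency in Hz) -- all assumed
    ("high-perf", 1.10, 2.0e9),
    ("nominal",   0.90, 1.2e9),
    ("low-power", 0.75, 0.6e9),
]

for name, v, f in operating_points:
    t = CYCLES / f                      # runtime in seconds
    p_dyn = C_EFF * v * v * f           # dynamic power: C * V^2 * f
    p_leak = I_LEAK * v                 # crude leakage model
    energy = (p_dyn + p_leak) * t       # joules for the whole task
    print(f"{name:10s}  power {p_dyn + p_leak:6.3f} W  runtime {t:5.2f} s  energy {energy:6.3f} J")
```

Running the same workload at each point gives very different power and energy numbers, which is exactly the per-operating-point data an SoC integrator needs from an IP supplier.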

LPE: We’re used to verifying pieces of this, but now we’re dealing with interactions as well. Don’t we have to raise the level of verification?
Pangrle: You do. There’s almost a renewed interest in high-speed emulation. Clock frequencies, especially for the x86 processors that are used for most simulation, have stalled out since about 2004. We’re getting access to more cores, and if you’re doing regression tests that’s good. But if you’ve got software that you need to run and you can’t break that up, then the reality is that single-threaded performance hasn’t increased much. We’re seeing a lot of interest in higher-speed simulation and verification to help address that problem.
Rizzatti: Raising the level of abstraction won’t do any good in computing power consumption and verifying power domains because you don’t have the accuracy. Here we are really talking about cycle accuracy. Emulation is the alternative. It’s cycle accurate but you have performance that is orders of magnitude faster than a simulator. You can simulate the interaction of the software with the hardware and verify power domains turning on and off. There is lots of interest here.
Rajendiran: With chips becoming more complex and larger, if you have 100 power islands, at any time you may have one or two operating and the rest you want to put into a lower-power state. So how do you know those things have actually been turned down to a low-power state? That becomes a verification challenge. The more domains and islands you have, the bigger the challenge to make sure your intent is implemented at the chip level.
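
As a rough illustration of the island-level check Rajendiran describes, the sketch below scans a made-up trace of power-island states and flags any cycle where more islands are on than the intent allows. The trace format, island names and limit are all assumptions, not the output of any particular tool.

```python
# Minimal sketch of a trace-level check that power islands follow the intent:
# at any sampled cycle, no more than MAX_ACTIVE islands may be in the ON state.
# Island names, states and the trace itself are invented for the example.

MAX_ACTIVE = 2

# cycle -> {island_name: "ON" | "RETENTION" | "OFF"}
trace = {
    100: {"cpu0": "ON", "cpu1": "OFF", "gpu": "RETENTION", "modem": "OFF"},
    200: {"cpu0": "ON", "cpu1": "ON",  "gpu": "ON",        "modem": "OFF"},
}

for cycle, islands in sorted(trace.items()):
    active = [name for name, state in islands.items() if state == "ON"]
    if len(active) > MAX_ACTIVE:
        print(f"cycle {cycle}: {len(active)} islands active "
              f"({', '.join(active)}), expected at most {MAX_ACTIVE}")
```
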
Ruby: Power verification has many different aspects. There is power intent verification. There is power consumption verification. There’s probably also something to be said about power integrity verification. Test power is orders of magnitude higher than functional power because all of the low-power techniques and clock gating are not working in test mode. Everything is on and running at the same time. You have to verify physical power integrity, as well. Verification in the past used to be RTL or something above that. For power it needs to be done at multiple levels, from the system level to RTL. And what do you feel comfortable with? If you do RTL emulation you can understand the hardware behavior of your design. The question is how software drives that hardware. Can you establish that kind of behavior in accelerators, or do you need to bring the hardware up a level and truly make it work with the software? Based on the RTL description we can estimate power consumption. The further you move up in levels of abstraction, the more you lose. At the gate level you get good accuracy. As you move up to RTL you start losing accuracy. As you move up to the system level you may lose so much accuracy that the analysis becomes useless. There is a tradeoff here between the types of verification you do at the higher level, including power consumption, and the turnaround time that’s required.

LPE: Aren’t we changing verification, though? We’re taking it both up in abstraction and down to the most basic level.
Pangrle: It isn’t an either/or issue. It’s an ‘and.’ You need to be checking this at multiple levels. You can’t ignore one level and hope everything will work out okay. You have to make sure you’re accounting for all the different scenarios a complex SoC may run under. One thing that gets challenging is that if you’re running software on a virtual prototype or some type of accelerated environment, you can’t run that many cycles when you’re looking at what’s happening on the power grid. Only certain pieces are of interest to you. When the design is doing what it’s supposed to be doing, which is most of the time, that’s probably not where it’s going to fail.
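
One way to act on that, sketched here with assumed data, is to mine a long activity trace for its busiest window and hand only that slice to detailed power-grid analysis. The per-cycle toggle counts below stand in for whatever switching-activity data the emulation or prototyping flow actually produces.

```python
# Sketch of picking the worst-case activity window out of a long run so that
# only this slice is fed to detailed power-grid (IR-drop) analysis.
# The toggle counts are invented stand-ins for real switching-activity data.

def worst_window(toggles, window):
    """Return (start_index, total_toggles) of the busiest sliding window."""
    best_start, best_sum = 0, sum(toggles[:window])
    current = best_sum
    for i in range(1, len(toggles) - window + 1):
        current += toggles[i + window - 1] - toggles[i - 1]
        if current > best_sum:
            best_start, best_sum = i, current
    return best_start, best_sum

toggles = [120, 80, 400, 950, 870, 910, 150, 60, 700, 720]   # assumed per-cycle data
start, activity = worst_window(toggles, window=3)
print(f"feed cycles {start}..{start + 2} to grid analysis (activity={activity})")
```
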
Balachandran: Software verification is important. A lot of it is also done with methodology. The verification methodology is changing compared to non-low-power verification. There is also a link to SystemC simulation in those environments. And we are seeing customers with a renewed vigor to do gate-level simulations. Some customers now have a methodology of not taping out their chip without doing more extensive gate-level simulations for power. Low power has caused a renewed focus in that area.
Rajendiran: When you are splitting microwatts and nanowatts, you have to go to that level. RTL was great 20 years ago because you were doing simple blocks with 20,000 gates. There are so many libraries at the same process node and for the same flavor that it becomes very critical which library you pick. That can have a huge swing factor for a mobile device. If you have a chip verified in this process at this foundry but it’s not at a price point you like, you can’t just move it. Gate level is becoming a lot more important from that perspective.
Rizzatti: A simulator running at the gate level will take forever. We’re talking many years.
Ruby: We need to do this with multiple levels. But one of the key elements of the solution involves technology-dependent models that can take you through the levels of abstraction. There may be multiple models for multiple levels, but you may want to say you’ve done this design, you know what the power profile is, and you can measure all the way down to the gate or transistor level. You can then abstract the model for that block, push it up several levels of abstraction, and still maintain as much accuracy as is practically possible. You can do this because it’s not being done from scratch. If you don’t have a netlist and you don’t have a post-synthesis result, you can still estimate power consumption based on RTL and create an RTL power model. Technology-dependent modeling, not milliwatts per megahertz, will be the key to this puzzle.
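
A toy sketch of the technology-dependent modeling idea Ruby describes: per-block coefficients that would be calibrated once from gate- or transistor-level data are applied to RTL toggle activity to produce a quick estimate. The block names, coefficients and toggle rates are invented for the example.

```python
# Toy technology-dependent RTL power model. The energy-per-toggle and static
# numbers would come from lower-level characterization of a given library;
# everything below is made up to show the structure of such a model.

rtl_power_model = {
    # block: (picojoules per toggle, static milliwatts) -- assumed calibration
    "decoder":  (0.8, 1.2),
    "alu":      (1.5, 2.0),
    "reg_file": (0.6, 3.5),
}

def estimate_power_mw(toggle_rates, freq_hz):
    """Estimate total power in mW from RTL toggle rates (toggles per cycle)."""
    total = 0.0
    for block, (pj_per_toggle, static_mw) in rtl_power_model.items():
        toggles_per_sec = toggle_rates.get(block, 0.0) * freq_hz
        dynamic_mw = toggles_per_sec * pj_per_toggle * 1e-12 * 1e3   # pJ/s -> mW
        total += dynamic_mw + static_mw
    return total

print(estimate_power_mw({"decoder": 0.2, "alu": 0.35, "reg_file": 0.5}, 1.0e9))
```

The same structure can hold separate coefficient sets per library or process corner, which is what lets the model stay technology-dependent as the design moves between abstraction levels.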


