Experts At The Table: Low-Power Verification

Last of three parts: What’s still missing in UPF 1801; who’s worrying about power; getting more proactive about dealing with low power issues; leakage vs. dynamic power.


By Ed Sperling
Low-Power/High-Performance Engineering sat down to discuss power format changes with Sushma Hoonavera-Prasad, design engineer in Broadcom’s mobile platform group; John Biggs, consultant engineer for R&D and co-founder of ARM; Erich Marschner, product marketing manager at Mentor Graphics; Qi Wang, technical marketing group director at Cadence; and Jeffrey Lee, corporate application engineer at Synopsys. What follows are excerpts of that conversation.

LPHP: Do verification teams deal with power issues early enough?
Marschner: There are two classes of customers I deal with. One class is still doing power verification at the back end and throwing it over the wall to the RTL guys, who have to deal with it. There is a second and growing class, which is dealing with power at the RTL end and throwing it over the wall the other way. There’s a transition we’re seeing as the whole concept of specifying power earlier is beginning to take hold. They’re seeing the value of doing it earlier and reaping the benefits of early verification and analysis.
Biggs: There may be a Catch-22 situation here. People aren’t doing it earlier in the design flow because they don’t have the tools and the technology to make that easy. Maybe standards like UPF 1801, plus support from the EDA industry, will make it easier to move power as a prime design constraint further up the design flow—beyond RTL and up into the system level.
Hoonavera-Prasad: People are more than happy to receive a power intent that is already well defined. They don’t have to worry about the details of how many domains there are, what the controls are, what the functional behavior is, what the dynamic behavior is. It’s constructive to have power intent early on. The limitation from a user perspective is that power intent is good at capturing leakage management techniques, but when it comes to dynamic power—annotating activity factors and doing early power estimation—the standard is still lacking. Our focus now is on improving the standard for power estimation and for modeling power consumption.
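The leakage management techniques the standard does capture well—power domains, switches, isolation, retention—look roughly like the following UPF 1801-style fragment. This is a minimal, abbreviated sketch; the domain, supply, and signal names (PD_modem, VDD_sw, modem_sleep, and so on) are invented for illustration and are not from the discussion:

```tcl
# Hypothetical power intent for one power-gated block (names illustrative)
create_power_domain PD_modem -elements {u_modem}

# Always-on and switched supply nets for the domain
create_supply_port VDD
create_supply_net  VDD    -domain PD_modem
create_supply_net  VDD_sw -domain PD_modem

# Power switch driven by a sleep control signal
create_power_switch sw_modem -domain PD_modem \
    -input_supply_port  {in  VDD} \
    -output_supply_port {out VDD_sw} \
    -control_port       {ctrl modem_sleep} \
    -on_state           {on_state in {!ctrl}}

# Clamp the block's outputs while it is off, and retain key state
set_isolation iso_modem -domain PD_modem \
    -applies_to outputs -clamp_value 0 \
    -isolation_signal modem_iso -isolation_sense high

set_retention ret_modem -domain PD_modem \
    -retention_supply_net VDD \
    -save_signal {modem_save high} -restore_signal {modem_restore low}
```

Note that this is exactly the static, leakage-oriented view being described: nothing in it says how often the domain is switched or what the dynamic activity looks like.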

LPHP: So what is EDA doing about it?
Lee: We need to move system power into the standard so that everyone who is designing will be able to get that early power estimation and use it in a common language.
Wang: This is truly a new methodology for low-power designs, but it requires some investment from users. In the past six to seven years, the industry has changed. Customers can learn the hard way or the easy way. The hard way is taping out a chip that fails. The easy way is to become more proactive and get educated. I’m seeing more and more people in the second camp. They want to proactively address low-power issues. At the same time, we need to develop tools to address the challenges that arise in real designs.

LPHP: The hardware guys have been working with power for a long time, but IP and software are far less efficient. How do we deal with that?
Hoonavera-Prasad: This is a very important topic for system design companies. There are multiple CPUs and an operating system; you can use the hooks the operating system gives you, but beyond that you’re constrained by what you’re using. You have to deal with processor architectures like big.LITTLE, manage the context switching, and manage power gating dynamically with dual-core balancing. This is something you cannot capture in the standard. The standard will give you a static view of the design. But what you need to do dynamically to manage power and prevent a thermal event is a dynamic problem. We need software co-simulation. That’s been important—being able to optimize the software ahead of time. You cannot do software after the chip tapes out anymore. That luxury is lost. You have to start planning for software development early on. We talk to software guys and educate them about power switches and isolation, and they have to understand that some of the pain may be related to their lack of understanding about power. The concept of all these power management features, and how they affect a software use case, has to be understood. That’s something that can be done through emulation.

LPHP: It’s difficult for customers to understand sometimes that certain parts need to be off to fit into a power budget. How do you deal with that?
Hoonavera-Prasad: It’s not always possible for us to talk to the customer about how these use cases have to be created, but it’s much easier on the mobile side. There may be 10 discrete use cases we get, such as MP3 playback, video playback, Angry Birds. You have 10 benchmarks you’re graded on. You can Google them and you have the entire competitive landscape. Once you have that system-level budget in mind, you need to plan for all these different use cases and do use-case analysis. UPF will not help with system-level analysis. It’s not just a leakage problem. It’s also dynamic activity, like when a block wakes up in a window to do housekeeping activities. You get a spreadsheet with all the major IP blocks that affect power, you analyze them by adding an activity factor to each, and you compute power numbers based on that. Then you pass that information on to software teams so they can figure out how to manage the cores. For mobile phones, the easiest use case is deep sleep. You can power gate the chip off so that only a small sliver of logic is always on. With that in mind, we know how many power domains to create, everything that needs to be shut off, and what voltage needs to be applied. It’s not a simple answer. It’s a system design problem. We have to extend UPF to account for these dynamic activities, which it doesn’t address today.
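The spreadsheet exercise described here—per-block power scaled by an activity factor for each use case, then summed—can be sketched in a few lines. The block names, milliwatt figures, and activity factors below are invented for illustration, not taken from the discussion:

```python
# Sketch of use-case power estimation: each IP block's dynamic power is
# scaled by a per-use-case activity factor and summed with its leakage.
# All block names and mW numbers are illustrative.

# Per-block peak dynamic power and leakage, in mW
blocks = {
    "cpu":     {"dynamic": 400.0, "leakage": 20.0},
    "gpu":     {"dynamic": 600.0, "leakage": 30.0},
    "display": {"dynamic": 150.0, "leakage": 5.0},
    "audio":   {"dynamic": 30.0,  "leakage": 1.0},
}

# Activity factors per use case (0.0 = power-gated off, 1.0 = fully active)
use_cases = {
    "mp3_playback":   {"cpu": 0.05, "audio": 1.0},
    "video_playback": {"cpu": 0.3, "gpu": 0.5, "display": 1.0, "audio": 1.0},
}

def estimate_power(activity):
    """Estimated total power (mW) for one use case.

    A block with activity 0.0 is treated as power-gated, so its leakage
    is dropped too; an active block pays full leakage plus scaled
    dynamic power.
    """
    total = 0.0
    for name, b in blocks.items():
        a = activity.get(name, 0.0)
        if a > 0.0:
            total += a * b["dynamic"] + b["leakage"]
    return total

for uc, activity in use_cases.items():
    print(f"{uc}: {estimate_power(activity):.1f} mW")
```

This is the static, first-pass budget; it deliberately ignores the dynamic effects mentioned above (context switching, thermal throttling, housekeeping wakeups), which is exactly the gap the panelists want addressed.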
Biggs: We put a label on that. It’s ‘dark silicon.’ You have the area budget to build the transistors, but you don’t have the power budget to light them all up.
