Experts At The Table: The Power Problem

Last of three parts: Who will create power models, peak vs. sustained power, and power envelopes and their impact on performance and functionality.


By Ed Sperling
Low-Power Engineering sat down to discuss the issues in low-power design with Vic Kulkarni, general manager and senior vice president of the RTL business unit, Apache Design Solutions; Pete Hardee, solutions marketing manager at Cadence; Bernard Murphy, chief technology officer at Atrenta, and Bhavna Agrawal, manager of circuit design automation at IBM. What follows are excerpts of that conversation.

LPE: How does this industry create models? No one is thinking about thermal over time or performance over time.
Agrawal: People have modeled temperatures for a long time. When they model corners they model temperatures. One of the major impacts over time is temperature.

LPE: But typically that’s a relationship of, ‘If you do this then this will occur over time.’ What we’re talking about here isn’t a standard corner.
Hardee: A better way to describe it might be vectors. You need to take a wide variety of system modes so you get all the different power situations that can happen. Intermittent peak power may not be enough to cause thermal issues because the peaks may not be high enough. But close to peak may cause a thermal issue. As long as you have the vectors for those cases, with power analysis tools we’ll find those cases.
Kulkarni: That’s especially true when you have islands of currents. That’s where you see the huge rush currents and peak power. You also need to look at the sustained power ahead of and behind those peaks. That way you can address electromigration and the grid design. But that is sometimes missing in the stimulus. Exciting those conditions is critical for the functional vectors.
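
The distinction Hardee and Kulkarni draw between intermittent peaks and sustained near-peak power can be made concrete with a sliding-window scan of a per-cycle power trace. The sketch below is a minimal illustration, not any vendor’s tool; the trace values, window length and thresholds are all hypothetical.

    # Separate instantaneous peaks from sustained near-peak power in a
    # per-cycle trace. All numbers here are hypothetical.
    def worst_sustained(trace_mw, window):
        """Highest average power over any window consecutive cycles."""
        total = sum(trace_mw[:window])
        worst = total
        for i in range(window, len(trace_mw)):
            total += trace_mw[i] - trace_mw[i - window]
            worst = max(worst, total)
        return worst / window

    trace_mw = [120, 130, 900, 125, 700, 720, 710, 705, 715, 130]  # mW/cycle
    window = 4

    peak = max(trace_mw)                           # stresses the grid, EM
    sustained = worst_sustained(trace_mw, window)  # stresses thermals

    print(f"instantaneous peak: {peak} mW")
    print(f"worst {window}-cycle sustained average: {sustained:.0f} mW")
    # The single 900 mW spike may be thermally benign, while the lower
    # 700 mW plateau is the real thermal hazard Hardee describes.

In this toy trace the highest single cycle and the worst sustained window land on different cycles, which is exactly why both kinds of vectors are needed.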
Murphy: I have seen some people doing heat diffusion equations where they are trying to model how the temperature evolves in the die. Then you have an even more complex problem.
Agrawal: But that’s coming. It has to go hand in hand with power management on a chip.
Murphy: If you don’t do it you have to bound everything by an envelope that says, ‘It could go here.’ That’s not optimal.
Agrawal: We really need better power models. We don’t have them.
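
For reference, the heat-diffusion modeling Murphy mentions solves ∂T/∂t = α∇²T plus a source term derived from the power map. A minimal one-dimensional explicit finite-difference sketch, with made-up material constants and a hypothetical heat source, looks like this:

    # Minimal 1-D heat diffusion sketch: dT/dt = alpha * d2T/dx2 + q(x).
    # Constants and the source profile are made up for illustration.
    n, dx, dt = 50, 1e-4, 1e-5      # grid points, spacing (m), time step (s)
    alpha = 9e-5                    # thermal diffusivity (m^2/s), hypothetical
    T = [25.0] * n                  # start at ambient, 25 C
    q = [0.0] * n
    q[20:30] = [500.0] * 10         # heating under a hot block (C/s)

    for _ in range(2000):           # march the explicit scheme for 20 ms
        Tn = T[:]
        for i in range(1, n - 1):
            Tn[i] = T[i] + dt * (alpha * (T[i+1] - 2*T[i] + T[i-1]) / dx**2 + q[i])
        Tn[0], Tn[-1] = 25.0, 25.0  # edges held at ambient
        T = Tn

    print(f"hotspot temperature after 20 ms: {max(T):.1f} C")

Even this toy version shows why the problem gets harder: the temperature field, and therefore leakage, depends on the whole history of the power map, not on any single corner.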

LPE: Who’s going to create them? Is it the tools vendors, the foundries or the chipmakers?
Agrawal: At IBM we have been struggling with this so we began creating our own power models. But we’d be more than happy to standardize these models if we could get other vendors to work on them with us. What will happen is the people who need them will create them, and eventually we will see some standard models come out of this.
Hardee: It’s the kind of model you’d expect to see in a standard cell library.
Agrawal: A standard cell library all the way up to the ESL level. You have to have models at all levels.
Kulkarni: That model would need to contain net capacitances, cell types, cell inferencing and, as you move up the levels, some of the thermal effects. Capturing those at a higher level of abstraction will be a challenge. Power is where timing was 15 years ago. As an industry it took us a few years to create timing models and area-optimization curves, and the same thing is now coming together for power, where a lot of the physical effects are getting captured. But from a model standpoint, power is not just capacitance, either. A lot of cell inferencing, clock domains and various clock trees have to be captured at the higher level of abstraction and then carried into RTL. Otherwise it’s only implementation and verification, and we head completely away from the power goals. Many techniques we use today in the implementation world really are directed toward verifying the designer’s intent. But the designer may be off from the goal.
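
As background to the capacitance point, the switching component of power is commonly estimated per net as P = α·C·V²·f plus leakage, where α is the switching activity. The sketch below is a first-order illustration with hypothetical numbers, not a library model; some formulations include an extra factor of 1/2 depending on how activity is defined.

    # First-order per-net power estimate: P_dyn = activity * C * V^2 * f,
    # plus leakage. All numbers are hypothetical.
    def net_power_mw(activity, cap_ff, vdd, freq_mhz, leak_uw=0.0):
        """Dynamic plus leakage power for one net, in milliwatts."""
        p_dyn_w = activity * (cap_ff * 1e-15) * vdd**2 * (freq_mhz * 1e6)
        return p_dyn_w * 1e3 + leak_uw * 1e-3

    # A 20 fF net toggling on 15% of cycles at 1.0 V and 500 MHz,
    # with 2 uW of leakage:
    print(f"{net_power_mw(0.15, 20, 1.0, 500, leak_uw=2):.4f} mW")

The cell inferencing and clock-domain effects Kulkarni describes are what this first-order form leaves out: the activity α is not a constant of the technology.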
Murphy: There are many challenges in power, but one is modeling the modal aspect. It’s not just about the technology. It’s how you use that cell. If you take a video codec, the power consumption is a very complex function of how that codec is used. Do you wrap an envelope around it and say, ‘It’s going to be somewhere in here?’ If you do that you accept a fairly significant level of inaccuracy.
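
Murphy’s modal point can be shown with a toy model. The modes, per-mode power numbers and usage profile below are invented purely for illustration:

    # Illustrative modal power model for a codec-like block.
    # Modes, power numbers and the usage profile are all hypothetical.
    modes_mw = {"idle": 2.0, "decode_720p": 95.0,
                "decode_1080p": 180.0, "encode_1080p": 310.0}

    # Fraction of time spent in each mode for one assumed use case.
    usage = {"idle": 0.70, "decode_720p": 0.20,
             "decode_1080p": 0.08, "encode_1080p": 0.02}

    expected = sum(modes_mw[m] * usage[m] for m in modes_mw)  # usage-weighted
    envelope = max(modes_mw.values())                         # worst-case bound

    print(f"usage-weighted power: {expected:.1f} mW")   # about 41 mW
    print(f"envelope bound:       {envelope:.1f} mW")   # 310 mW

Designing to the 310 mW envelope when typical use draws about 41 mW is the inaccuracy, and the built-in inefficiency, that the next question raises.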

LPE: You can build in inefficiency by setting that parameter too high, right?
Murphy: Yes.
Agrawal: I don’t think you can talk about power without talking about what you’re going to use that power for. If you’re looking at that codec and you’re worried about the thermal effects, you’re just looking at the average. If you’re looking at the instantaneous power, you would look at every single thing that’s happening. You have to define it differently. Power varies widely, but we have to define why we’re using it.
Murphy: That comes back to the application. How are you going to use it?
Kulkarni: That makes standardization a problem. It raises a question that will require the cooperation of a lot of people, from the foundries to EDA vendors to end users. They need to define what the power model will need to contain, including modes of operation and fundamental technology. People are stuck at 40nm and below, which is why they’re creating these capacitance models.
Hardee: And as you go through the flow, the need for modeling and the expectation people have for modeling need to change. If we can provide models that are good enough to do relative power measurement—if you’re looking at microarchitecture decisions, is this one better than that one—that may be good enough at a high level. Certainly before signoff you want a very good absolute power measurement because you want to make sure you’re meeting the spec. At the ESL level we may be quite a long way from technology that can really get to an accurate power number. But at least if we can provide enough relative accuracy for architectural tradeoffs, that would be a good start.
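
One way to see why relative accuracy can be enough early in the flow: if a high-level estimator’s error is roughly a consistent bias, its absolute numbers are wrong but its ranking of alternatives is not. A toy sketch with invented numbers:

    # Toy illustration of relative vs. absolute accuracy. Suppose an early
    # estimator reads consistently low; all numbers are invented.
    true_mw = {"arch_A": 420.0, "arch_B": 505.0}   # eventual silicon truth
    bias = 0.72                                    # estimator reads ~28% low

    est_mw = {k: v * bias for k, v in true_mw.items()}

    best = min(est_mw, key=est_mw.get)
    print(f"estimates: {est_mw}")                  # far off in absolute terms
    print(f"estimator still picks {best}")         # ranking is preserved

A consistent bias ruins signoff-grade absolute numbers but leaves the A-versus-B ranking intact, which is all a microarchitecture decision needs.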

LPE: Are the biggest gains to be made in power at existing nodes or at the newest nodes?
Agrawal: It’s the functionality in a power envelope. That will be the benchmark. This is the power budget I have. Tell me what you can put in a chip. Is this amount of functionality possible in this given power budget?
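
Agrawal’s framing, how much functionality fits in a fixed envelope, is at bottom a packing problem. A toy greedy sketch, with hypothetical blocks and budgets (a real flow would do this with far more fidelity):

    # Toy budget check: given a power envelope, which blocks fit?
    # Block names and all numbers are hypothetical.
    budget_mw = 900.0
    blocks_mw = {"cpu_cluster": 450.0, "gpu": 300.0, "video_codec": 120.0,
                 "modem": 250.0, "isp": 90.0}

    must_have = ["cpu_cluster", "modem"]
    spent = sum(blocks_mw[b] for b in must_have)

    added = []                      # fill the rest, cheapest blocks first
    for name, p in sorted(blocks_mw.items(), key=lambda kv: kv[1]):
        if name not in must_have and spent + p <= budget_mw:
            spent += p
            added.append(name)

    print(f"fits in {budget_mw:.0f} mW: {must_have + added} ({spent:.0f} mW)")

The answer changes with the budget and with which blocks are mandatory, which is why the envelope, not the node, becomes the benchmark.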

LPE: Then does the process node matter?
Agrawal: No, but people are used to performance improvements, which is why they may go to the next node to get slightly improved performance at a lower voltage.
Kulkarni: The battle will be played on power per MHz, at least for the consumer market.
Agrawal: And functionality.
Kulkarni: Yes. That’s why the mobile industry is working so hard. It’s not just more functions on a mobile phone. It’s creating a user experience. Everything over 3 watts becomes uncomfortable in your hand over time. That’s why all these standards are just under 3 watts. WiMax, 802.x and others will be at 2.4 to 2.6 watts. Many customers I talk with say that saving 20 milliwatts will increase market share. That’s very good for this industry.
Hardee: In non-mobile applications we’re hearing the reciprocal of that. People are talking performance per watt.
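
The two metrics really are two views of the same ratio, provided performance is proxied by frequency. A short illustration with made-up numbers:

    # Power per MHz (mobile framing) vs. performance per watt (server
    # framing) for a hypothetical 2 W part running at 1000 MHz.
    power_w, freq_mhz = 2.0, 1000.0

    power_per_mhz = power_w * 1e3 / freq_mhz   # 2.0 mW/MHz
    perf_per_watt = freq_mhz / power_w         # 500 MHz/W

    print(f"{power_per_mhz} mW/MHz  <->  {perf_per_watt} MHz/W")

Which number gets quoted depends on whether the battery or the electricity bill is the binding constraint.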
Kulkarni: We have one customer doing flat panel TVs. We wanted to know why they wanted to lower power in set-top boxes—or set-behind boxes, more accurately. They said that if you’re watching a movie on your TV and it’s silent and then suddenly you hear the cooling fan, it would not be a good experience.
Hardee: These thermal issues also affect reliability. That’s why these set-top boxes keep failing.

LPE: But aren’t we getting to the point where there is enough performance for most applications?
Agrawal: Yes, and the frequencies are starting to level off. The gigahertz race is over.


