End User Report: The Case For Formalizing Power Modeling

IBM technologists look at what’s needed to create power-aware designs and what standards need to be changed.

While the industry clearly agrees that power modeling is a necessity for next-generation semiconductor design at the transaction level, what is lacking is a standard way to exchange power models. Low-Power Design talked with David Hathaway, Senior Technical Staff Member at IBM Electronic Design Automation, and Nagu Dhanwada, Senior R&D Engineer and Team Lead for Chip-Level Power Analysis Tools in the Electronic Design Automation group of IBM’s Systems and Technology Division, to discuss power-aware design and power modeling.

By Ann Steffora Mutschler

LPD: What are the most critical issues you are dealing with in terms of low power design?

David Hathaway: In addition to what we are doing here, both Nagu and I are involved with the Si2 Low Power Coalition. I am the Chair of the Technical Steering Group, and Nagu was Chair of the Flows group (which is now on hiatus) where he worked with Jerry Frenkil at Sequence Design. Both of us work in the Modeling Working Group.

One of the key things that is very important in power-aware design is that the largest opportunities for reducing power come very early in the design phase, at the architectural level, when one is doing high-level design. (I hate the term ‘low-power design’ because IBM is designing some servers that use a lot of power; it is very critical to be power-aware and to minimize that power as much as you can, but you can’t quite call them ‘low power.’) I think it is important, therefore, that tools, modeling techniques, and modeling languages be able to support making design decisions and doing design analysis, estimation, and what-ifs in that early phase.

If you talk to people in the industry you’ll find that most of them do that, but they do it off on the side, in spreadsheets and things they’ve cobbled together themselves. One of the key things we are trying to do in the Low Power Coalition’s Modeling Working Group is to extend existing standards so that they can be useful at these higher levels of design.

LPD: Are you making progress there? What’s the current work like?

Hathaway: It’s always slow when everyone involved has a day job on top of the coalition work. But we have come to agreement on some of the key issues, certainly not all of them, but on things we think are important: being able to support non-mutually-exclusive power states in models, and having somewhat more flexible ways of parameterizing power models than are available within the Liberty spec.
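
To make the non-mutually-exclusive idea concrete, here is a minimal C++ sketch of what such a model could look like. It is not taken from Liberty or from any Si2 proposal; the state names and numbers are invented for illustration. The point is that two states, such as a gated core and a retained memory, can be active at once, and the model sums their contributions:

    #include <cstdint>
    #include <cstdio>

    // Hypothetical power states for a block. Unlike classic mutually
    // exclusive power states, these can overlap: the core can be
    // clock-gated while its memory macro is independently in retention.
    enum PowerState : uint32_t {
        CORE_ACTIVE   = 1u << 0,
        CORE_GATED    = 1u << 1,
        MEM_ACTIVE    = 1u << 2,
        MEM_RETENTION = 1u << 3,
    };

    // Each state contributes additively; total power for a combination
    // is the sum of the contributions of every state currently set.
    double blockPower(uint32_t states) {
        double p = 0.0;                         // milliwatts, made-up numbers
        if (states & CORE_ACTIVE)   p += 120.0;
        if (states & CORE_GATED)    p += 8.0;   // leakage + clock-tree stub
        if (states & MEM_ACTIVE)    p += 45.0;
        if (states & MEM_RETENTION) p += 1.5;
        return p;
    }

    int main() {
        // Core gated while memory is retained: two states active at once.
        printf("%.1f mW\n", blockPower(CORE_GATED | MEM_RETENTION));
    }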

LPD: To make the shift to architectural level design, what kinds of changes need to happen within design groups?

Hathaway: I think a lot of them have done it, even informally. One of the important things is to try to connect the early design phase to the late design phase. You’ll find that in a lot of cases it’s done with different tools. You transcribe things from a spreadsheet, for instance, that you used early on into the later design. But you may not keep your early design estimation spreadsheet up to date, so you end up with disconnects. Anyone who is doing power-aware design is already thinking about these things. It’s just difficult to do it very formally, and it’s difficult to do very much analysis. Nagu has done some work in the past on power modeling for the Power Architecture, so he has more experience doing that at a high level.

Nagu Dhanwada: It’s also target-design-dependent to a large extent. There are two things to consider. One is the kind of design you are dealing with, for instance a typical microprocessor-based methodology where you have previous generations. The second is the engagement model, specifically when you are doing some sort of ASIC for someone else who is going to consume it and who owns the applications side of things. There you are limited in the applications you get for optimization, and you don’t have that much control over what you are optimizing.

On the other hand, when you are doing more of a microprocessor-based design, where a lot of it is derived from earlier generations, you have control over the stack, including the learning on it, but a lot of the methodology is fixed because you are coming in with an n+1-generation design.

At least in the space in between, IBM licenses the Power Architecture and we try to promote it. With consortiums like Power.org you are essentially stitching together a design using existing IP, which is more of a bus-centric view. There you have these early models [either internal or licensed], and alongside those ESL models we put in high-level power models. Maybe three or four years back, when TLM was not a standard and there were no standard APIs, we were building on some initial models, and tying a power model to a transaction-level functional model was kind of ad hoc. That was the approach we took when we were initially going ahead.

Now, in the context of what David was mentioning about the modeling committee, the whole notion of the modeling really started from discussions Jerry (Frenkil of Sequence Design) and I had in the Flows Working Group (of Si2), where we were saying, ‘We are putting these power-aware flows together starting from ESL. Do we want a different kind of power model for different pieces of IP, or can we come up with a power model that can span different levels of abstraction?’ That’s how the Modeling Working Group got started, and why David raised issues such as the need for non-mutually-exclusive states. Those are the initial things that came out of the working group.

At an early level, if you are really targeting the consumer electronics space, where a lot of design is done with pre-defined IP blocks and there are emerging standards at the electronic system level (ESL) like TLM, I think it’s the right time to also formalize the power modeling aspect and hopefully extend some of the existing formats, like Liberty, to move them to the higher levels and make sure you can use them even at the ESL level. It’s more timely now than before because, on the functional modeling side, things have crystallized a lot; there are many more standards out there.

LPD: How long do you think it will take for the industry to decide on how to formalize power modeling?

Dhanwada: It’s not just TLM; any processor designer at any company has these early performance models. Like power models, these performance models typically start out as a spreadsheet. Later on they take the form of executable models. They might not be in SystemC, but they are basically C with threads, and you model your whole architecture way before anything is out there. You also have proprietary simulators. There have been discussions on how you make such proprietary things talk with existing languages, like SystemC models. We’ve done some work on this: making these talk with SystemC and TLM, what it takes, and how you can use these models in a bigger context. Those performance models have always existed, but we are still at the beginning of the formalism, or the extensions to existing models, which are not necessarily tied to any language. For someone to be able to use these models, some mechanics would be needed.
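
As a rough illustration of the bridging Dhanwada describes, the sketch below wraps a legacy C simulation entry point behind the kind of transaction interface a SystemC/TLM-style model would code against. Every name here is hypothetical, and a stub stands in for any real proprietary API:

    #include <cstdio>

    // Pretend this is a proprietary simulator's C entry point.
    extern "C" int legacy_sim_step(int address, int data);

    // The transaction interface the standards-based side codes against.
    struct TransportIf {
        virtual int transport(int address, int data) = 0;
        virtual ~TransportIf() = default;
    };

    // Adapter: presents the standard interface, delegates to the legacy API.
    class LegacySimAdapter : public TransportIf {
    public:
        int transport(int address, int data) override {
            return legacy_sim_step(address, data);
        }
    };

    // Stub implementation so this sketch links and runs on its own.
    extern "C" int legacy_sim_step(int address, int data) {
        return address + data;   // placeholder behavior
    }

    int main() {
        LegacySimAdapter sim;
        printf("response: %d\n", sim.transport(0x100, 7));
    }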

Hathaway: I think there are three important considerations as we start talking about introducing formalism into this. First, we want some separation of concerns. What Nagu described was a situation in which the power modeling was embedded in the performance models. There wasn’t a clean separation of the power model from the simulation model, and this is what people often do when they’re trying to do something very quickly: they push everything together. We would like to have separation of concerns because there are more and more things people are interested in at the high level, and you need some modularity.
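
A minimal sketch of that separation of concerns, assuming a simple observer pattern rather than any particular standard: the functional model only publishes activity events, and a separate power model does the accounting. All names and coefficients are invented for illustration:

    #include <cstdio>
    #include <vector>

    struct ActivityObserver {
        virtual void onTransaction(int bytes) = 0;
        virtual ~ActivityObserver() = default;
    };

    // Transaction-level functional model: knows nothing about power.
    class BusModel {
        std::vector<ActivityObserver*> observers_;
    public:
        void attach(ActivityObserver* o) { observers_.push_back(o); }
        void transfer(int bytes) {
            // ... functional behavior would go here ...
            for (auto* o : observers_) o->onTransaction(bytes);
        }
    };

    // Separate power model: subscribes to activity, owns its coefficients.
    class BusPowerModel : public ActivityObserver {
        double energy_nj_ = 0.0;
        const double nj_per_byte_ = 0.02;   // made-up characterization data
    public:
        void onTransaction(int bytes) override {
            energy_nj_ += bytes * nj_per_byte_;
        }
        double energyNJ() const { return energy_nj_; }
    };

    int main() {
        BusModel bus;
        BusPowerModel power;
        bus.attach(&power);     // power model plugs in from the outside
        bus.transfer(64);
        bus.transfer(128);
        printf("bus energy: %.2f nJ\n", power.energyNJ());
    }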

Second, you need some continuity in the design. What people have typically done (and this is something that came out of the Flows group that Nagu was chairing) is create a high-level model and get some answers. Then you sort of set it aside, start over from scratch at the lower level, and build something while looking at what you did before, hoping that it matches. There is no formalism for guaranteeing continuity. As I evolve one piece of my design, it is difficult to work with a mix of high-level and low-level models.

The third thing to realize about power, which makes it different from things like timing, is that it is used in many different ways. With delay models the rule is you go in, you get a delay. Maybe it is a statistical delay, maybe it’s not; there are questions about what the parameters are. But basically you are saying there is a ‘from’ node and a ‘to’ node: how long does it take for a signal to get from point A to point B? For power, you may be concerned about average power, over short periods of time or over the long periods of time that drive electromigration. You may be concerned about very local power, about electromigration of a particular via or wire, or about overall chip power. Or you may be concerned about waveforms, because there is transient AC power-grid noise to worry about, so you don’t just care about the power, you care about what the power waveform is.
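
All of those questions can be asked of a single power waveform. The short sketch below, using made-up sample data, computes an overall average, a worst-case windowed average of the kind that drives electromigration limits, and the instantaneous peak that matters for transient grid noise:

    #include <algorithm>
    #include <cstdio>
    #include <vector>

    int main() {
        // Per-cycle power samples in watts (illustrative data only).
        std::vector<double> p = {1.0, 4.5, 4.8, 1.2, 0.9, 5.1, 4.9, 1.1};

        // Question 1: overall average power.
        double total = 0.0;
        for (double s : p) total += s;
        printf("overall average: %.2f W\n", total / p.size());

        // Question 2: worst 4-sample sliding-window average, the kind of
        // short-horizon figure that drives electromigration-style limits.
        const size_t w = 4;
        double win = 0.0, worst = 0.0;
        for (size_t i = 0; i < p.size(); ++i) {
            win += p[i];
            if (i >= w) win -= p[i - w];
            if (i + 1 >= w) worst = std::max(worst, win / w);
        }
        printf("worst %zu-sample average: %.2f W\n", w, worst);

        // Question 3: instantaneous peak, relevant to transient grid noise.
        printf("peak: %.2f W\n", *std::max_element(p.begin(), p.end()));
    }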

A benefit of power that we don’t always exploit in our current models is that power is much more separable than delay. You can, to a very large extent, say, ‘This piece of my power came from leakage; this piece came from AC switching,’ and separate those concerns. Because power is separable, it is useful to give reports back to users showing their sensitivities to these various contributions. We’d like to take advantage of that.
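
A small sketch of that separability, assuming the classic dynamic-power estimate P = alpha * C * V^2 * f plus a leakage term (all coefficients invented): because the components are additive, a tool can report each contribution, and its share of the total, back to the designer:

    #include <cstdio>

    struct PowerBreakdown {
        double leakage_w;
        double switching_w;
        double total() const { return leakage_w + switching_w; }
    };

    PowerBreakdown blockPower(double activity,  // switching factor alpha
                              double cap_f,     // switched capacitance (F)
                              double vdd,       // supply voltage (V)
                              double freq_hz,   // clock frequency (Hz)
                              double leak_w) {  // state-dependent leakage (W)
        // Dynamic power: P = alpha * C * V^2 * f.
        double sw = activity * cap_f * vdd * vdd * freq_hz;
        return {leak_w, sw};
    }

    int main() {
        PowerBreakdown p = blockPower(0.15, 2e-9, 0.9, 2e9, 0.35);
        // Each component is reported separately, with its share of total.
        printf("leakage:   %.3f W (%.0f%%)\n", p.leakage_w,
               100.0 * p.leakage_w / p.total());
        printf("switching: %.3f W (%.0f%%)\n", p.switching_w,
               100.0 * p.switching_w / p.total());
        printf("total:     %.3f W\n", p.total());
    }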

When you are talking about optimizing power at the architectural level, although there are some tools that may be able to help you a little, you’re basically talking about a designer decision process. Unlike timing, where there are millions or billions of paths and you can’t possibly close them manually, with power you’re making high-level decisions, and it’s very important to give designers feedback so they can do their own Pareto analysis and determine where to get the biggest bang for their buck. To do that, you need to understand the sensitivities.

LPD: Would you like there to be external tools that could be used internally?

Hathaway: I think the fact that our designers can’t do everything they want with external tools means the analysis we do with our internal tools gives us a competitive advantage. I don’t think it’s the tools per se; it’s that we have certain capabilities. Particularly in our ASIC library, we have partnerships with other companies, and we would like to be able to exchange IP more easily. We would like to be able to bring in IP that comes with power models that have the properties we think are important. For those reasons, we think it would be very important and useful to us to have industry standards, whether we are using our own tools or vendor tools, so that we could at least have easier interchange and compatibility of IP.


