Power Modeling: Use Cases Need to Be Clearly Defined

Early architectural decisions rely on system models that are accurate enough to support tradeoffs, but tools in this area are still hard to come by.

By Ann Steffora Mutschler

Low-Power/High-Performance Engineering sat down to discuss power modeling during the Design Automation Conference with Vic Kulkarni, senior vice president and general manager at Apache Design; Paul Martin, design enablement and alliances manager at ARM; Sylvan Kaiser, CTO at Docea Power; and Frank Schirrmeister, group director, product marketing for system development in the system and software realization group at Cadence. What follows are excerpts of that conversation.

LPHP: When it comes to power modeling, what is the problem and what do we need to do to solve it?

Kulkarni: The way customers are looking at it, power modeling right now is a big challenge for IP integration. How do you get disparate IP onto an SoC? What is the handoff? What is the power model requirement in that? Some of that work we’ve been doing with RTL power model technology, at the RTL at least, along with how to push that downstream in the design flow so there is a consistent power integrity, or so-called power budgeting, flow: dynamic voltage drop, static, all the way to the package. What’s the way to encapsulate IP-based power first, which can work throughout this flow and then be consistent in terms of this handoff?
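
What such an encapsulated, per-state IP power model might look like at handoff is easiest to see in code. The sketch below is purely illustrative; the structure, block names, and numbers are hypothetical and do not represent any vendor’s actual format.

```cpp
// Hypothetical per-state power model for a piece of IP, of the kind that
// could be handed off alongside the IP itself. All names and values invented.
#include <iostream>
#include <map>
#include <string>

struct IpPowerModel {
    std::string ip_name;
    double leakage_mw;                       // static power at a nominal corner
    std::map<std::string, double> state_mw;  // dynamic power per operating state
};

int main() {
    IpPowerModel usb{"usb_ctrl", 0.8, {{"idle", 1.2}, {"active", 14.5}}};
    IpPowerModel ddr{"ddr_phy", 2.1, {{"self_refresh", 3.0}, {"burst", 85.0}}};

    // The SoC integrator can then combine the models for a given scenario.
    double total_mw = usb.leakage_mw + usb.state_mw["active"]
                    + ddr.leakage_mw + ddr.state_mw["burst"];
    std::cout << "Scenario power: " << total_mw << " mW\n";
}
```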

Martin: Over recent years the focus has always been on trying to control power through process and through clever stuff at the RTL level. What we’re beginning to understand is what the real benefits are at the system level. Increasingly it’s becoming more and more difficult to understand the different tradeoffs at the system level because of the complexity in terms of the sorts of applications that are being run, but also in terms of the architectures that are being implemented to support those applications. There’s a lot of inherent functionality now that makes those tradeoffs more difficult to predict. So for us, and certainly for our partners, we are starting to have to pay a lot more attention to power budgets and to understanding the power implications of different architectures very early in the design cycle.

Kaiser: We look at power and thermal at the system level, and there are more and more issues coupling power and thermal, for the same reason: the complexity at the system level. Choices are made at the foundry and SoC platform level, and teams need the ability to run full-scale applications on the system very early, but also to track the power budget during the implementation phases.

Schirrmeister: What we have in power is the classic architecture problem: the amount of information you need very early on to make your decisions requires more detail than you typically have at that time. That’s something we are trying to address essentially through refinement, so it’s easy to annotate a TLM model, let’s say a virtual platform, to see how the software switches through the different states. It’s much more difficult to get the right data, so the annotation is not the point. Getting the right data is the issue. We are working from a system development perspective across the different levels, so you can do it at the TLM level, then refine as soon as you have more data. That’s why at the RT level you have Palladium for emulation, you have power shutoff, you have dynamic power analysis. It goes through the flow from there. You always need to close the loop as early as possible, so our objective, and what customers ask us to do, is to get these loops as short as possible so that you can refine.
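
A minimal sketch of the annotation Schirrmeister describes, assuming a virtual-platform peripheral that carries per-state power numbers while stand-in driver code steps it through its states (all values invented):

```cpp
// Power-state annotation on a virtual-platform model: the "driver" below
// switches a peripheral through its states, and the annotated table reports
// the resulting modeled power. Every number is invented.
#include <iostream>
#include <map>
#include <string>

class AnnotatedPeripheral {
    std::map<std::string, double> power_mw_{{"off", 0.0}, {"sleep", 0.3}, {"on", 12.0}};
    std::string state_{"off"};

public:
    void set_state(const std::string& s) {
        state_ = s;
        std::cout << "state=" << state_ << " power=" << power_mw_[state_] << " mW\n";
    }
};

int main() {
    AnnotatedPeripheral uart;
    // Stand-in for the driver under development switching device states.
    for (const std::string s : {"on", "sleep", "on", "off"})
        uart.set_state(s);
}
```

As Schirrmeister notes, the hard part is not this mechanism but getting power numbers trustworthy enough to annotate.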

LPHP: How will modeling the power help, and where is the industry today with that?

Kulkarni: In terms of the various levels of abstraction, we are trying to make life easier for designers, but there is still a big shortage of ways to represent the physical effects up in the design flow. What’s the best way to capture them? That’s where we are spending a lot of time with our customers, trying to create a higher level of abstraction all the time, but we always find it very difficult for things like asynchronous resistance post place-and-route and clock tree models. Clock trees are all over the place. How do we capture that and give that band of accuracy early in the design flow so that the models at RTL, TLM, or even higher can represent what will happen downstream? Connecting the dots is where we are spending a lot of energy. R-C of the die, short interconnects, long interconnects, wire load models are failing these days. They were optimized for timing but not necessarily for power, so how do you represent the C2F and carry it throughout the design flow? And of course IEEE 1801 UPF/CPF can really help in terms of capturing that power intent. Maybe there is a way to drive the models from that side as well. There is already a good coalition around that. We are part of many of these working groups, and several companies are trying to encapsulate the models that are UPF/CPF driven.
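
For reference, the quantity all of these abstractions ultimately have to approximate is classic dynamic switching power, P = α·C·V²·f. A minimal sketch, with every constant assumed for illustration:

```cpp
// Back-of-the-envelope dynamic power from switching activity:
// P = alpha * C * V^2 * f. All constants are illustrative placeholders.
#include <iostream>

int main() {
    double alpha = 0.15;     // average toggle (activity) factor, e.g. from simulation
    double c_farads = 2e-9;  // total switched capacitance: 2 nF
    double vdd = 0.9;        // supply voltage in volts
    double f_hz = 1.0e9;     // clock frequency: 1 GHz

    double p_dyn_w = alpha * c_farads * vdd * vdd * f_hz;
    std::cout << "Dynamic power ~ " << p_dyn_w * 1e3 << " mW\n";  // ~243 mW
}
```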

Martin: We’re making some assumptions there that models are the solution. They certainly have their place, but when you really look at the design flow and where architects are starting from, they don’t always have all of the pieces available to them. They may have some models, they may have some RTL, they may have just some spreadsheets and very, very high-level information about the sort of thing they’re going to design. So it’s not the case that you have a model of your system at that stage of your design. You need some mechanism that actually moves you forward from where people are today, which is computing potential power dissipation on a block-by-block basis or a use-case-by-use-case basis using simple spreadsheets. The underlying complexity and the behaviors of the hardware are much more difficult to predict in that static way. You certainly need some kind of dynamic analysis as an assumption for these models. But you may not have all of this. The other issue is then, how do you take some view of the power associated with the process or a particular implementation and actually bring that into not only the exploration environment, but into what you annotate in that environment, and what makes sense? It’s actually quite a complex problem.
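
The spreadsheet-style budgeting Martin describes amounts to summing per-block estimates for each use case. A sketch, with hypothetical blocks and numbers (a real flow would replace these with characterized data, and eventually with dynamic analysis):

```cpp
// Static, spreadsheet-style power budget: mW per block per use case.
// Blocks and values are hypothetical; 0 means the block is power-gated.
#include <iostream>
#include <map>
#include <string>

int main() {
    std::map<std::string, std::map<std::string, double>> budget = {
        {"video_playback", {{"cpu", 120}, {"gpu", 300}, {"modem", 0}, {"ddr", 180}}},
        {"voice_call", {{"cpu", 60}, {"gpu", 0}, {"modem", 250}, {"ddr", 40}}},
    };
    for (const auto& [use_case, blocks] : budget) {
        double total_mw = 0;
        for (const auto& [block, mw] : blocks)
            total_mw += mw;
        std::cout << use_case << ": " << total_mw << " mW\n";
    }
}
```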

Schirrmeister: We need to be very precise about the use models. When it comes to predicting the overall power envelope, you have to be very careful with every model, and you need to read the instruction manual of what you can actually do with a model. If you try to do things with a model that it isn’t meant to support, you will shoot yourself in the foot and it will be painful. Where models come in very handy is embedded software development. When it comes to developing the software that is doing the power control, then a model representing all the states I can go through with every peripheral and every processor, executing the software, whether the driver I’m developing or the software I’m now running, and driving the system through its different states, is very useful. And if you annotate the data, it becomes part of the validation: do I actually see the power consumption change? Can I use the same model to make sure that I’ll never burn up, or that I have the right package from a thermal perspective? Probably not a good idea. Those are two use models that we need to separate.
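
One way to picture the validation use model Schirrmeister separates out: after the power-control software requests a low-power state, check that the annotated consumption actually changed. A minimal sketch, with invented values and a stand-in for the real driver call:

```cpp
// Validation check: did the power-management software actually reduce the
// modeled consumption? States, numbers, and pm_enter_sleep() are
// hypothetical stand-ins.
#include <cassert>
#include <iostream>
#include <map>
#include <string>

std::map<std::string, double> power_mw = {{"active", 12.0}, {"sleep", 0.3}};
std::string state = "active";

void pm_enter_sleep() { state = "sleep"; }  // stand-in for the real driver call

int main() {
    double before = power_mw[state];
    pm_enter_sleep();
    double after = power_mw[state];
    assert(after < before && "power-control software had no effect");
    std::cout << "sleep transition verified: " << before << " -> " << after << " mW\n";
}
```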

Martin: Even there, you need to be very specific about what you are modeling, because you can take a virtual prototype or platform and use models, but if you are trying to model the effect of different power states then you also have to take into account physical things like voltage domains. So how do you bring that into the model?
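
One common shortcut for folding a voltage domain into such a model, assuming power numbers characterized at nominal voltage, is to rescale them by (V/Vnom)², since dynamic power scales roughly with the square of the supply voltage. A sketch with hypothetical numbers:

```cpp
// Rescaling annotated power for a voltage domain: dynamic power scales
// roughly with V^2, so a DVFS change is reflected by rescaling numbers
// characterized at nominal voltage. Values are illustrative.
#include <iostream>

struct VoltageDomain {
    double nominal_v;
    double current_v;
    double scale(double p_nominal_mw) const {
        double ratio = current_v / nominal_v;
        return p_nominal_mw * ratio * ratio;
    }
};

int main() {
    VoltageDomain core{1.0, 0.8};               // DVFS dropped Vdd from 1.0 V to 0.8 V
    std::cout << core.scale(100.0) << " mW\n";  // 100 mW at nominal -> 64 mW
}
```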

LPHP: How does this fit in with the design flow?

Schirrmeister: Those are becoming part of the verification. Quite some time ago we demonstrated the use of virtual platforms for early software driver development, and voltage domains were modeled, but it wasn’t a predictor of absolute power. It was really validation that the software is actually switching something while you are developing it. The other piece is how you deal dynamically with power: how do you deal with power shutoff? Those are things where in the past you had to wait for all the switching data from the RT simulation. We have come quite far now in getting at least more test data faster by putting this into the hardware-accelerated world.

Martin: If you go back to my poor, stressed architect who is doing this from scratch, the play you described there requires a lot of investment to get to, a lot of resources just to do that very early analysis.

LPHP: How are we dealing with power dynamically today?

Martin: It depends where you are in that design flow. If you were to draw your process from architectural exploration through to tapeout, you’ll see that you have to analyze power at different stages of that flow using different models and different techniques with different levels of accuracy. But also you’re trading that for turnaround time. Basically, as your design gets closer to tapeout you’re willing to spend more time in verification for things like performance and power, but equally you have to invest a lot more to get those numbers back. Very early in the design cycle, you want to try and iterate things very quickly but you do need a good level of accuracy. There’s this idea that it could be very, very coarse, but it’s completely useless if it puts you in the wrong ballpark. What can make or break a chip is actually which package you end up with because that’s a huge component of the cost. If you’re right on the edge of your envelope, that’s a big decision. What architects tend to do is they try to see if they have enough headroom—and see if they are at least working within the constraints they’ve been given. But it still needs to be representative enough to be able to make that tough decision. It has to be based on some sort of reality. Typically what people are doing now is spreadsheets, but we’d like to get them to use more dynamic analysis. I think we’re in very early commercial stages with a solution.
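
The headroom check Martin describes might look like the sketch below: compare the worst-case use-case estimate against an assumed package envelope. All limits and estimates are hypothetical placeholders.

```cpp
// Package headroom check: worst-case use-case power vs. an assumed
// package power envelope. All numbers are hypothetical.
#include <algorithm>
#include <iostream>
#include <map>
#include <string>

int main() {
    const double package_limit_mw = 2500.0;  // assumed package envelope
    std::map<std::string, double> use_case_mw = {
        {"gaming", 2300.0}, {"video_playback", 1400.0}, {"standby", 35.0}};

    double worst_mw = 0;
    for (const auto& [name, mw] : use_case_mw)
        worst_mw = std::max(worst_mw, mw);

    double headroom_mw = package_limit_mw - worst_mw;
    std::cout << "worst case " << worst_mw << " mW, headroom " << headroom_mw << " mW\n";
    if (headroom_mw < 0.05 * package_limit_mw)
        std::cout << "warning: within 5% of the package envelope\n";
}
```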


