Does Power Analysis Need To Be Accurate?

Debate rages over what is needed for analysis, estimation and optimization.


The mere mention of accuracy in power analysis and optimization today can trigger a contentious discussion, even among typically reserved engineers.

What is needed and where? Which tools are truly as accurate as claimed? And how much accuracy is actually needed for power analysis, estimation, and optimization?

First of all, the accuracy required really depends on what the engineering team wants to do with the data that is generated. “Different teams may have widely differing accuracy requirements,” observed Alan Gibbons, power architect at Synopsys.

For example, software development teams typically care much more about internal than external comparisons, he said. “They need to see that the software is becoming more energy efficient as it is developed. In many cases, they care little about the absolute power numbers, but they do care that the energy efficiency is improving. Absolute power analysis can be addressed later in the flow.”
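As a rough illustration of that internal, relative comparison (the build names, energy figures, and workload are hypothetical, not from the article), a software team could simply track estimated energy per workload run from build to build and flag regressions, without ever calibrating the absolute numbers:

```python
# Hypothetical sketch: track relative energy efficiency across software builds.
# Absolute calibration does not matter here; only the build-to-build trend does.

builds = [
    ("v1.0", 41.2),  # estimated energy per workload run, arbitrary units
    ("v1.1", 38.7),
    ("v1.2", 39.9),  # regression relative to v1.1
]

previous = None
for build, energy in builds:
    if previous is not None and energy > previous:
        print(f"{build}: energy regression (+{energy - previous:.1f} vs. previous build)")
    else:
        print(f"{build}: ok ({energy:.1f} units)")
    previous = energy
```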

For hardware teams, the problem is a little different. “They may be working at the architecture level and exploring various hardware architectures for energy efficiency where, again, accuracy requirements are a little more relaxed. Or they may be further down the flow and wanting to do some software-driven SoC power estimation, where we really start to care about the absolute power consumption. For the latter case, accuracy is paramount,” Gibbons explained.

Many different design teams are interested in power and energy, and they operate at different points in the design flow, so varying levels of accuracy are required. “In essence, power analysis needs to be as accurate as it needs to be for whatever use model is being considered, and so the corresponding power models need to be able to support this,” he said.

How good is the model?
Power analysis is an interesting challenge because, ideally, it requires both a highly accurate model of the chip and an equally accurate model of how software is using the chip.

“The software guy says, ‘To have a perfect model of how my software is going to use the chip, I have to have a perfect model of not just the chip, but the whole system it is incorporated into,’” said Drew Wingard, CTO of Sonics. “Eventually, you can’t do anything until you have everything. And, of course, that’s not good from a design perspective. So the question is, what shortcuts can we take? What abstractions can we work with?”

There are no clear answers here. “There are some people who say to just put it all under the emulator and everything is going to be okay,” Wingard noted. “Others say to build a virtual platform model, and everything is going to be okay. What’s clear is that there are some very simple rules of thumb, which people have used for a long time. We know they are horribly inaccurate, but they allow people to make relative choices, and that next level of precision down isn’t clear right now.”

This inevitably leads to a discussion about accuracy, and whether engineering teams want the highest level of accuracy they can get. Wingard doesn’t believe they do.

“What the architect wants is enough accuracy to make good decisions,” he said. “The purpose of getting the data is to decide between choice A and B, or determine, most importantly, whether the architecture is dead on arrival because it can’t possibly be implemented in such a fashion that’s acceptable from an energy or power perspective. What the implementation team wants is enough accuracy to ensure that they are safe. Safe might be that in the required modes of operation, I fit within the thermal budget of my package. Or, safe might be the battery life of this one is better than the last one. Neither of those require perfect accuracy. The second one requires more accuracy than the first one does. But in many ways this is like the ESL issue, which is, ‘How do we get enough models that we get enough accuracy? How do you get a positive ROI on the development of the models?’ That’s where we are struggling right now to find the right answers.”

That doesn’t mean there aren’t strong opinions about the best route, however. Krishna Balachandran, product management director for low power at Cadence, stressed there is a demand for accuracy in the engineering community when it comes to power analysis, estimation, and optimization. “If the numbers are way off, then they have no way of projecting what’s really going to happen, especially if it’s a new design,” Balachandran said. “If it’s a derivative design, the problem is less severe because they already have some idea from the earlier design that has seen silicon. They know what the power number is likely to be, and there are a few blocks that are going to be changed.”

At the same time, sophisticated engineers know how to estimate that to a reasonable degree of accuracy, he said. “Let’s say there are a few blocks that change, and they are off by 50%, but the total impact on the overall power may be less than 20%. So they are still okay with that for a derivative design. But what if a design is all new? Let’s say they are entering a new market. A company that has been designing in the mobile space wants to enter the wearable space and they are doing something different. There is an analog team, and there is a digital team. The analog team, just by virtue of being analog engineers, know exactly what they’re going to get at the end of the day. But the digital team is tasked with finding out the power number on their side, and they really don’t have a good handle on that.”

Quantitative vs. qualitative metrics
Part of the reason for the debate on accuracy is that end goals are not always clearly defined.

“Whether or not quantitative accuracy is important depends on what you are trying to achieve,” said Tobias Bjerregaard, CEO of Teklatech. “If you want to know whether you meet your spec of, say, 300mW power consumption for a chip in a given mode, of course you need to be quantitatively accurate. On the other hand, if you do optimization, you want to optimize as much as you can, so it’s more a question of being qualitatively accurate—knowing that you are moving in the right direction and that what you are doing has a positive impact. As long as it has a positive impact, you want to do it as much as you can. So whether you save 1mW or 10mW or 50mW, it doesn’t really matter. You just want to save as much as possible. There, in the optimization space, qualitative accuracy is more important than quantitative accuracy.”
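A minimal sketch of that distinction, assuming a simple scripted flow (the 300mW budget is taken from the example above; the candidate transforms and savings figures are invented): sign-off compares an absolute estimate against the spec, while optimization only needs the sign and relative size of each change.

```python
# Hypothetical sketch: quantitative (sign-off) vs. qualitative (optimization)
# use of the same power data. Candidate transforms and numbers are invented.

BUDGET_MW = 300.0  # the spec figure from the example in the text

def signoff_check(estimated_mw):
    """Quantitative: the absolute estimate must be trustworthy against the budget."""
    return estimated_mw <= BUDGET_MW

def rank_optimizations(candidate_savings_mw):
    """Qualitative: apply anything with a positive impact, biggest savings first.
    Whether a step saves 1 mW or 50 mW, the direction is what matters."""
    positive = {name: mw for name, mw in candidate_savings_mw.items() if mw > 0}
    return sorted(positive, key=positive.get, reverse=True)

print(signoff_check(287.0))  # True: the estimate fits under the 300 mW budget
print(rank_optimizations({"clock gating": 12.0,
                          "operand isolation": 3.5,
                          "risky retiming": -1.0}))
```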

That changes for sign-off, however. “Quantitative accuracy is very important because you want to give the designer an answer,” Bjerregaard said. “You want to tell them, ‘This is how much it is and this is how things look.'”

Correlation through the flow is also important. So all of the stimulus information from the early RTL analysis needs to be carried directly forward to implementation, said Rod Metcalfe, product management group director at Cadence. “We use all of that information to ensure that we have good correlation and good accuracy during implementation. To be able to re-use this information during the flow is absolutely critical. Otherwise you can’t have any form of accuracy.”

Design changes make a difference
The changes being made to the design play into the accuracy discussion as well, said Preeti Gupta, director of RTL product management at Ansys. “If you are making large-scale changes, such as shutting off the clock to an entire block of 10,000 flops, then predicting the impact within 10% to 15% of the actual numbers at a higher level of abstraction is still good enough. Chances are, you will save power with those kinds of transformations.”

That works at a higher level of abstraction. As the design moves down to lower levels of abstraction, if Vt swapping is occurring (changing from one threshold voltage to another to optimize power in the context of timing), more accuracy is needed because the changes are more surgical in nature, she said. “Each one has a smaller impact on the overall power, but if you are changing half of the flops within your design to different Vts, you will save a lot of power cumulatively. But on a path-by-path basis, you have to be accurate. So it really depends on the scale of the changes you are making.”
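As a back-of-envelope illustration of this scale-of-change argument (the switching activity, capacitance, voltage, and frequency values below are invented, not from the article), the familiar dynamic-power relation P = alpha * C * V^2 * f shows why a coarse estimate still makes the clock-gating call safely, even with a 15% error margin:

```python
# Hypothetical sketch: a coarse clock-gating estimate survives a 15% error margin.
# P = alpha * C * V^2 * f (dynamic power); all component values are invented.

def dynamic_power_mw(alpha, cap_pf, vdd, freq_mhz):
    # pF * V^2 * MHz gives microwatts; divide by 1000 for milliwatts.
    return alpha * cap_pf * vdd ** 2 * freq_mhz / 1000.0

block = {"alpha": 0.10, "cap_pf": 500.0, "vdd": 0.8, "freq_mhz": 500.0}  # assumed block
before = dynamic_power_mw(**block)
after = dynamic_power_mw(**{**block, "alpha": 0.01})  # clock gated off most of the time

saving = before - after
pessimistic = saving * 0.85  # even a 15% overestimate leaves a clear win
print(f"gating saves ~{saving:.1f} mW (at least {pessimistic:.1f} mW with a 15% error margin)")
```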

Meanwhile, Abhishek Ranjan, director of engineering at Mentor Graphics, believes the questions about accuracy are best answered by the designers themselves. “The tool companies can make all sorts of claims, because for one set of 10 designs we may get within 1%, while for another set of 10 designs we could be 20% off. It varies from design to design.”

Ranjan said that accuracy within 15% at the RTL level is generally good enough. “Even with implementation tools, whether you take one from one company or the other, they implement the same chip to the same specifications and your power will be 10% to 15% different. So the two implementation tools cannot track each other. It’s very difficult for an RTL tool to be any more accurate than that, because that is the floor on your accuracy.”

Interestingly, Ranjan explained that functional changes are easy to quantify because their effect persists throughout the flow. “At the RTL, when you estimate a 20% reduction in the clock toggle, that reduction is going to stay throughout. If somebody can assess that at the RTL level with reasonable accuracy, that is good enough. But if you really want the multiplexer shifting, or the splitting of a datapath operator, to be very accurately estimated at RTL, that is probably a bit of a longer shot, and you are tasking the RTL tools too much. Even so, correlation and accuracy are definitely important, and today’s tools have gotten very mature and stable at doing that.”

Just remember, he cautioned, that if you get too fixated on 1% or 2% accuracy, you will be delaying your power decisions and the adoption of power-saving technology.

Yet another aspect of accuracy is related to power state residency. “Understanding how long a design spends in a particular power state is as important as the power consumption of the design in that state,” Synopsys’ Gibbons said. “It is the combination of the two that gives us a more accurate understanding of the energy profile of our design. To deliver accuracy here, we really need to be performing software-driven power estimation and analysis using real or representative workloads in simulation, which gives us an accurate view on power state residency for a specific scenario.”
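A minimal sketch of the combination Gibbons describes (the states, per-state powers, and residencies below are hypothetical): the energy profile for a scenario is the sum, over power states, of each state's power multiplied by the time the workload keeps the design in that state.

```python
# Hypothetical sketch: combine per-state power with power-state residency to get
# an energy profile for one scenario. All numbers are invented for illustration.

scenario = [
    # (state, power in mW, residency in ms for this workload)
    ("active", 250.0, 120.0),
    ("idle",    40.0, 700.0),
    ("sleep",    2.0, 180.0),
]

total_ms = sum(t for _, _, t in scenario)
energy_uj = sum(p * t for _, p, t in scenario)  # mW * ms = microjoules
print(f"energy: {energy_uj / 1000:.1f} mJ over {total_ms:.0f} ms "
      f"(average {energy_uj / total_ms:.1f} mW)")
```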

Thinking it through
When it comes down to it, you really have to know what you are looking for and what you are trying to find out, said Ellie Burns, senior product manager at Mentor Graphics. “Are you trying to find out if you are in a power budget to the exact number? Well, then maybe you need some accuracy. But if you are trying to make a change where you need the relative number, you need to look at what it is you are trying to do. If you eliminate activity in the design, no matter what tool you use, you’ve eliminated switching activity in the design. That will reduce power.”





Pitch Monk says:

I think Abhishek is correct, based on my design experience. Accuracy requirements largely depend on the system. If the system is battery-operated, we want to squeeze every mW from it. If it is tethered, the accuracy requirement is not as high. It also depends on the total power consumption. If the power is in the mW range, the required accuracy is very high, whereas if the power is in the W range, the accuracy requirement may not be that high. Ultimately, designers like to build in margins for all parameters, and power is no exception. Then it boils down to the tradeoff with respect to schedule.
