Where is power being lost? Where is the power budget being squandered? Can more be done from an architectural standpoint?
For many, if not most, designs today, power is everything. Determining where power is being lost is critical to making sure the design is optimized. So where to begin?
To this end, it is useful to go back to the fundamentals of what power is and what power consumption is, noted Paul Traynar, software architect at ANSYS/Apache. “Power is proportional to capacitance times voltage squared times frequency. When you’re looking at the whole idea of power loss or where power is being wasted, you’re looking at those parameters. You can’t do much about voltage because that’s largely determined by the process, unless you’re doing power domain switching. You’re really trying to reduce that capacitance, and you’re trying to reduce how often those capacitances are being toggled and at what frequency they are being toggled. To a large extent, that’s where you have to focus your effort if you’re looking at reducing power.”
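For reference, the relationship Traynar is describing is the standard dynamic power approximation, written out below with conventional symbol names (the activity factor α, which captures how often each capacitive node actually toggles, is implied rather than named in his comment):

```latex
P_{\mathrm{dyn}} \approx \alpha \, C \, V_{dd}^{2} \, f
```

Because the supply voltage enters quadratically but is largely fixed by the process, the practical levers are the switched capacitance C and the activity α·f, exactly the parameters he points to.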
Bernard Murphy, CTO of Atrenta, explained further that some power is lost in leakage, some is lost in design inefficiency and some through thermodynamics. “FinFETs and FD-SOI should reduce the first, although these are for high-end applications. For everyone else, better planning of Vt mixes and static voltage islands could help reduce waste through leakage. Design efficiency for power is a tradeoff: how much can you squeeze, or how much do you want to squeeze?”
He stressed that power saving usually carries added complexity, which may not be worth it versus schedule or verification complexity. The second law of thermodynamics accounts for the rest of the power loss: the conversion of some percentage of dynamic power to heat. “You can reduce this somewhat and you can spread it out more evenly (which is what thermal analysis is all about), but you can never reduce it to zero.”
There’s another aspect to this, as well, that needs to be considered within a larger context associated with system-level power and energy, Traynar pointed out. “Ultimately what we’re trying to do is accomplish something with a design, and one of the things that is important there is how much energy is involved with accomplishing whatever it is you’re trying to do. Another aspect of this is that even though you can reduce power consumption by reducing that capacitance and that toggle frequency, the real problem is how long it’s going to take you to accomplish the task and whatever processing you’re doing because ultimately it’s energy that you’re really trying to conserve. You’re trying to make your battery last longer.”
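To make Traynar's power-versus-energy distinction concrete, here is a minimal sketch under assumed, illustrative numbers (none of them come from the article): halving the clock halves dynamic power, but the task takes twice as long, so the battery drain can actually get worse once leakage is counted, unless the voltage can come down as well.

```python
# Sketch of why energy (power x time), not power alone, drains the battery.
# All figures below are illustrative assumptions, not data from the article.

def task_energy(cycles, freq_hz, vdd, cap_f, alpha, leak_w):
    """Energy in joules to finish a fixed-length task at one operating point."""
    runtime_s = cycles / freq_hz
    p_dyn = alpha * cap_f * vdd**2 * freq_hz   # dynamic power, the C*V^2*f term
    return (p_dyn + leak_w) * runtime_s

CYCLES = 1e9                        # the work is fixed; only the operating point changes
CAP, ALPHA, LEAK = 1e-9, 0.2, 0.05  # assumed capacitance (F), activity, leakage (W)

fast = task_energy(CYCLES, freq_hz=1e9,   vdd=1.0, cap_f=CAP, alpha=ALPHA, leak_w=LEAK)
slow = task_energy(CYCLES, freq_hz=0.5e9, vdd=1.0, cap_f=CAP, alpha=ALPHA, leak_w=LEAK)
dvfs = task_energy(CYCLES, freq_hz=0.5e9, vdd=0.8, cap_f=CAP, alpha=ALPHA, leak_w=LEAK)

print(f"full speed: {fast:.3f} J, half speed: {slow:.3f} J, "
      f"half speed with lower Vdd: {dvfs:.3f} J")
```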
Of course, added Anand Iyer, director of product marketing for Calypto, architects want to design their part to the power budget; it doesn’t matter how it gets implemented. “We look at the problem slightly differently. We look at where the opportunities for saving power exist the most. That gives some indication of where the power is being lost. Today the architect’s focus is on designing a part that conforms to all the PPA budgets, but RTL designers are not really educated on power per se, mainly because power is a global phenomenon, so they focus mostly on their area and performance. As a result, they leave a lot of power on the table at RTL.”
He pointed to the well-known power curve that shows total power is going down. “About 80% of the potential power savings comes at the RTL phase, with 20% at the back end. If you look at that and then break it down into leakage and dynamic power, we can see that at the microarchitecture level you can reduce power drastically on both leakage and dynamic. But once the RTL is fixed, there is not much dynamic power left to recover, maybe 10%. On the leakage side, you can get leakage back through various types of optimization, such as multi-Vt swapping and final leakage recovery based on the available timing. With those kinds of things you can probably recover another 50% of the leakage. For dynamic power, there are only a few optimizations available downstream. Today in advanced designs, the multibit register is being used a lot. At the same time, most of the power savings is there at the RTL level.”
Iyer believes a couple of things are preventing those power savings from being realized. He refers to them as estimation bias and optimization bias. “Let’s say you can get 100% power savings at the RTL, but today the RTL designers are realizing only 60% out of that 100%. 20% goes to the optimization bias, where the power is lost because they don’t have tools that are capable of actually getting back that power. Just doing clock gating doesn’t cut it. The other 20% is the estimation bias, which is that the power estimation accuracy at the RTL is very coarse. If you’re estimating power savings and the actual power savings is lower than that, that again causes that power not to be recovered. Because of the estimation bias, designers will implement certain low-power techniques at the RTL, but then they have to implement the design, i.e., go through synthesis, go through clock tree, then they estimate the power and see whether it matches the power budget they had. Then they have to go all the way back to RTL to fix it if there is an issue. This loop is pretty long today, and there are no real estimation tools or exploration tools that can close that gap.”
The answer, of course, is tools that allow for maximum power optimization across the sequential boundary, along with estimation tools that can close the accuracy gap between the downstream implementation and the RTL.
Architecture is key
Krishna Balachandran, product marketing director for low power at Cadence, agreed there is more power lost at the architectural level than at silicon, because in silicon there are so many controls and so many well-understood techniques applied at the process level. “The foundries are doing work on it. Take finFET, for example. That’s a big step in improving the leakage because of the nature of the device. They pretty much brought the spiraling-out-of-control leakage into check when they introduced that. So [the focus] went back to dynamic power control, which is a design technique. Whether you look at the circuit design level inside the IPs or at the chip level, techniques that have been in wide deployment, like power shutdown and low-Vdd standby, have been adopted in a whole number of designs. Those techniques are well understood and well done in terms of how they are implemented. If people are not using some of the techniques, they might be giving up something, but the problem and solutions are well understood.”
In contrast to that level of understanding at the silicon level, the other end of things, the architecture level, is still very much an evolving area, he said. “First of all, there is no standard way to measure it at the architectural level or to see how it correlates with downstream silicon — that part has not been solved. There are proxies for that, and you can do some calibration based on running actual designs through, but the rest of it is more of a black art at this point.”
Further, in terms of comparing the opportunity for power savings between architecture and silicon, Arvind Narayanan, product marketing manager for the Place and Route Division at Mentor Graphics, pointed out that the level of abstraction of the design has much to do with it. “If you go from ESL to RTL to gate level and then to the silicon, you typically have a lot more opportunity to save power, and the magnitude is much larger, at the architectural level, and this keeps going down. As an example, if you’re able to save let’s say 10X in your power consumption by making an architectural change, that translates to maybe 2X when you go to the RTL level. And then as you go to the gate level and silicon, you’re looking at maybe 30% or 40% savings in power. This is the trend that we usually see. The best bang for the buck in terms of saving power comes from the architectural level, and it also makes it easier for RTL and gate-level implementation to realize what is being predicted up front. To begin with, if the architecture of the design is not good in terms of power, that directly cascades and translates to worse power numbers as you go to implementation.”
Having said this, in a comparison between the architectural level and the gate level, the gate level has a lot more impact on power savings as the design goes through physical implementation. “You do tend to balance power with other design metrics like clock speed and area. It’s a balancing act to try and figure out which one is more critical than the other, and unless you meet your performance goals, you’re not going to save power,” he added.
Don’t squander the power budget
When it comes to squandering the power budget, Murphy said the biggest culprits are software and users. “The greatest percentage of power, or more exactly energy, which can be saved (by far), is in better power awareness at the OS and applications levels. There is a story about a certain phone’s contacts app, which in some versions of the OS pinged frequently to make sure the contacts list was as up-to-date as possible. Fixing that one feature (so contacts only update on demand) increased battery life between charges by a significant factor — something like double or better. Then there’s the user problem: being unaware that you have to actively close an app to stop it from running. Over time I have seen 80+ apps open on a phone, with the owner swearing they are going to toss it because it can’t keep a charge. Provide a little more user feedback on power (charge remaining, apps open and how much they are using) and this problem will go away. Samsung is doing some of this now on the latest Galaxy phones.”
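As a rough illustration of the kind of application-level waste Murphy describes, the sketch below compares how often a radio has to wake up under periodic polling versus on-demand syncing. The sync rates, the per-wakeup energy cost and the battery size are all assumptions made for illustration, not measurements of any real phone.

```python
# Hypothetical comparison of periodic polling vs. on-demand contact sync.
# Every figure here is an assumed value for illustration, not a measurement.

WAKE_ENERGY_J = 2.0     # assumed cost of waking the radio and syncing once
BATTERY_J = 40_000.0    # roughly a 3000 mAh battery at 3.7 V (~11 Wh)

def daily_sync_energy(syncs_per_day):
    """Energy per day spent only on sync wakeups, in joules."""
    return syncs_per_day * WAKE_ENERGY_J

polling = daily_sync_energy(24 * 12)   # poll every 5 minutes, around the clock
on_demand = daily_sync_energy(10)      # sync only when the user opens contacts

for name, joules in (("poll every 5 min", polling), ("update on demand", on_demand)):
    print(f"{name:>18}: {joules:7.1f} J/day "
          f"({100 * joules / BATTERY_J:.2f}% of the battery)")
```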
More can be done from an architectural standpoint to deal with these issues, to be sure, but most of that depends on architect ingenuity and less on design tools or methods, Murphy offered. “[ARM’s] big.LITTLE is an example. You can run low-performance applications on a low-frequency, low-voltage processor and only turn on a high-performance, high-voltage processor when you absolutely need it.”
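A toy illustration of that idea follows. The core selection rule, frequencies, voltages and capacitance figures are invented for the sketch; they do not describe ARM's actual scheduler or silicon.

```python
# Toy model of a big.LITTLE-style policy: keep work on a slow, low-voltage core
# and only wake the fast, high-voltage core when the load demands it.
# All operating-point numbers are invented for illustration.

from dataclasses import dataclass

@dataclass
class Core:
    name: str
    freq_hz: float
    vdd: float
    ceff: float   # effective switched capacitance per cycle (farads)

    def power_w(self) -> float:
        return self.ceff * self.vdd**2 * self.freq_hz

LITTLE = Core("little", freq_hz=0.6e9, vdd=0.8, ceff=0.3e-9)
BIG    = Core("big",    freq_hz=2.0e9, vdd=1.1, ceff=1.0e-9)

def pick_core(required_ops_per_s):
    """Use the little core whenever it can keep up; otherwise wake the big one."""
    return LITTLE if required_ops_per_s <= LITTLE.freq_hz else BIG

for load in (0.2e9, 0.5e9, 1.5e9):   # ops/second demanded by the workload
    core = pick_core(load)
    print(f"load {load/1e9:.1f} Gops/s -> {core.name} core, ~{core.power_w():.2f} W")
```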
There are lots of other tricks like this, including:
—Run fast then stop, which can be applied in some cases to processes that may use less energy if run quickly to complete some task and then shut down for longer idle periods (see the sketch following this list).
—Cycling memories between on and off states. Leakage depends on temperature, so if the memory is allowed time to cool while shut down, leakage in the on state can be reduced.
—In theory, use of asynchronous logic, which should run at much lower power than synchronous logic.
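To illustrate the first of these tricks, here is a minimal "run fast then stop" comparison. It assumes leakage is only paid while the block is powered, and every number in it is invented for illustration rather than taken from the article.

```python
# "Run fast then stop" (race-to-idle) sketch: finish the work quickly, then
# power-gate for the rest of the period so leakage stops accruing.
# All numbers are invented for illustration.

PERIOD_S = 1.0               # the task must complete once per second
WORK_CYCLES = 2e8            # cycles of work needed per period
CEFF, LEAK_W = 1e-9, 0.08    # assumed switched capacitance and on-state leakage

def energy_per_period(freq_hz, vdd, gate_when_idle):
    active_s = WORK_CYCLES / freq_hz
    dyn_j = WORK_CYCLES * CEFF * vdd**2      # dynamic energy ~ cycles * C * V^2
    on_time = active_s if gate_when_idle else PERIOD_S
    return dyn_j + LEAK_W * on_time          # leakage only while powered on

race  = energy_per_period(freq_hz=1.0e9, vdd=1.0, gate_when_idle=True)
crawl = energy_per_period(freq_hz=0.2e9, vdd=1.0, gate_when_idle=False)

print(f"run fast then gate:    {race:.3f} J/period")
print(f"run slowly, always on: {crawl:.3f} J/period")
```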
However, he noted that these are very specialized techniques, usually not worth the trouble unless the use cases are fully understood. “Most architectural power optimization has to work with a fairly wide spectrum of possible use cases, which comes down to modeling power extrapolated from a close earlier design in a virtual or TLM model, or spreadsheet guesstimation for from-scratch designs, and using traditional methods like power islands and clock gating/de-rating.”
At the end of the day, the design engineer must consider a number of things when thinking about power, Traynar said. “Clearly reducing power, and reducing average power, is good because high power consumption has a detrimental effect on power rails and electromigration, so it’s a good idea to try to reduce that. But ultimately it’s the energy you’re consuming that matters, because you’re trying to make your battery last longer. Going back to the capacitance, the frequency and the toggling, the very simplest question, in terms of toggling, is how can you reduce the clock frequency, or how can you reduce the number of clock toggles you’ve got?”
On the other hand, he said, how can you reduce your data toggling at the same time? If you can look at those two aspects of your design, you’re going to be reducing clock toggling and also reducing the amount of data toggling happening on the capacitance in your design. Those are the areas where you need to start focusing your efforts to reduce power consumption, he concluded.
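To close, a minimal sketch of what "counting the toggles" looks like in practice appears below. The nets, capacitances and toggle rates are made up for illustration; the point is simply that dynamic power falls out of summing toggles times capacitance, so gating the clock to an idle block removes both its clock and its data activity from the total.

```python
# Toy dynamic-power estimate from per-net capacitance and toggle rate:
# P ~= sum over nets of 0.5 * C * Vdd^2 * toggle_rate.
# All nets, capacitances and toggle rates below are invented for illustration.

VDD = 0.9

# net name: (capacitance in farads, toggles per second, block the net belongs to)
NETS = {
    "clk_core":  (40e-12, 2e9, "core"),   # clock nets toggle every cycle
    "data_core": (120e-12, 3e8, "core"),
    "clk_dsp":   (30e-12, 2e9, "dsp"),
    "data_dsp":  (90e-12, 2e8, "dsp"),
}

def dynamic_power(active_blocks):
    """Sum switching power over nets whose block is not clock-gated."""
    total = 0.0
    for cap, toggles, block in NETS.values():
        if block in active_blocks:        # a gated block contributes no toggles
            total += 0.5 * cap * VDD**2 * toggles
    return total

print(f"everything running: {dynamic_power({'core', 'dsp'}) * 1e3:.1f} mW")
print(f"dsp clock gated:    {dynamic_power({'core'}) * 1e3:.1f} mW")
```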