Predictions of doom may be overstated, but there are challenges looming in power reduction.
A common theme in semiconductor design circles today is the many power reduction techniques available to engineers to bring down the power in their devices. While power gating and other techniques are effective today, their success raises questions about how long they will continue to provide a benefit.
Anand Iyer, director of product marketing for Calypto, believes the answer really depends on the design itself. “It’s like asking how much weight loss is possible, because it depends on where you start—the design characteristics (for the end market), as well as how it was designed. The RTL can be written in a very ‘loose’ way, or designed for one industry and then repurposed for another. It may have all of the characteristics of the original one, so there you may find more opportunities to save power, and you may be doing more power reduction techniques.”
He added that while design characteristics are defined by power, area and timing, it’s really the market that decides these characteristics. The original design can be compared with the characteristics needed by the market and optimized accordingly.
When posed the question of whether there is a limit to power reduction techniques, Mary Ann White, director of product marketing for the Galaxy Design Platform at Synopsys, answers with a definitive no. “I don’t think we’ll ever hit a limit. People adapt, so whenever there is something new—whether it is a new application for an end market or a new process technology like finFET—they adapt the low-power techniques based on that. With any new process node or with any new change, something always comes out because of it. Even power gating, which has been around in academics since the ’60s, when did it actually become reality? It was the mid- to late-’90s before it even started showing up on real silicon. I can’t say I’m keeping up academically with what low-power techniques are coming up in schools right now but, for sure, anything that will save power will definitely be useful in any application or any space.”
This talk about hitting the power wall has been going on for four or five years at least, if not more, noted Krishna Balachandran, product marketing director for low power at Cadence. “Also discussed is the death of Moore’s Law; Moore’s Law is dead because you cannot go on increasing the performance without heating up the chip and making it look like a nuclear reactor. There have been a lot of graphs explaining that. Subsequent to that, designers started working on multicore architectures, exploiting a parallel GPU type of architecture, which can accelerate a lot of functions, and you have to program it in a completely different way. There’s been a lot of talk about all of that.”
To net it all out, he said, power-reduction techniques are things that can be done at a circuit level, a chip level and at an architecture level — giving multiple levels of control for how power is dealt with and managed. “Those techniques have been deployed by various customers and there’s still a lot of talk about deploying new techniques.”
One of these is the growing importance of software. “Looking at software and how software plays with the hardware is very much a determining factor in how much power you can save. So from a software point of view/application point of view, how can you use less-expensive operations in terms of power? How can you stress parts of a chip or a system that are power hungry, and how can you identify those early on? Those are becoming really important to the industry, and there’s a lot of talk about optimizing power from a software application point of view. In fact, there’s even a standards body that has been formed (ACPI, Advanced Configuration and Power Interface), an open spec co-developed by a number of companies including HP, Intel, Microsoft and Toshiba, all saying that devices have to have power-saving modes; the computer motherboard, BIOS, OS, everything should support that, and it has to be done from the hardware level, as well,” Balachandran explained.
So, have the power management techniques stopped? “The quick answer to that is no,” he said, “People are trying to find other ways of eking out that power savings.”
In that quest for optimal power reduction, Balachandran believes high-level synthesis holds promise. “If you begin early on, there is a realization that you can save a lot of power versus waiting for the RTL and then starting, which gives you only that much control to reduce the power. If you can describe your functionality at a high level like C or SystemC, then if you have a high-level synthesis tool that can work on it and use power as one of the cost functions in addition to timing and area, and can generate a number of different micro-architectures, you can plot a curve and pick your power/performance/area spot on that curve. It depends on your target application. You can use the high-level synthesis to kind of guide you to choose what’s right for your application.”
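The curve-based selection Balachandran describes can be sketched in a few lines of code. This is a hypothetical illustration, not output from any HLS tool; the candidate micro-architectures and their power/latency/area numbers are invented for the sketch.

```python
# Hypothetical illustration: picking a power/performance/area (PPA) point
# from micro-architecture candidates an HLS tool might generate.
# All names and numbers below are invented.

candidates = [
    # (name, power_mW, latency_ns, area_um2)
    ("1-adder, shared",    4.2, 120, 1800),
    ("2-adder, pipelined", 6.8,  70, 2600),
    ("4-adder, unrolled", 11.5,  40, 4900),
    ("8-adder, unrolled", 19.0,  35, 9200),
]

def dominates(a, b):
    """True if candidate a is no worse than b in every metric and better in at least one."""
    return (all(x <= y for x, y in zip(a[1:], b[1:]))
            and any(x < y for x, y in zip(a[1:], b[1:])))

# Keep only Pareto-optimal micro-architectures (the "curve").
pareto = [c for c in candidates
          if not any(dominates(other, c) for other in candidates)]

# Pick the lowest-power point that still meets a latency target.
LATENCY_BUDGET_NS = 80
feasible = [c for c in pareto if c[2] <= LATENCY_BUDGET_NS]
best = min(feasible, key=lambda c: c[1])
print(best[0])  # -> 2-adder, pipelined
```

The same selection logic applies whatever the target application is; only the constraint (here a latency budget) and the metric being minimized change.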
The design-for-power approach addresses power holistically across design abstraction levels including system, architectural, netlist, and physical implementation, in addition to technology and process advancements. New ways of reducing power continue to evolve, such as the recent finFET and FD-SOI technologies that address leakage. Preeti Gupta, director of RTL product management at ANSYS, observed that new RTL (register transfer level) power reduction techniques continue to be identified and implemented, and that RTL abstraction is now mainstream to address power – it can deliver the needed accuracy early in the flow, when there can be a high impact. But existing and new power reduction techniques cannot be used blindly. Gupta said there is a point of diminishing returns with these techniques, which means that at each abstraction level there must be an analysis-driven approach to measure not only power but its impact on other design parameters.
“The key is that power reduction has to be in the context of other design constraints,” she said. “Power cannot be optimized beyond it impacting the timing requirements or form factor. That really is the constraint.”
The graph below shows diminishing returns for power on a multimedia processor design. Apache says it identified about 300 RTL changes that could save power. The green line plots cumulative power savings while the red indicates cumulative area overhead against an increasing number of power-sorted RTL changes.
Gupta noted two key aspects that stand out. First, with predictable analysis at each abstraction level, you can identify the controlled number of changes that save the most power. You can then control power versus other design parameters as relevant to your application. Second, you can maximize power savings and minimize impact on other design parameters by eliminating design changes that have insignificant power impact. Almost half of the identified RTL changes saved minimal power while still impacting other design parameters.
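That pruning step can be made concrete with a small sketch: rank candidate RTL power edits by savings, accumulate, and cut the long tail of changes whose savings don't justify their overhead. The (savings, area-overhead) pairs and the threshold below are invented for illustration, not data from the multimedia processor study.

```python
# Hypothetical analysis-driven pruning of RTL power edits.
# Each tuple is (power_saved_mW, area_overhead_um2); numbers are invented.
changes = [(5.0, 30), (3.5, 10), (2.0, 25), (0.4, 40), (0.3, 15),
           (0.2, 35), (0.1, 20)]

# Rank by power saved, largest first ("power-sorted" changes).
changes.sort(key=lambda c: c[0], reverse=True)

kept, total_saved, total_area = [], 0.0, 0.0
MIN_SAVINGS_MW = 0.5  # assumed cutoff below which an edit isn't worth its overhead
for saved, area in changes:
    if saved < MIN_SAVINGS_MW:
        break  # diminishing returns: remaining edits mostly add area, not savings
    kept.append((saved, area))
    total_saved += saved
    total_area += area

print(len(kept), total_saved, total_area)  # -> 3 10.5 65.0
```

In this toy data set, keeping only 3 of 7 edits captures 10.5 mW of the 11.5 mW available while avoiding most of the area overhead, mirroring the shape of the curve described above.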
A way to go
Still, the majority of design teams have a way to go yet, according to Bernard Murphy, CTO at Atrenta, judging by what he sees in the most advanced mobile applications. “You can get pretty sophisticated with fine-grained DVFS and fine-grained clock gating. In mobile apps, a lot of what limits use of these techniques is not so much technology as not being sure about use-cases. ‘Can I really turn this clock down or off and not get into trouble in any reasonable use-case?’ Even where you do understand use cases, using these fine-grained techniques requires much more sophisticated verification, so a possible wall for mainstream design may just be the ROI on doing this level of verification.”
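The leverage behind voltage- and frequency-scaling techniques such as DVFS comes from the classic CMOS dynamic power relation, P ≈ α·C·V²·f: because voltage enters squared, lowering voltage together with frequency saves power super-linearly. A back-of-the-envelope sketch, with all numbers invented:

```python
# Back-of-the-envelope DVFS estimate using the classic dynamic power model.
# alpha = activity factor, C = switched capacitance, V = supply, f = clock.
# All parameter values below are illustrative only.

def dynamic_power(alpha, c_farads, v_volts, f_hz):
    """Dynamic switching power: P = alpha * C * V^2 * f."""
    return alpha * c_farads * v_volts**2 * f_hz

nominal = dynamic_power(alpha=0.2, c_farads=1e-9, v_volts=1.0, f_hz=1.0e9)
scaled  = dynamic_power(alpha=0.2, c_farads=1e-9, v_volts=0.8, f_hz=0.7e9)

print(round(scaled / nominal, 3))  # -> 0.448
```

Dropping voltage 20% and frequency 30% cuts dynamic power by more than half in this model, which is why the verification cost of proving a clock can safely be slowed or gated, as Murphy notes, can still be worth paying.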
Also, better use of power starts with the applications, he asserted. “You can have much more impact on battery life tuning at this level than you can at the design level. Design can help, but only if the apps take advantage of the help.” As such, he expects to see more effort going into guidance, e.g., instrumenting designs to compute (and deliver back to software) how much power is being used in various components.
Looking in particular at the memory subsystem, there are several contributors to power consumption, including the DRAM core voltage and the I/O signaling, said Frank Ferro, senior director of product management at Rambus. The company has been focusing on creating high-speed I/O signaling while working within the current power budgets for standards-based DDR and LPDDR.
He said mobile memories, for example, go into power down states when not active, which saves power. But what about applications where the memory is active most of the time? “To help reduce power in active mode, Rambus’s LPDDR3 PHY has what we call ‘R+’ mode. This mode gives SoC designers the option to configure the PHY signaling to minimize power consumption when paired with a compatible DRAM.” The approach utilizes low-voltage swing terminated logic for the data, mask and strobe pins, which is where most of the I/O switching activity occurs during normal operation. Rambus claims this can save up to 25% in the memory subsystem.
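The intuition behind low-swing signaling can be illustrated with a rough first-order model. This is not Rambus data: a simple unterminated-line approximation (energy per transition ~ C·Vdd·Vswing) is used here, and all capacitance, voltage and toggle-rate numbers are invented. Note it models only I/O switching power, not the whole memory subsystem, which is why the sketch's savings come out far larger than the 25% subsystem-level figure quoted above.

```python
# Rough, first-order illustration (not Rambus data) of why reducing the
# voltage swing on heavily toggling data/mask/strobe pins cuts I/O power.
# Approximation: switching power of one line ~ C * Vdd * Vswing * toggle rate.
# All numbers are invented.

def io_switching_power(c_line_f, vdd, vswing, toggle_rate_hz):
    """Approximate switching power of a single I/O line."""
    return c_line_f * vdd * vswing * toggle_rate_hz

full_swing = io_switching_power(5e-12, 1.2, 1.2, 800e6)  # conventional full-swing I/O
low_swing  = io_switching_power(5e-12, 1.2, 0.4, 800e6)  # reduced-swing signaling

print(round(1 - low_swing / full_swing, 2))  # -> 0.67
```

In this toy model, cutting the swing to a third of the supply removes two-thirds of the per-line switching power; real terminated low-swing schemes also spend power in the termination, so actual savings depend heavily on the signaling details.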
Of course, EDA providers are constantly coming up with new techniques for doing things, White reminded. For example, because of the rise of finFETs, and dynamic power becoming a big issue, Synopsys’ latest IC Compiler release has specific optimizations just for finFET because of channel length derivatives. This was not an issue, say, five years ago, and is another way of shaving power, taking all of this into account.
“Maybe power optimization and doing more power techniques might take a little more design time or more compute resources or a little more know-how, but think about it: The multicore thing has come out and that’s sped up quite a bit. And that’s just from a straight optimization technique perspective. Yes, there is a startup cost for learning these things, but we’re finding that even saving 5% power here and there, people will do that because 5% is a lot for many people,” White added.
Clearly, there are many techniques that have been used to reduce power in SoC designs, many of which have relied on process shrinks (i.e., lower supply voltages) and EDA tools to reduce power consumption. “These techniques have been popular since the impact on the design effort/schedule is low and power reduction benefits can be good. Even with these power improvements, the potential for more significant power savings can be derived by examining the SoC architecture along with the applications it will support — so we still have a long way to go before we ‘hit the power wall,’” Ferro concluded.
For more discussion on power techniques, see this recent blog entry.