Optimizing IP For Power

Most commercial IP is a black box, but it still has to fit into the system power budget.

By Ed Sperling
As the amount of commercial IP in an SoC increases, the entire bill of materials is coming under increasing scrutiny because of a new concern: power. Commercial IP, after all, is largely a collection of black-box solutions used to shorten the time it takes to bring a chip to market, and frequently to improve quality, but its cumulative impact on the system power budget has never been fully charted.

This is particularly relevant in complex, densely packed SoCs at advanced nodes, where it’s not always clear how to optimize for power. In some cases, the best solution is an educated guess about how the IP is likely to be used; the IP then has to be characterized around those usage models.

“The problem is that it’s not just the IP that you have to characterize,” said Erich Marschner, product marketing manager at Mentor Graphics. “It’s also the system architecture, the software architecture and the protocols. Even with something as simple as shutoff, to shut something off requires you to add logic. So to conserve power, you have to predict how long it will be down. But if you shut it off and it’s brought up too soon, that will use more power. One of the biggest issues is usage patterns or system scenarios.”
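Marschner’s shutoff example can be reduced to a simple break-even calculation: gating a block saves net energy only if the leakage eliminated while it is off outweighs the one-time cost of the shutdown and wakeup sequences. A minimal sketch, where all of the numbers are purely illustrative assumptions:

```python
# Illustrative break-even model for power gating a block.
# All values are hypothetical; real ones come from characterization.

LEAKAGE_SAVED_MW = 2.0    # leakage power eliminated while gated (mW)
SHUTDOWN_ENERGY_UJ = 5.0  # energy cost of the shutdown sequence (uJ)
WAKEUP_ENERGY_UJ = 15.0   # energy cost of restoring state on wakeup (uJ)

def break_even_ms(leakage_saved_mw, overhead_uj):
    """Minimum off-time (ms) before gating saves net energy.

    Leakage energy saved grows linearly with off-time; gating wins once
    it exceeds the fixed shutdown-plus-wakeup overhead. uJ / mW = ms.
    """
    return overhead_uj / leakage_saved_mw

t_min = break_even_ms(LEAKAGE_SAVED_MW, SHUTDOWN_ENERGY_UJ + WAKEUP_ENERGY_UJ)
print(f"Gating pays off only for idle periods longer than {t_min:.1f} ms")
# -> 10.0 ms. Wake the block any sooner and gating costs more than it
#    saves, which is why predicting how long it will be down matters.
```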

Those usage patterns can have huge swings, too, which can dramatically affect how IP is used and how it needs to be characterized. Rather than fixed numbers, these tend to fall into the realm of distributions and probabilities. A person using a smart phone for voice calls and e-mail has a vastly different profile than someone who plays games and watches streaming video.
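One way to reason about such swings is to treat block power as an expectation over a distribution of scenarios rather than as a single number. A toy sketch, where both the per-scenario figures and the user-profile weights are invented for illustration:

```python
# Hypothetical per-scenario average power for an IP block (mW), as it
# might come out of characterization runs under each usage model.
scenario_power_mw = {"voice_email": 120.0, "gaming": 850.0, "streaming": 600.0}

# Assumed probability of each scenario for two user profiles; these
# weights, not the silicon, are where most of the uncertainty lives.
light_user = {"voice_email": 0.90, "gaming": 0.02, "streaming": 0.08}
heavy_user = {"voice_email": 0.30, "gaming": 0.40, "streaming": 0.30}

def expected_power_mw(weights):
    """Probability-weighted average power across usage scenarios."""
    return sum(p * scenario_power_mw[s] for s, p in weights.items())

print(f"light user: {expected_power_mw(light_user):.0f} mW")  # 173 mW
print(f"heavy user: {expected_power_mw(heavy_user):.0f} mW")  # 556 mW
```

If the assumed weights are wrong, so is the optimization point, which is exactly the fuzziness Marschner describes next.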

“This is very fuzzy business,” said Marschner. “It’s all based on assumptions, but if your assumptions are wrong then it might not be optimized.”

It’s also based on more than just the IP itself. While usage models are critical to effectively characterizing IP, there are other factors to consider. One involves the voltage at which the IP will operate. There has been a big push by chipmakers to lower the overall voltage, which in the case of complex SoCs actually means multiple voltages. There’s no single formula for making this work, however, because it depends on everything from the noise limits of transistors to proximity issues to the kind of transistor being used. A finFET, for example, can run at a lower voltage simply because there is less leakage. But what does that mean for an entire subsystem that includes finFETs alongside commercial IP that wasn’t designed to work with 3D transistors or at different operating voltages?

“If you can reduce the voltage by 20%, you get a 45% power improvement,” said Chris Rowen, chief technology officer at Tensilica. “Some things can be taken down to 0.6 or 0.7 volts if you sacrifice some operating frequency.”
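The arithmetic behind numbers like these follows from the classic dynamic power relation, P ≈ αCV²f. A quick check, where the scaling exponents, and the guess about what the quoted 45% folds in, are assumptions:

```python
# Dynamic power scales roughly with the square of supply voltage
# (P ~ a*C*V^2*f), and with its cube if frequency tracks voltage.
v_scale = 0.8  # a 20% voltage reduction

savings_fixed_f = 1 - v_scale ** 2   # frequency held constant
savings_scaled_f = 1 - v_scale ** 3  # frequency lowered along with voltage

print(f"dynamic savings, fixed frequency:  {savings_fixed_f:.0%}")   # 36%
print(f"dynamic savings, scaled frequency: {savings_scaled_f:.0%}")  # 49%
# The 45% figure quoted above plausibly sits between these two cases
# once leakage reduction at the lower voltage is counted (assumption).
```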

But what effect that has on IP isn’t so clear. There are papers on this stuff, and some work has been done in research laboratories or with test chips. But how that applies to commercial IP is unknown, because not all IP is even characterized for power, and not all power characterization is the same.

“The challenge for people driving optimization of platforms is to make task migration to more specific power scenarios as transparent as possible,” said Rowen. “It starts with libraries and partitioning of the task. So if you’re dealing with graphics, imaging and audio, you can draw a line between the high-level application and the parts doing the heavy lifting. But that also requires collaboration between the operating system, the application and the hardware platform to expose the applications.”

Dynamic vs. leakage power
What’s also apparent at advanced nodes, as density increases, is the tight relationship between dynamic and leakage power. Both can create thermal issues, affect signal integrity, and drain a battery. But not all IP is characterized for both. If it is characterized for dynamic power, which is the more likely of the two, it requires some guesswork and math to figure out the leakage power.

“For us, dynamic power is the starting point,” said Mary Ann White, director of Galaxy Implementation Platform Marketing at Synopsys. “Then you can start looking at things like multi-bit registers and data paths to try to improve things that are tiled and regular. There are ways of doing dynamic power savings for that. For IP, you would follow the same methodology for the standard implementation of a block. That block can then be characterized to include the power and timing information. If it’s a black box, it has no power or timing information. But most of the time, if you want to do IP re-usability, instead of doing a block with all the standard cells in it you can do an abstract model with the timing and power characterization. At the very least, Liberty has all of these attributes with which you can specify the power characterization.”

She said that from the design engineer’s perspective, it’s relatively straightforward to calculate the leakage power from the dynamic power. “It’s up to us—the EDA vendors—to optimize and use that. What bells and switches does the user turn on? What’s my clock tree going to look like? Am I going to use a mesh? How do I want to optimize power versus performance?”
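What that back-calculation can look like in practice, as a toy sketch: if a block’s characterization supplies a dynamic number but no leakage figure, one crude approach is to assume a technology-dependent leakage share of total power. The fraction and every number below are assumptions for illustration, not characterization data:

```python
# Toy power roll-up for a block with partial characterization. If
# leakage was never characterized, back it out from an assumed
# technology-dependent leakage share of total power.

def total_power_mw(dynamic_mw, leakage_mw=None, leakage_fraction=0.3):
    """Return (total, leakage) in mW.

    leakage_fraction is a guessed share of total power lost to leakage
    at the target node. From P_leak = f * P_total and
    P_total = P_dyn + P_leak it follows that
    P_leak = P_dyn * f / (1 - f).
    """
    if leakage_mw is None:
        leakage_mw = dynamic_mw * leakage_fraction / (1 - leakage_fraction)
    return dynamic_mw + leakage_mw, leakage_mw

total, leak = total_power_mw(dynamic_mw=70.0)
print(f"estimated leakage: {leak:.1f} mW, total: {total:.1f} mW")
# -> estimated leakage: 30.0 mW, total: 100.0 mW
```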

Optimization required at all levels
The rule of thumb is that changes made at the architectural level are more effective in saving power than those made at later stages. What’s also clear is that designs are becoming so complex that changes are required across the entire design flow, and that even the best-conceived plans will go awry.

That explains why so much work is still being done at the register-transfer level, where power is easier to actually measure, rather than at the architectural modeling stage where the impact would be greatest. But even at RTL it’s not that simple.

“We’ve found that to really do RTL power optimization well, with automated power reduction, you need accurate estimation, verification of power intent, whether it’s UPF or CPF, and reduction,” said Mike Gianfagna, vice president of corporate marketing at Atrenta. “And you need all three or it doesn’t work. We’ve seen a lot of applications where people try to insert power management into IP, but there are so many updates late in the process that when you try to re-use that IP you run into problems.”

Gianfagna said a big challenge is providing automation to link changes back to RTL. That appears to be a consistent theme across the entire design flow, as well. What gets tweaked in one area needs to be reflected in another.

He’s not alone in that assessment. Thomas Bollaert, senior director of applications engineering at Calypto, said there are three considerations for optimizing IP for power.

“The first is that IP is supposed to be done, so optimizing it in the first place is somewhat radical,” Bollaert said. “The second, if you assume that it can be optimized, is that IP exists as RTL without a higher-level representation, and you need to automate the power optimization because manually making changes is like Mission Impossible. The third consideration is that there is only so much you can do with the existing RTL, so if you really want to optimize it you have to convert it to C or SystemC. That gives you a wider ability to optimize for a technology node, a power budget and a performance goal.”

But it’s not only the IP that has to be optimized for power. It’s also the tools used for the optimization in the first place.

“One of the problems is that the models we use for standards don’t deal with the power state,” said Mentor’s Marschner. “We need to improve the libraries for the power models and make them available in a way that feeds back into the flows. But it will take time to develop models that include the power consumption of blocks. We’re just starting to sketch out what’s next in UPF 1801. One issue under consideration is the system-level power intent and how you do power budgeting for C models. What happens if power is a function of X, and X is flexible? And if you make tradeoffs, how do you make sure those are appropriate for power?”
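Marschner’s “power as a function of X” question points toward parameterized power-state models. A hypothetical sketch of the kind of thing a system-level model might expose, with invented states and coefficients:

```python
# Hypothetical parameterized power-state model for one block. Power in
# the active state is a function of a free parameter x (say, normalized
# workload): the "power as a function of X" flexibility in question.

POWER_STATES_MW = {
    "OFF":       lambda x: 0.0,
    "RETENTION": lambda x: 0.5,              # state held, logic gated off
    "ON":        lambda x: 10.0 + 40.0 * x,  # idle floor plus activity term
}

def check_budget(state, x, budget_mw):
    """Evaluate a state's power at operating point x against a budget."""
    p = POWER_STATES_MW[state](x)
    return p, p <= budget_mw

for x in (0.2, 0.9):
    p, ok = check_budget("ON", x, budget_mw=40.0)
    print(f"ON at x={x}: {p:.1f} mW, within budget: {ok}")
# ON at x=0.2: 18.0 mW, within budget: True
# ON at x=0.9: 46.0 mW, within budget: False
```

Budgeting against such a model only works if the tradeoffs made at each operating point stay appropriate for power, which is the open question Marschner raises.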

Conclusion
What’s needed is a way of developing system-level power models that can identify problems at a higher level of abstraction, but which also can reflect changes in power at all levels of the design. Within that scenario, the various building blocks—IP, transistors of all types and software—need to be characterized independently and for how they interact with each other against a number of possible usage models.

Because this is a relatively new requirement—and still not completely worked through by the supply chain—new standards will be required and others will have to be extended. Even then, it’s uncertain how all of the pieces will work. But like all good engineering, one piece can be built on another, improved from there, and reassembled in multiple ways with more solid information developed over time.

It’s clear that none of this stuff will happen overnight. But at least a lot of people are thinking about it seriously right now, and that’s a good first step.


