Tools for developing power models are beginning to hit the market, but getting accurate data isn’t easy; limiting the number of choices may be one option.
By Ed Sperling
The push to develop power models is growing at each node, and at 22nm it will be virtually impossible to proceed without them.
Providing this kind of model is easier said than done, however. Creating an accurate power model requires accurate data from all the other pieces on a chip that potentially can affect the power. That includes how third-party IP is actually used, the interaction of multiple states, and even how software utilizes a processor.
Consider, for example, a virtualization layer that is added into a consumer device—an approach now under widespread consideration among device manufacturers because not all of the functions can take advantage of multiple cores. At the architectural level this makes perfect sense, because virtualization simultaneously maximizes performance and utilization, which is a winning formula for efficiency. The problem is that using more cores also uses more energy, and the distribution of use may vary greatly depending on the applications and how they interact. Running multiple games, for example, could drain a battery in a fraction of the time it would normally last for a voice call or for playing music, and multitasking in general accelerates the drain.
That’s only part of the issue, though. Higher utilization generates more heat, driven by both dynamic current and static leakage current. The more functions in use, the greater the dynamic (or switching) current. That can affect everything from signal integrity to the ability of memory to function properly to the overall lifespan of a device. And it can make modeling extremely difficult.
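As a rough illustration of why utilization matters, a first-order estimate of the power drawn by a set of cores can be built from the familiar dynamic-power relation (alpha · C · V² · f) plus a per-core leakage term. The Python sketch below is a toy example; the capacitance, voltage, frequency, leakage and battery figures are placeholder assumptions, not numbers from any real device.

```python
# Minimal sketch: first-order power estimate for a multicore SoC.
# All constants are illustrative assumptions, not measured silicon data.

def core_power(alpha, c_eff, vdd, freq, leakage):
    """Dynamic power (alpha * C * V^2 * f) plus static leakage, in watts."""
    return alpha * c_eff * vdd ** 2 * freq + leakage

# Assumed per-core parameters (placeholders).
C_EFF = 1.2e-9      # effective switched capacitance, farads
VDD = 1.0           # supply voltage, volts
FREQ = 1.0e9        # clock frequency, hertz
LEAKAGE = 0.05      # static leakage per powered-up core, watts

# One lightly loaded core vs. four heavily loaded cores.
light = core_power(alpha=0.1, c_eff=C_EFF, vdd=VDD, freq=FREQ, leakage=LEAKAGE)
heavy = 4 * core_power(alpha=0.4, c_eff=C_EFF, vdd=VDD, freq=FREQ, leakage=LEAKAGE)

BATTERY_WH = 5.0    # assumed battery capacity, watt-hours
print(f"light load: {light:.2f} W -> ~{BATTERY_WH / light:.1f} h of battery")
print(f"heavy load: {heavy:.2f} W -> ~{BATTERY_WH / heavy:.1f} h of battery")
```

Even in this crude model, spreading heavy work across four cores raises both the switching term and the leakage term at once, which is why multitasking cuts battery life far faster than a single lightly loaded application.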
“This is a function of the operating system, or whatever software layer you’re using,” said Rob Aitken, an ARM fellow. “You determine the wake-up time and if it’s supposed to shut down different cores. But you can’t power it up right away because the IR drop would be too large, so you have to power up slowly. That means you have to model a speed limit on how quickly it wakes up.”
The challenges grow as more voltages are added for different CPUs. “If you’re operating a CPU at one voltage and the next one at a different voltage, you get an IR drop across the buses,” said Aitken.
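Aitken’s “speed limit” can be modeled very simply: the inrush current that charges a power-gated domain back up causes an IR drop on the grid shared with logic that is still running, so the supply ramp rate has to be bounded. The sketch below is a heavily simplified, hypothetical model in Python; the capacitance, grid resistance and droop budget are assumptions chosen only to illustrate the relationship.

```python
# Minimal sketch of the "speed limit" on waking up a power-gated core.
# The inrush current that recharges the domain's capacitance produces an IR
# drop on the shared grid, so the supply ramp rate must be capped.
# All numbers are illustrative assumptions, not data from a real design.

VDD = 1.0             # target supply voltage for the woken core, volts
C_DOMAIN = 200e-9     # decap plus device capacitance of the domain, farads
R_GRID = 0.03         # effective grid resistance shared with active logic, ohms
DROOP_BUDGET = 0.03   # IR drop the already-running logic can tolerate, volts

# Largest inrush current that keeps the shared grid within its droop budget.
i_max = DROOP_BUDGET / R_GRID           # amps

# Inrush current is roughly C * dV/dt, so the ramp rate is capped too.
ramp_rate_max = i_max / C_DOMAIN        # volts per second

# That sets a floor on how quickly the core can be brought back up.
t_wake_min = VDD / ramp_rate_max        # seconds

print(f"max inrush current: {i_max:.2f} A")
print(f"max ramp rate     : {ramp_rate_max / 1e6:.1f} V/us")
print(f"minimum wake-up   : {t_wake_min * 1e6:.2f} us")
```

Tighten the droop budget or share the grid with more active domains and the minimum wake-up time stretches out, which is exactly the behavior a power model has to capture.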
New tools
Most of the large chipmakers have developed their own power models, which are specific to their particular designs. This isn’t something many chipmakers see as a core competency, however, which is why a number of EDA companies have staked out positions in this market.
One of the most ambitious efforts comes from Apache Design Solutions, which has created a chip power-modeling tool. It’s an important start, but the accuracy depends on a lot of other factors beyond Apache’s control. That explains why Apache is working with the GSA to create some standards in the IP world.
Startup Parallel Engines also provides details about available power information for about 12,000 pieces of IP. But the accuracy of that information varies, depending in part on how the IP is used.
“The power model of a chip needs to include accurate characterization of the multiple IPs that are included in the design,” said Dian Yang, Apache’s general manager. “But if those vendors supplying the IP do not give enough details about its power parameters and behavior, the resulting model will not be very accurate. Also, an accurate model needs to know things like the impedance of the die. But a simple power number based on an average estimation does not tell you that. You need a model that is based on transient analysis to address the dynamic behavior and the true impedance of the die.”
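Yang’s distinction between an average power number and a transient model can be shown with a toy example: two current profiles with the same average can produce very different peak IR drops through the die’s supply network. The Python sketch below uses made-up numbers and a single lumped resistance in place of a real die impedance, so it is an illustration of the principle rather than an analysis method.

```python
# Toy illustration: the same average current, very different peak IR drop.
# The numbers and the single-resistor "die impedance" are simplifying assumptions.

R_DIE = 0.05  # lumped effective supply-network resistance, ohms

# Two current profiles over ten time steps, both averaging 1.0 A.
flat   = [1.0] * 10
bursty = [0.2] * 8 + [4.2, 4.2]   # quiet, then two high-activity cycles

for name, profile in (("flat", flat), ("bursty", bursty)):
    avg = sum(profile) / len(profile)
    peak_drop = max(i * R_DIE for i in profile)
    print(f"{name:6s} avg current {avg:.2f} A, peak IR drop {peak_drop * 1000:.0f} mV")
```

An average-based estimate reports the same 1 A for both profiles, but only the transient view exposes the roughly 4X higher droop the bursty block imposes on everything sharing its supply.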
All three of the largest EDA vendors have worked to build power intent models, which help greatly on the functional verification side. Both Synopsys and Mentor Graphics back the Unified Power Format, while Cadence backs the Common Power Format. There has been work to bridge those two specifications by major standards organizations such as Si2 and Accellera. But no matter how much the EDA vendors and standards organizations insist that those differences are easy to bridge, that’s not the experience of chip companies.
“I have major issues with these standards,” said Sunil Malkani, director of IC design engineering for the GPS group at Broadcom. “The standards for power intent don’t work together, and sometimes the previous versions of those standards don’t work with the current standard.”
He’s not alone in that viewpoint. John Busco, senior manager for design implementation at Nvidia, said the very existence of competing standards defeats the purpose of having them in the first place.
“I’m a little more forgiving when the standards don’t do everything you want them to do,” he said. “My pet peeve is dueling standards like CPF and UPF.”
While EDA vendors publicly don’t like to challenge their customers or potential customers, they say privately that more often the fault lies with the IP and the way it’s being used than with the power intent models themselves. “The user can capture the intended behavior of the design already, and if they add a few more lines of code involving the IP they can make sure the power intent is captured, too,” said one EDA insider.
Mixed models
The power intent specifications are particularly important in the verification stage, which remains the most time-consuming part of the design process. Those power intent specs are integrated with the power models, allowing engineers to map the power limits of the chip and the safe parameters for operation. But in the IP world, and even when it comes to reusing blocks and subsystems, power models are not always available. At that point, the best that can be hoped for is that the existing models are power-aware.
“The biggest problem our customers are impacted by is legacy models,” said Prapanna Tiwari, CAE manager at Synopsys. “They were created when low power was not a concern, and the models don’t comprehend voltage. The second problem is that even if they want to create a power-aware model, they can’t do the entire power network in Verilog and hook it up to every power model that is being created.”
Limiting choices
Another major problem is the sheer number of choices that are available to designers and architects of these chips. The number of variables increases with each new process node, and it grows further with the proximity effects of other components in an SoC, packaging, what software is being used and how it is being used, multiple cores, multiple states, multiple voltages and, ultimately, 3D stacking. Add to that multiple IP options and the effects on power models become overwhelming.
“We may well see standards for limits on the number of power models that are available,” said ARM’s Aitken. “If you look at the 1801 standard (UPF 2.0), there are certain things that are legal in it and certain things that aren’t. This could well be the direction.”
That doesn’t mean having a menu of choices will make SoC development any easier, but at least it would limit the number of variables that engineers have to wrestle with every time they decide to integrate third-party IP or re-use their own IP. Still, there are a lot of changes to be made before even this step happens. As with all SoC engineering, nothing is guaranteed and not everything is predictable.