Power Model Complexity Grows

Multifaceted issues now reach well beyond a spreadsheet’s capabilities, but methodology and tools remain inconsistent.


By Ed Sperling
The number of factors required for an effective power model has far surpassed the capabilities of even the most detailed spreadsheet at 45nm and beyond. It has now entered the realm of complex databases and architectural tradeoffs, and those tradeoffs will become even more complex as 3D stacking takes root over the next 24 months.

The idea of modeling power is hardly new, but you wouldn’t know it from a side-by-side comparison of today’s approaches with the old methods. While there is still a need to understand worst-case scenarios to protect signal integrity, not to mention the other components on a chip, far more needs to be considered in power modeling at advanced nodes than in the past.

“There are two issues that need to be solved,” said Ran Avinun, marketing group director for system design and verification at Cadence. “One is how to do this. The second is who owns the format. The methodology hasn’t been solved yet. When you tell the customer that we’ll compare our numbers with your back-end flow and libraries, that’s not good enough. It’s going to give me the data about the SoC or the ASIC, but that’s not enough. When customers look at power it’s what they measure in the lab. When they test the device it’s in a real environment with real software and the package. Today they don’t have a good way to test. It’s done with software and vectors, but it’s not really reflecting what the user will get.”

The second issue is understanding what parts of the system actually consume power. “Customers don’t know how to partition the power consumption of the ASIC vs. the overall system, so what they measure is the overall power. They don’t know how to partition those components and there is no good way to model that. We’re looking at the ASIC and die level, but they need to model the whole system,” Avinun said.

Moreover, for the system-level numbers to be used in a meaningful way at the architectural level, they have to be relatively accurate. The shrinkage of components has made everything more susceptible to the effects of power, mechanical and thermal stress, electromagnetic interference, electromigration and noise (see Fig. 1). Modeling power is now required. But even the simplest ideas, such as power supplies, are no longer simple.

Fig. 1. Source: Apache Design Solutions

“In the past we had two power supplies, one for the digital and one for the analog,” said Cornelia Golovanov, an EMI expert at LSI. “Now we have three or four analog power supplies in a small area, which makes the supplies very inductive. These are not well analyzed in the context of the whole system.”

Like anything else at advanced nodes, without adequate planning power supplies can be corrupted. Even maximum power, which used to be calculated in a worst-case scenario fashion once RTL was already synthesized, has become incredibly complex with multiple power islands, multiple modes, multiple cores and multiple voltages.

“With a power model ideally you want to cover all scenarios and all vectors,” Golovanov said. “But some of these have really long simulation times. It can take weeks at the end of the design cycle, and then you have to factor in the chip in the package on the PCB. There is no time for that.”

What’s in the model?
That’s where models fit in. Much attention has been paid to the different library approaches for defining power intent with the Unified Power Format (UPF) and the Common Power Format (CPF). The power model is a level above that, defining the power delivery network, signal integrity analysis, electromagnetic interference and compatibility (EMI/EMC), and the thermal effects of power, both dynamic switching power and static leakage.
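
As a rough illustration of what such a chip-level model might carry, here is a minimal sketch; the structure and every field name below are hypothetical, not any vendor’s or standard’s format:

```python
from dataclasses import dataclass, field

@dataclass
class ModePowerProfile:
    """Per-operating-mode data a chip power model might expose (illustrative only)."""
    name: str                 # e.g. "active", "sleep", "scan"
    supply_voltage_v: float   # nominal rail voltage in this mode
    dynamic_current_a: float  # average switching current
    leakage_current_a: float  # static (leakage) current at reference temperature
    peak_current_a: float     # worst-case transient demand, for PDN/di-dt checks

@dataclass
class ChipPowerModel:
    """Die-level model handed to package and PCB teams (hypothetical structure)."""
    rails: list                                       # power-delivery-network rails
    modes: list = field(default_factory=list)         # ModePowerProfile entries
    on_die_decap_f: float = 0.0                       # total on-die decoupling capacitance
    thermal_map: dict = field(default_factory=dict)   # block name -> power density (W/mm^2)

# Usage: a system-level flow would sum the relevant mode across dies and blocks.
model = ChipPowerModel(rails=["VDD_core", "VDD_io"])
model.modes.append(ModePowerProfile("active", 0.9, 1.8, 0.15, 3.2))
active_w = sum(m.supply_voltage_v * (m.dynamic_current_a + m.leakage_current_a)
               for m in model.modes if m.name == "active")
print(f"Active power estimate: {active_w:.2f} W")
```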

“In the past power models were simplistic in nature and deemed sufficient for the needs at that time,” said Aveek Sarkar, vice president of product engineering and support at Apache Design Solutions. “You could provide a single current and single capacitance and the result was your best guesswork. That all began to change in 2006 when we moved to 65nm. The package design could no longer be off the shelf. It’s now a competitive differentiator for companies and it can determine the price and performance of a system. Hence, an accurate model that represents the actual activity and parasitic profile of the chip, and on which you can base package and PCB decisions, is important.”
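
The single-number estimate Sarkar describes boils down to the textbook first-order relation below, shown here only for reference; α is the switching activity factor, C the switched capacitance, V the supply voltage, f the clock frequency and I_leak the static leakage current:

```latex
P \;\approx\; \underbrace{\alpha\, C\, V^{2} f}_{\text{dynamic switching power}}
\;+\; \underbrace{V\, I_{\text{leak}}}_{\text{static leakage power}}
```

Collapsing a chip’s many operating modes into one averaged capacitance and one averaged current is exactly the kind of guesswork that stopped being good enough once the package became a competitive differentiator.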

Packaging has other issues, though. While a chip consumes current, the package and the PCB can act as an antenna for chip-generated noise, which results in EMI. That makes it necessary to extract an S-parameter (scattering parameter) model for the package. Once that model is constructed, a full system-level AC, DC and time-domain analysis can proceed using the power models of the chip, said Sarkar.
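
A rough sketch of the first step of that flow, checking the package’s impedance profile from an extracted S-parameter file, might look like the following; it assumes the open-source scikit-rf package, and the file name and the 25 mΩ target are invented for illustration:

```python
import numpy as np
import skrf as rf  # open-source RF/S-parameter toolkit

# Load the extracted S-parameter model of the package power-delivery path
# (a Touchstone file produced by a field solver; the name is hypothetical).
pkg = rf.Network("package_pdn.s2p")

# Convert to impedance parameters and look at the self-impedance seen by the die.
z11 = np.abs(pkg.z[:, 0, 0])           # |Z11| versus frequency
target_ohms = 0.025                    # example PDN target impedance (25 mOhm)

violations = pkg.f[z11 > target_ohms]  # frequencies where the target is exceeded
if violations.size:
    print(f"PDN impedance exceeds {target_ohms * 1000:.0f} mOhm "
          f"from {violations.min() / 1e6:.1f} MHz to {violations.max() / 1e6:.1f} MHz")
else:
    print("Package PDN meets the target impedance across the extracted band")
```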

“Right now 40nm is mainstream,” he said. “At 22nm and 28nm electromigration gets very complicated. Since electromigration and leakage current can change very drastically with a temperature increase, we have to model the thermal profile of the chip–especially for a stacked die configuration.”
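
The temperature sensitivity Sarkar points to is usually captured, to first order, by Black’s equation for electromigration lifetime, shown below in its textbook form; A and n are fitted constants, J the current density, E_a the activation energy, k Boltzmann’s constant and T the absolute temperature:

```latex
\mathrm{MTTF} \;=\; A\, J^{-n} \exp\!\left(\frac{E_a}{kT}\right)
```

A hotter die shrinks the exponential term and therefore the electromigration lifetime, while subthreshold leakage moves the other way, growing roughly exponentially as the threshold voltage drops with temperature. That is why a thermal profile of the die, and of its neighbors in a stack, feeds back into both analyses.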

But there’s also a point where models can become useless. Looking at everything from a very high abstraction level is excellent for layout and functionality, but it can insert some very large errors into power models—sometimes as high as 300%, according to Cadence’s Avinun.

And there needs to be more consistency among models to make them useful. Frank Schirrmeister, director of product marketing for system level solutions at Synopsys, said the standards don’t yet exist because this is all so new.

“In TSMC’s Reference Flow 11, they characterize their libraries for low power and then make this all accessible for TLM 2.0 modeling,” Schirrmeister said. “Then you should be able to add up meaningful power numbers, even at the system level. Today this is all in the early stages. The different vendors have different formats. At some point it needs standardization.”
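
What “adding up meaningful power numbers at the system level” could look like is straightforward bookkeeping once each block has characterized per-state figures. The sketch below uses plain Python rather than SystemC/TLM 2.0, and the block names and numbers are invented; a real flow would pull them from characterized libraries:

```python
# Per-block, per-state power figures (watts), as library characterization might provide.
# All values here are invented for illustration.
block_power_w = {
    "cpu":   {"active": 0.80, "idle": 0.12, "off": 0.00},
    "gpu":   {"active": 1.50, "idle": 0.20, "off": 0.00},
    "modem": {"active": 0.60, "idle": 0.05, "off": 0.00},
}

def scenario_energy_j(timeline):
    """Sum energy over a use-case timeline: a list of (duration_s, {block: state}) entries."""
    total = 0.0
    for duration_s, states in timeline:
        for block, state in states.items():
            total += block_power_w[block][state] * duration_s
    return total

# A hypothetical "video playback" scenario: GPU busy, CPU mostly idle, modem idle.
video_playback = [
    (10.0, {"cpu": "active", "gpu": "active", "modem": "idle"}),
    (50.0, {"cpu": "idle",   "gpu": "active", "modem": "idle"}),
]
energy = scenario_energy_j(video_playback)
print(f"Scenario energy: {energy:.1f} J, average power: {energy / 60.0:.2f} W")
```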

Standards needed
All of the major foundries are working on these kinds of models. In addition, Apache is working with the GSA on models for power. Those will become particularly useful in stacked-die configurations, where thermal issues are not always intuitive (see Fig. 2).

Fig. 2. Source: Apache Design Solutions

None of this will get solved quickly. For one thing, power models generated by memory makers may be different from those generated by foundries and IP vendors, which is where standards will become important. But the first step is creating a dialog and generating tools that can provide visibility inside an SoC, and so far, at least, that seems to be happening.


