New Power Standards Ahead

But big gaps remain in the flow: standardized power modeling is still missing, along with a systematic way for hardware and software teams to communicate.


By Ed Sperling
Standards groups are beginning to look at power and other physical effects much more seriously in the wake of the dueling power formats—UPF and CPF—that have caused angst across the design industry.

To put it in perspective, when CPF and UPF were first introduced, power was something of an afterthought in design. At 65nm it ceased to be something that could be dealt with later in the design process, and at 28nm it has become an essential part of the architecture. But as battery life, mobility, and energy costs even for plugged-in devices become overriding concerns, power now needs to be considered at the full system level, which could mean everything from a rack of servers to an automobile.

Much of this is being driven from the chip level, and in the software that manages chips and the interactions between chips. There are at least a half dozen new standards efforts under way or on the drawing board. Most heavily leverage the expertise of chipmakers, based on where they have encountered or expect to encounter pain in designs (most notably in stacked die or in planar SoCs below 20nm), or of tools vendors that have gained expertise in a specific area.

Si2 currently has one standard in legal review for system-level power modeling. The standard is called “atomic” power modeling, based on the assumption that the model cannot be broken down into smaller pieces, although it can be used at various levels of abstraction.
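
To make the concept concrete, here is a minimal sketch of what an atomic power model might look like as a data structure: a block-level model characterized by a handful of power states that an integrator can query at different abstraction levels but never decompose further. The class and field names are hypothetical, chosen only for illustration; they are not the Si2 specification.

```python
# A minimal, hypothetical sketch of an "atomic" power model -- the class and
# field names are illustrative assumptions, not the Si2 specification.
from dataclasses import dataclass, field

@dataclass
class PowerState:
    name: str                   # e.g. "active", "idle", "sleep"
    leakage_mw: float           # static power drawn in this state
    dynamic_mw_per_mhz: float   # activity-dependent power coefficient

@dataclass
class AtomicPowerModel:
    """An indivisible power model: it can be queried at different levels of
    abstraction, but it cannot be decomposed into smaller models."""
    block_name: str
    states: dict = field(default_factory=dict)   # state name -> PowerState

    def power_mw(self, state: str, freq_mhz: float) -> float:
        s = self.states[state]
        return s.leakage_mw + s.dynamic_mw_per_mhz * freq_mhz

# Example use by an integrator who only sees the model, not its internals.
cpu = AtomicPowerModel("cpu_cluster",
                       {"active": PowerState("active", 12.0, 0.45),
                        "sleep": PowerState("sleep", 0.8, 0.0)})
print(cpu.power_mw("active", 1000.0))   # 12.0 + 0.45 * 1000 = 462.0 mW
```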

Also in the works is a standard for co-design, which is one of the most difficult challenges facing design today. While hardware engineers are well versed in how to build an energy-efficient chip, that engineering effort can be wasted if the software running on an SoC isn’t energy-efficient, as well.

“The first step is to get there with the architectural ESL level,” said Steve Schulz, president and CEO of Si2. “Then, we will look at how the software runs and develop a bridge. You will never get the software community to adopt the hardware approach to design. That community is 20 to 30 times larger than hardware engineers and they have their own tool flows. We have to think about a minimally intrusive solution. We’ve called it a bridge to the software world, and if it’s not intrusive then the software teams will use it. Most of them will never understand concurrency and how to get to a GDS II stream, but there are characteristics that are reasonable proxies of the details. You don’t simulate all the code, but you do generate enough discrete choices so everyone can get on the right track for power.”

A first step in that direction is finding data objects that can be passed back and forth between the software and hardware teams. From there a power model will need to be created across both. The power-flow group within Si2 has been reactivated to develop a source for the power model. “The focus this year will be hardware,” said Schulz. “In 2013 we will turn our attention to understanding the data objects stored.”
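
As a rough illustration of what such a data object could contain, the sketch below shows a small set of per-operation power proxies a hardware team might hand to a software team, plus a coarse scenario-level energy estimate built from them, echoing Schulz's point about reasonable proxies rather than full simulation. All names and figures are assumptions, not taken from any Si2 draft.

```python
# Hypothetical sketch of a "data object" a hardware team could hand to a
# software team: per-operation power proxies instead of gate-level detail.
# The field names and numbers are illustrative assumptions, not an Si2 draft.
import json

hw_power_proxies = {
    "cpu_active_mw": 250.0,      # average power while the CPU is busy
    "cpu_idle_mw": 15.0,         # average power while the CPU is idle
    "dram_access_nj": 2.1,       # energy per DRAM access, in nanojoules
}

def scenario_energy_mj(duration_s: float, cpu_duty: float, dram_accesses: int) -> float:
    """Coarse scenario-level energy estimate in millijoules."""
    cpu_mj = (hw_power_proxies["cpu_active_mw"] * cpu_duty +
              hw_power_proxies["cpu_idle_mw"] * (1.0 - cpu_duty)) * duration_s
    dram_mj = hw_power_proxies["dram_access_nj"] * dram_accesses * 1e-6  # nJ -> mJ
    return cpu_mj + dram_mj

# The same object can be serialized and exchanged between the two flows.
print(json.dumps(hw_power_proxies, indent=2))
print(scenario_energy_mj(duration_s=10.0, cpu_duty=0.3, dram_accesses=1_000_000))
```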

That puts the likely adoption of a co-design framework for power in the 2015 time frame—roughly at the 14nm process node and at a time when 2.5D stacking is expected to be mainstream and 3D stacking will be more commonplace.

Stacking effects
“There are two new requirements for design,” said Andrew Yang, president of Apache Design. “The first is a 3D IC flow. The second is an RTL-to-gate power methodology.”

Included in the 3D requirements is the need for multi-die thermal and stress analysis. Yang said the key is the amount of current a design can sustain without failure over time, and that this gets worse at advanced nodes, and sometimes in stacked configurations, because the current-handling capability of wires is decreasing, power density is increasing, and electromigration is worsening.

Figure: 3D IC thermal stress analysis. The memory die is impacted by the power distribution of the logic die. Source: Apache Design.

“This can be a safety issue,” he said. “You need to make sure the metal topology is handled correctly. Electromigration is affected by heat. The hotter it gets, the less current a metal wire can sustain. The electromigration rules are increasing, which is why GlobalFoundries, Intel and TSMC are all coming up with complex electromigration rules.”
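
The standard way this relationship is expressed is Black's equation for electromigration lifetime, shown below: mean time to failure falls off sharply as temperature rises, so the current density a wire can carry for a given lifetime target shrinks as it heats up.

```latex
% Black's equation for electromigration mean time to failure:
%   J   = current density,    n  = empirical exponent (typically 1..2),
%   E_a = activation energy,  k  = Boltzmann's constant,  T = wire temperature
\mathrm{MTTF} = A \, J^{-n} \exp\!\left(\frac{E_a}{kT}\right)
```

Because temperature sits in the exponent, even a modest rise in wire temperature cuts the expected lifetime sharply, which in turn lowers the current density a wire can be allowed to carry for a given lifetime target.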

Front to back, back to front
Being able to get a chip out the door at all is a challenge, which is why more standards are being dictated by the foundries these days. In addition to process variation, continually shrinking geometries are making it harder to obtain adequate yields as quickly as in the past. That has led to more rules for place and route, test, and IP, with power layered across all of them.

“We’re seeing it in the available sizes, speeds, memory and logic cell sizes,” said Chris Rowen, CTO at Tensilica. “That’s what we target—area, power and process compatibilities. Whether that’s stacked or conventional die is affected only subtly. But with die stacking you will see significantly higher bandwidth and less latency, which will have an effect on modeling of the system. It’s not a qualitative change, but it is a quantitative change. It won’t change how one DSP communicates with another, but it will change how DSPs communicate with memory.”

How much of that will be standards promoted by standards bodies versus de facto standards from the largest foundries remains unknown. Also missing are good open standards for on-chip debug and trace, said Rowen.

ESL standards
One of the most glaring holes in all of this is at the ESL level, where standards for power models are non-existent. While this isn’t a big problem in a single vertically integrated company, it’s a huge problem in a disaggregated supply chain where various companies work on designs—something that will become even more pronounced in stacked die where subsystems at different process geometries need to be integrated with other subsystems.

“What’s missing is something that allows companies to exchange power models, especially for IP-based designs,” said Ghislain Kaiser, CEO of Docea Power. “In an ideal flow you would be able to take the IP from the IP suppliers and put together a power model and assess the power impact on the underlying hardware. But you also need to have interoperability between suppliers and customers that goes beyond the semiconductor level. It has to be optimized at each level—the SoC, the chip set, the PCB and above. So there won’t be only one number.”
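
A minimal sketch of that multi-level roll-up, with purely illustrative numbers, makes the point: each level of integration adds its own contributions and reports its own budget, so no single figure describes the design.

```python
# Illustrative roll-up of power budgets across levels of integration.
# Every number here is an assumption for the sake of the example,
# not data from Docea Power or any supplier.
ip_power_mw = {"cpu_cluster": 600.0, "gpu": 350.0, "modem": 180.0, "misc": 70.0}

soc_mw = sum(ip_power_mw.values())      # SoC: sum of supplier IP models
chipset_mw = soc_mw + 800.0             # add companion chips (PMIC, RF, ...)
pcb_mw = chipset_mw + 1500.0            # add board-level loads (DRAM, I/O, regulators)

# "There won't be only one number" -- each level reports its own budget.
for level, mw in (("SoC", soc_mw), ("Chip set", chipset_mw), ("PCB", pcb_mw)):
    print(f"{level:9s}: {mw / 1000:.2f} W")
```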

The accuracy of those power models also will shift throughout the design. At the beginning a model may be only 40% accurate, but at the end it may need to be accurate to plus or minus 5%, Kaiser said.

Other pieces are missing, as well, said Kiran Vittal, senior director of product marketing at Atrenta. “Right now, when a designer uses memory they don’t realize the code they are writing is not optimized for power. When you read memory you get a redundant read. The controller code isn’t optimized for memory. And all of that has to be networked, because you may have as many as 2,000 memories in a design. If you do it right you can save about 20% of the memories and the power needed to run them.”
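
The arithmetic behind that claim is easy to sketch. With assumed figures for per-memory read power and the share of reads that are redundant (both illustrative, not Atrenta data), gating the redundant accesses across a couple of thousand memories yields savings on the order Vittal describes.

```python
# Back-of-the-envelope estimate of the savings from gating redundant memory reads.
# All figures below are illustrative assumptions, not Atrenta data.
NUM_MEMORIES = 2000               # memories in the design
READ_POWER_UW = 50.0              # assumed average read power per memory (uW)
REDUNDANT_READ_FRACTION = 0.20    # assumed share of reads that are redundant

baseline_uw = NUM_MEMORIES * READ_POWER_UW
savings_uw = baseline_uw * REDUNDANT_READ_FRACTION

print(f"Baseline memory read power: {baseline_uw / 1000:.1f} mW")
print(f"Saved by gating redundant reads: {savings_uw / 1000:.1f} mW "
      f"(~{REDUNDANT_READ_FRACTION:.0%})")
```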

To show just how bad this can get, consider a large systems house that was designing a chip and was required to give an early indication of its power budget to the OEM. The OEM used that estimate to calculate its own power budget and came up with a spreadsheet that represented the total design. The problem was that the spreadsheet ultimately was off by 100% in its power estimate, which in turn caused problems with the final device and greatly increased the amount of time it took to successfully bring a product to market.

“A lot of the ESL tools today know performance and area, but they don’t have a clue about power,” said Vittal. “This is fertile ground for innovation.”


