Power Impacts On Advanced Node IP

The move to advanced process nodes is introducing significant manufacturing-related variation and, at the same time, affecting power throughout the design.

By Ann Steffora Mutschler

With the move to the 28nm and 20nm process nodes, SoC engineering teams are seeing a significant amount of manufacturing-induced variation.

To reflect how a design element will actually print on the wafer, foundries offer many libraries with multiple corners for different voltages, timing and temperatures, among other variables.

“At 28nm what we are seeing is a lot of difference on what kind of libraries engineering teams are using downstream during synthesis. A lot depends on the Vt mix. For example, different libraries have different leakage power characteristics and so on, so at the back end they decide that for a certain portion of the chip, say 70%, they’ll use this particular Vt mix, and for the remaining portion use some other library with a different Vt mix. That information is now required at the RTL also,” observed Kiran Vittal, senior director of product marketing at Atrenta.
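The Vt-mix decision Vittal describes can be sketched as simple arithmetic: each Vt flavor trades leakage for speed, so the total leakage of a block depends on how the cells are split across flavors. The following is an illustrative back-of-envelope model only; the per-cell leakage numbers, cell counts, and the 70/30 split are invented for the sketch, not taken from any foundry library.

```python
# Back-of-envelope leakage estimate for a Vt mix.
# All numbers here are invented for illustration, not foundry data.

LEAKAGE_NW_PER_CELL = {"hvt": 1.0, "svt": 5.0, "lvt": 25.0}  # assumed values

def mix_leakage_nw(cell_count, mix):
    """mix maps Vt flavor -> fraction of cells; fractions must sum to 1."""
    assert abs(sum(mix.values()) - 1.0) < 1e-9
    return sum(cell_count * frac * LEAKAGE_NW_PER_CELL[vt]
               for vt, frac in mix.items())

# 70% of the chip on an HVT-heavy low-leakage mix, 30% on a faster
# LVT-heavy mix for the timing-critical portion:
total_nw = (mix_leakage_nw(700_000, {"hvt": 0.8, "svt": 0.2}) +
            mix_leakage_nw(300_000, {"svt": 0.5, "lvt": 0.5}))
```

Even in this toy model, the minority LVT-heavy portion dominates total leakage, which is why the back-end Vt-mix choice needs to be visible at RTL.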

Manufacturing variations also affect IP power, which is particularly problematic because IP can have many configurations. With ARM IP, for example, power can change depending on the mode of operation. Testbenches are created to exercise the highest activity in the chip (sometimes called 'BBQ mode'), essentially burning the chip in order to determine the boundaries of its power.

Gene Matter, senior applications manager at DOCEA Power, agreed that transistor libraries are becoming very tightly targeted to the performance level they need. “You have high-speed transistors; you have low-speed transistors, high Vt and low Vt transistors. There are also low leakage libraries. You can get high speed and good density. You can get low power and good density. But you can’t get high speed and low power at the same time. The fabs build transistor libraries that are targeted to the IP that you want to build.”

What this means is that as you deal with process and fab vendors, for a given wafer you can pick a set of transistor libraries, but if you’re trying to mix high-speed, low-power logic with low-speed, high-density logic, you have to make some compromises in the transistor libraries you’re going to fabricate on. “If you want the really, really high-performance stuff you’re going to have to put in more power mitigation techniques. If you want the lower-power, high-density stuff, you’re going to have to move to slow and wide interfaces as well as slow and wide logic, which are going to increase your cost and area,” he continued.

And these are tradeoffs engineers are making today. For very large SoCs, which mix many kinds of IP, they must carefully balance the operating parameters of all the IP blocks so they map to the right process technology.

“They have to partner directly with the process technology and characterization libraries to get the new transistor libraries built that meet their performance characteristics. And then to mitigate these techniques, they either need to move to slower and wide interfaces or lower-power applications or more fine-grained power gating and clock gating and leakage mitigation techniques when they’re using this high speed stuff,” Matter explained.

Also, the IP needs to be mapped onto the new process libraries and transistors, which come from the fabs and foundries. The IP also needs to be matched to the workload and application, that is, the way the application drives activity through the IP block. “If you keep doing things the old way you aren’t taking the best advantage of this new marriage between rearchitecting the application based on what’s the most energy-efficient computing through this IP block and mapping the IP block relative to the affordability and density and cost for energy efficiency of the transistor library used to build that IP,” he asserted.

Navraj Nandra, senior director of product marketing for analog and mixed signal IP in the solutions group at Synopsys, noted that in the past, when engineering teams designed SoCs, they would try to cut power consumption by reducing the chip's overall voltage level. “If you reduced the voltage it would eventually reduce the power consumption. It got to a point where the Vdd [supply voltage] was getting to below the voltage where things could actually work. We had customers asking us for standard cell libraries that could operate down to 0.6 volts, for example, which is really close to them not working. That was the thinking a few years ago.”
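The appeal of pushing Vdd down follows from the first-order CMOS switching-power relation, roughly P_dyn ≈ α·C·V²·f. A minimal sketch, with all coefficients invented for illustration:

```python
# First-order dynamic (switching) power: P_dyn ≈ alpha * C * V^2 * f.
# The activity factor, capacitance, and frequency below are illustrative
# assumptions, not values from any real library.

def dynamic_power(alpha, cap_farads, vdd_volts, freq_hz):
    """Classic first-order CMOS switching-power estimate."""
    return alpha * cap_farads * vdd_volts ** 2 * freq_hz

nominal = dynamic_power(alpha=0.2, cap_farads=1e-9, vdd_volts=0.9, freq_hz=1e9)
scaled = dynamic_power(alpha=0.2, cap_farads=1e-9, vdd_volts=0.6, freq_hz=1e9)

# Dropping Vdd from 0.9 V to 0.6 V scales dynamic power by (0.6/0.9)^2,
# roughly a 2.25x reduction -- which is why architects kept lowering the
# supply until cells stopped working reliably near 0.6 V.
ratio = scaled / nominal
```

The quadratic dependence on voltage is why supply reduction was the first lever, and why its exhaustion pushed architects toward voltage domains instead.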

At present, SoC architects appear to have abandoned that idea and instead are embracing the concept of different voltage domains: when a part of the chip is needed, it runs a burst of activity at the normal voltage level, then switches off into sleep mode, and powers back on when that function has work to do, he explained. “So you have a burst of activity, a quiet period, then a burst of activity. The power saving is achieved by being able to switch these various components on and off when they’re not needed. The challenge on that side of things is that you need to have the blocks on, or turning on, just when you need them. The tradeoff is really latency against the amount of power you can save.”
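The burst/sleep pattern Nandra describes can be captured in a first-order average-power model: the block pays active power during bursts, a small sleep power otherwise, plus a wake-up energy cost each cycle (the latency side of the tradeoff). All names and numbers below are illustrative assumptions, not measurements.

```python
# First-order model of burst/sleep power gating.
# All power, energy, and timing numbers are invented for illustration.

def average_power(p_active_w, p_sleep_w, duty_cycle,
                  wake_energy_j=0.0, period_s=1.0):
    """Average power of a block that bursts, sleeps, and pays a
    wake-up energy cost once per on/off period."""
    burst = duty_cycle * p_active_w
    sleep = (1.0 - duty_cycle) * p_sleep_w
    overhead = wake_energy_j / period_s  # amortized wake-up cost
    return burst + sleep + overhead

# Always-on block vs. one that is active only 20% of the time:
always_on = average_power(p_active_w=0.5, p_sleep_w=0.5, duty_cycle=1.0)
gated = average_power(p_active_w=0.5, p_sleep_w=0.01, duty_cycle=0.2,
                      wake_energy_j=2e-4, period_s=0.01)
```

Note how the wake-up overhead term eats into the savings: switching on and off more often (shorter `period_s`) buys lower latency but amortizes the wake-up energy over less time, which is exactly the latency-versus-power tradeoff in the quote.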

Multiple paths
To solve the variation problem from the power angle, there are multiple routes one can take, according to Qi Wang, technical marketing group director for low power and mixed-signal at Cadence. “First of all, you want to build up more accurate models. Once you build up accurate models, you can have a better estimation and can tighten up the margin. But that cannot change too much from a design methodology point of view. You still take the margin into the design considerations and then tighten it up. There is a limit to what you can do.”

Variation also can be addressed with design techniques, he said. “We already see some techniques at the 45nm node that basically use on-chip sensors to control the circuit behavior, either using body biasing or using some kind of dynamic voltage and frequency scaling. Along that line, there are some companies that provide on-chip sensor IP to provide feedback about the variations of the chip at runtime to control the circuit.”
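The sensor-feedback loop Wang describes amounts to a small runtime controller: read a sensor, then step the voltage/frequency operating point up or down. A toy sketch of one control step follows; the operating points, thresholds, and sensor readings are invented for illustration, not from any sensor IP vendor.

```python
# Toy closed-loop DVFS step driven by on-chip sensor readings.
# Operating points, thresholds, and sensor values are illustrative only.

OPERATING_POINTS = [  # (vdd_volts, freq_hz), slowest to fastest
    (0.7, 0.5e9),
    (0.8, 1.0e9),
    (0.9, 1.5e9),
]

def next_operating_point(index, temp_c, temp_limit_c=85.0, slack_ns=0.0):
    """Step down if the thermal sensor trips; step up if a timing
    sensor reports comfortable slack. Returns the new index."""
    if temp_c > temp_limit_c and index > 0:
        return index - 1           # too hot: back off voltage/frequency
    if slack_ns > 0.1 and index < len(OPERATING_POINTS) - 1:
        return index + 1           # plenty of slack: speed up
    return index
```

A real controller would filter noisy sensor data and respect lock/settling times when changing the supply or PLL, but the structure is the same: variation is measured on-die and compensated at runtime rather than margined for at design time.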

Because the process is causing these issues, it could make the most sense to address variability from that angle. However, Wang concluded, “From the very beginning, the fight for performance, power and area includes all the angles—process, IPs and the EDA tools. The reality is that it’s become even more true now, and in the future people will, as their cost budget allows, squeeze out the last bit of power. In most applications right now, power has become a technical metric that affects competitiveness. In wireless, there’s the talk time. For automotive, it’s about reliability. It’s not just one angle. It has to be multiple angles.”
