Power-Delivery Network Challenges Grow

Up-front planning becomes a must at 45nm and below to deal with escalating complexity and tradeoffs.

By Ann Steffora Mutschler
Physics is forcing convergence in the SoC power delivery network, whose job is to ensure that every device on a chip has a robust and stable voltage so it can meet its expected functionality and timing.

In the past, chip design, package design and board design were separate disciplines, guard-banded to ensure that all the parts worked well together. Today, given the extreme complexity of SoCs, working separately is no longer a viable strategy. When designing a power grid at sub-45nm nodes in particular, there are several key considerations:

1. The highly fragmented power grid.
One of the most challenging issues in SoCs is that they no longer have a single power domain, and the number of domains continues to grow at each process node.

“In the past, it was common to have one power domain for the entire chip, so it could have a massive amount of metal routing in every layer. Since it was continuous and homogeneous, it allowed for a lot of redistribution,” explained Aveek Sarkar, vice president of customer support and product engineering at Apache Design Solutions. “Now there are multiple domains all sharing the same piece of silicon and routed through the same package. There can be more than 100 power domains on a chip, and these tend to be highly fragmented. People really cannot overdesign this type of a grid and sign them off using older techniques like static analysis, since you do not have the metal resources to overdesign each of these domains. You need to analyze them with the right techniques (time and frequency domain) to optimize the resources without jeopardizing the voltage drop.”
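
To make the voltage-drop concern concrete, here is a minimal static IR-drop sketch on a tiny uniform resistor mesh. It is nothing like a production sign-off flow and ignores the dynamic (time- and frequency-domain) effects Sarkar mentions; the grid size, segment resistance, supply voltage and per-node currents are illustrative assumptions, not values from the article.

```python
# Minimal static IR-drop sketch: nodal analysis G*v = i on a small uniform
# power-grid mesh. Grid size, segment resistance, supply voltage and sink
# currents are illustrative assumptions, not sign-off values.
import numpy as np

N = 10            # nodes per side of the mesh (assumed)
R_SEG = 0.1       # ohms per grid segment (assumed)
VDD = 0.9         # supply voltage at the pad (assumed)
I_SINK = 0.001    # amps drawn by devices at each node (assumed)

n = N * N
G = np.zeros((n, n))          # conductance matrix
i = np.full(n, -I_SINK)       # current drawn out of every node

def node(r, c):
    return r * N + c

g = 1.0 / R_SEG
for r in range(N):
    for c in range(N):
        for dr, dc in ((0, 1), (1, 0)):     # stamp right and down neighbors
            rr, cc = r + dr, c + dc
            if rr < N and cc < N:
                a, b = node(r, c), node(rr, cc)
                G[a, a] += g
                G[b, b] += g
                G[a, b] -= g
                G[b, a] -= g

# Tie one corner node to the VDD pad through a very stiff connection.
pad = node(0, 0)
G[pad, pad] += 1e6
i[pad] += 1e6 * VDD

v = np.linalg.solve(G, i)
print(f"worst-case static IR drop: {(VDD - v.min()) * 1000:.1f} mV")
```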

2. The growing use of power gating.
Power gating has proved to be an effective method of reducing leakage, but solving one issue creates others.

“Power gating highly disrupts the power grid because all of a sudden, you have to pin the current all the way down to the diffusion of the substrate or the silicon and then bring it back again to the power grid,” Sarkar noted. “Essentially, this introduces choke points in your power grid network, and that has to be designed carefully because if you put too many of them, the whole purpose of having them in terms of power savings goes away. If you put too few of them, then you’ve made the voltage drop much worse. And if some of them fail or break and never get fabricated, you’re even worse off. On top of this, sub-40nm grids tend to be more resistive, so the overall grid with the power gates has to be carefully designed.”

3. Dynamic voltage and frequency scaling.
DVFS has become especially important at 40nm and below because voltage scaling reduces the supply voltage for those parts of the chip that don’t need to run as fast. For example, if you choose to run your phone in an extended-battery-life mode, the supply voltage that goes in is scaled down. But the threshold voltage doesn’t change along with this lowered supply voltage.

As a result, the noise margin of the devices shrinks considerably. To account for this, the design must be simulated across various operating and PVT conditions to make sure that the supply-voltage reduction from simultaneous switching and L·di/dt noise doesn’t affect the timing and functionality of the devices.
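
A rough back-of-the-envelope check of that reasoning is sketched below: the same resistive drop and L·di/dt droop eat a much larger share of the headroom above the threshold voltage once the supply is scaled down. Every number in the sketch is an illustrative assumption, not silicon data.

```python
# Rough noise-margin check under DVFS: the same supply noise (IR drop plus
# L*di/dt droop) leaves much less headroom above Vth at the scaled-down supply.
# Every number below is an illustrative assumption, not silicon data.

VTH = 0.35        # threshold voltage (V), assumed fixed across modes
R_PDN = 0.05      # effective PDN resistance (ohms), assumed
L_PDN = 0.5e-9    # effective PDN inductance (H), assumed
I_AVG = 0.8       # average current (A), assumed
DI = 0.4          # current step from simultaneous switching (A), assumed
DT = 1e-9         # rise time of that step (s), assumed

noise = R_PDN * I_AVG + L_PDN * DI / DT      # IR drop + L*di/dt droop

for vdd, mode in ((1.0, "nominal"), (0.8, "DVFS low-voltage")):
    margin = (vdd - noise) - VTH             # headroom left above threshold
    print(f"{mode:>17}: VDD={vdd:.2f} V, noise={noise*1e3:.0f} mV, "
          f"margin over Vth={margin*1e3:.0f} mV")
```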

“On the fabrication side we are putting more devices together on the same die. Hence the power density increases, so the overall temperature of the die increases, and the electromigration limits reduce considerably, increasing the likelihood that the wires will fail,” Sarkar said.
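
Black’s equation is the usual way to express the temperature dependence Sarkar is describing. The sketch below simply evaluates it for an assumed activation energy, exponent and current density to show how quickly lifetime falls with a modest temperature rise; the constants are generic textbook-style assumptions, not foundry data.

```python
# Black's equation for electromigration lifetime: MTTF = A * J^(-n) * exp(Ea/(k*T)).
# Activation energy, current-density exponent and J are generic assumptions,
# not foundry numbers; only the ratio between two temperatures matters here.
import math

K_BOLTZ = 8.617e-5    # Boltzmann constant, eV/K
EA = 0.9              # activation energy (eV), assumed
N_EXP = 2.0           # current-density exponent, assumed
J = 1.0e6             # current density (A/cm^2), assumed

def mttf(temp_c):
    t_k = temp_c + 273.15
    return J ** (-N_EXP) * math.exp(EA / (K_BOLTZ * t_k))   # prefactor A omitted

ratio = mttf(105.0) / mttf(85.0)
print(f"relative EM lifetime at 105C vs 85C: {ratio:.2f}x")   # roughly 0.2x
```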

“We assume that people can use a standard flow to deal with that,” said Qi Wang, technology group marketing director at Cadence. “The most challenging issue from the low-power design perspective is that people are getting more aggressive in applying low-power design techniques like multiple supply voltages and power shut-off, which actually makes this problem much worse.”

Traditionally, the power delivery network was nothing more than a bunch of wires. “Now you are including transistors as part of the power distribution network, and that transistor itself has IR drop; it is not just resistance but an active device,” Wang pointed out. The trick with power delivery under power gating lies in shutting a domain off and powering it back up. The response time must be fast, but it can’t be too fast. “If it’s too fast, you suddenly draw a big current within a short amount of time through the power delivery network. That will cause significant IR drop, and the chip may never be able to power up.”

Because of this, the power distribution network must be carefully designed so that it does not cause too large a rush current. A sudden peak current causes IR drop, and if that drop is too big, there is a chance the design cannot be powered up at all.

“The way to control the rush current is to add a lot of switches, but this causes another problem in that the switches are not free,” Wang explained. “Not only because of the silicon, but because of the power. That shut-off switch is the single biggest leaking element on your chip. On one side, you try to control the dynamic behavior (the rush current), so you put in a lot of switches. But if there are too many switches, it causes problems in the static state, where the switches themselves are leaking. So you really need to balance.”
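
A simple sweep illustrates the balance Wang describes: more switches in parallel lower the drop across the switch network when the domain is on, but each one adds off-state leakage. The device values and budgets below are hypothetical assumptions, chosen only to show that the acceptable window is bounded on both sides.

```python
# Back-of-the-envelope sweep of the power-switch count trade-off: parallel
# switches reduce the on-state drop, but each adds off-state leakage.
# All device values and budgets are hypothetical assumptions.

R_ON = 20.0          # on-resistance of one power switch (ohms), assumed
I_LEAK = 5e-9        # off-state leakage per switch (A), assumed
I_LOAD = 0.2         # current of the gated domain when active (A), assumed
IR_BUDGET = 0.010    # allowed drop across the switch network (V), assumed
LEAK_BUDGET = 5e-6   # allowed leakage when the domain is off (A), assumed

for n_sw in (100, 200, 400, 800, 1600):
    ir_drop = I_LOAD * R_ON / n_sw           # switches act in parallel
    leakage = n_sw * I_LEAK
    ok = ir_drop <= IR_BUDGET and leakage <= LEAK_BUDGET
    print(f"{n_sw:5d} switches: drop={ir_drop*1e3:5.1f} mV, "
          f"leakage={leakage*1e6:5.2f} uA -> {'OK' if ok else 'violates a budget'}")
```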

Board-level issues
Below the IC level, at the PCB, low-power initiatives have been driving trends, observed Steve McKinney, market development manager for analysis products in Mentor Graphics’ board division. “They propagate issues onto the PCB because when you look at older technology, where you had maybe a 3.3-volt or 1.8-volt part, that was the core power for the whole part. But with low power, you have multiple voltage cores in a single IC, and the I/O has different voltages than the core voltages. So now you have all these different power supplies that are required to deliver power to the ICs from the PCB. In the past, a designer could have a solid layer in the stackup that was 1.8 volts or 3.3 volts, but now that layer has to be split up into a bunch of different islands, and that’s causing issues for various reasons.”

As parts move to lower voltages, the current they draw often increases. The combination of the two means less tolerance for voltage drop and voltage ripple, which drives tighter design constraints to maintain reliable power. At the same time, the quality of the copper planes generally suffers. Instead of a solid plane with a lot of copper, the filled area is necked down into regions where the copper area is reduced, so current densities begin to go up. This could lead to thermal breakdown of the dielectric and ultimately complete failure of the board design, he explained.
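
As a rough illustration of that current-density point, the sketch below compares J = I/(w·t) for a wide plane region and a necked-down island. The current, widths, copper weight and the density ceiling are all assumed for illustration, not taken from any design rule.

```python
# Rough current-density check for a plane that necks down into an island:
# the same current through a narrower copper cross-section raises J = I/(w*t).
# Current, widths, copper weight and the density ceiling are assumptions.

I_RAIL = 10.0     # current delivered on the rail (A), assumed
T_CU = 0.035      # 1-oz copper thickness (mm)
J_LIMIT = 50.0    # rough allowable density for this design (A/mm^2), assumed

for width_mm, region in ((40.0, "solid plane region"), (3.0, "necked-down island")):
    j = I_RAIL / (width_mm * T_CU)           # A per mm^2 of copper cross-section
    verdict = "within" if j <= J_LIMIT else "exceeds"
    print(f"{region}: J = {j:6.1f} A/mm^2 ({verdict} the assumed {J_LIMIT:.0f} A/mm^2 limit)")
```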

The best ways to handle these issues should be driven by up-front design planning. One area of concern for power delivery is that as cores switch, they demand current from the power rails on the PCB. That switching behavior creates voltage ripple on the power pins, which ultimately can lead to I/O failures or the core not operating as intended. “Through proper power distribution design on the PCB, and especially in the decoupling design area, that is one of the additional areas of focus beyond just the DC analysis, where you have to understand whether to place the power and ground layers close together or whether they can be separate from each other,” McKinney said.
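
One common first-pass way to budget the decoupling McKinney mentions is a target impedance, Z_target = (VDD × allowed ripple) / transient current, plus a rough bulk-capacitance bound to ride out the regulator’s response time. The rail voltage, ripple allowance, current step and response time below are assumptions for illustration, not figures from the article.

```python
# First-pass PDN budgeting sketch for a PCB rail: a target impedance and a
# rough bulk-capacitance bound. Rail voltage, ripple allowance, current step
# and regulator response time are illustrative assumptions.

VDD = 1.0            # rail voltage (V), assumed
RIPPLE = 0.03        # allowed ripple as a fraction of VDD, assumed
I_STEP = 2.0         # worst-case transient current demanded by the core (A), assumed
DT_VRM = 5e-6        # assumed voltage-regulator response time (s)

z_target = VDD * RIPPLE / I_STEP             # impedance the PDN must stay under
c_bulk = I_STEP * DT_VRM / (VDD * RIPPLE)    # C >= I*dt/dV to ride out dt

print(f"target PDN impedance: {z_target*1e3:.0f} milliohms")
print(f"bulk capacitance to ride out {DT_VRM*1e6:.0f} us: {c_bulk*1e6:.0f} uF")
```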

If this sounds unbelievably complicated, it is. But there is a silver lining. “The good thing about it is that you have multiple things working against each other, so you can find the optimal point,” said Cadence’s Wang. “There is a point that is optimal, and once you get it right, you have a good way to estimate power before you even lay out the power distribution network.”


