Power Delivery Networks

Ensuring a robust PDN for low-power designs: Part one


By Preeti Gupta
It is no secret that power is now at the forefront of semiconductor design. Lowering power consumption drives product competitiveness and green technology, even more so in today’s mobile-driven market. But that same drive for lower power also increases the complexity of ensuring the power integrity of a system-on-chip (SoC).

The power delivery network (PDN) must now withstand the current demands of a multitude of low-power modes, as well as the transitions between them, and PDN design and verification must account for all of these conditions. An optimal yet robust design of the PDN on the chip, the package, and the board/system is crucial to ensuring that the chip will function at the desired frequency. Right-sizing the package is also important to secure profit margins. However, by the time reliable data is available for PDN and chip-package co-design, it is often too late to make changes without incurring a significant schedule impact.

Clearly there is a need to re-tool traditional methodologies for a more robust and realistic PDN that can handle the complexities of current designs. Traditional guesstimates for early power estimation and gate-level, simulation-based signoff methodologies are no longer viable. To minimize cost, ensure competitiveness, and mitigate the risk of voltage-induced silicon failures, PDN design must be based on accurate power numbers and worst-case current scenarios early in the design flow.

Early power estimation
Traditionally, design teams have relied on spreadsheets to estimate power consumption. Such power estimates then guide the PDN, including the on-chip power grid (pitch, pads, floorplan) as well as the package (metal layers, decap). The spreadsheets are based either on a previous version of the design or on coarse calculations from an estimated number of devices. However, this approach is increasingly error-prone and limited in its modeling of the effects of shrinking geometries and complex low-power techniques.
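To make the spreadsheet approach concrete, the sketch below shows the kind of per-block calculation such a spreadsheet typically encodes: dynamic power from the standard alpha·C·Vdd²·f relation plus a static leakage term, summed over blocks. All block names, gate counts, and electrical values here are hypothetical, chosen only for illustration; they are not from the article.

```python
# Spreadsheet-style early power estimate (illustrative sketch).
# Every number below is a hypothetical placeholder, not real design data.

def block_power(gate_count, cap_per_gate_f, vdd, freq_hz, activity, leak_per_gate_w):
    """Dynamic power P = activity * C_total * Vdd^2 * f, plus static leakage."""
    c_total = gate_count * cap_per_gate_f            # total switched capacitance (F)
    dynamic = activity * c_total * vdd ** 2 * freq_hz
    leakage = gate_count * leak_per_gate_w
    return dynamic + leakage                          # watts

# Hypothetical per-block rows, as a spreadsheet might list them:
blocks = {
    "cpu": dict(gate_count=2_000_000, cap_per_gate_f=1.5e-15, vdd=0.9,
                freq_hz=1.2e9, activity=0.15, leak_per_gate_w=5e-9),
    "gpu": dict(gate_count=5_000_000, cap_per_gate_f=1.5e-15, vdd=0.9,
                freq_hz=800e6, activity=0.10, leak_per_gate_w=5e-9),
}

total_w = sum(block_power(**b) for b in blocks.values())
print(f"estimated chip power: {total_w:.2f} W")
```

The limitation the article describes is visible here: every input (activity factor, capacitance per gate, leakage) is a guess, and nothing in the model captures mode transitions or the interaction of low-power techniques.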

Worst-case current scenarios
Typical PDN design and power estimates are also limited in their accounting for the worst current demands of the design across all modes of operation. With the advent of advanced low-power techniques, the problem is further exacerbated. A sudden current spike can occur when clock gating is released and a block wakes up; that spike can couple with the package inductance, causing a significant voltage drop and resulting in a timing failure. So it is not only a high current demand but also a large change in current that must be accounted for in PDN verification.
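The coupling of a current spike with package inductance follows the first-order relation V_droop ≈ L · di/dt: the droop depends on how fast the current changes, not just its magnitude. A minimal sketch, with hypothetical inductance and current-ramp values chosen only for illustration:

```python
# First-order package-inductance droop: V_droop ~ L * di/dt.
# The L, delta-I, and delta-t values are hypothetical examples.

def droop_v(inductance_h, delta_i_a, delta_t_s):
    """Voltage droop caused by a current step delta_i over time delta_t."""
    return inductance_h * delta_i_a / delta_t_s

# A block waking from clock gating: current ramps by 2 A in 5 ns
# through an effective package inductance of 50 pH.
v = droop_v(50e-12, 2.0, 5e-9)
print(f"droop: {v * 1e3:.0f} mV")   # 20 mV for this example
```

Note how the same 2 A demand spread over 50 ns would produce only a tenth of the droop, which is why PDN verification must hunt for the fastest di/dt events, not merely the peak-current ones.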

However, traditional gate-level simulation-based and vectorless methodologies are not adequate for identifying these issues. Gate-level simulations are either available too late (often after design tape-out), have limited coverage, or are no longer run because of the effort required to bring them up. Vectorless methodologies provide additional coverage, but are limited in their ability to identify large current-change scenarios. The following figure for a typical processor illustrates the complexity of the various power modes and power-mode transitions that PDN design and verification must consider.



Traditional tools and methodologies for PDN design must be re-examined and augmented in order to help ensure a robust PDN, especially in the wake of advanced low-power designs. Realistic power numbers early in the design flow are required to enable early power grid prototyping and chip-package co-design, before the chip layout is available. And switching conditions when the chip draws the worst current must be identified and analyzed for guarding against voltage-induced silicon failures.

We will discuss emerging tools and methodologies that address these requirements in part two of this blog next month.

–Preeti Gupta is a senior technical marketing manager at Apache Design Inc. (a subsidiary of ANSYS).
