New Approaches To Power Decoupling

Strategic placement of decoupling capacitors and regulators is critical to keeping power stable, but no one approach is sufficient.

Decoupling capacitors have long been an important aspect of maintaining a clean power source for integrated circuits, but with noise caused by rising clock frequencies, multiple power domains, and various types of advanced packaging, new approaches are needed.

Power is a much more important factor than it used to be, especially in the era of AI. “Doing an AI search consumes 10X the power that a standard Google search does,” said Tim Phillips, CEO and cofounder at Empower Semiconductor. “Twenty percent of data-center power demand is going to be AI by 2028, and data centers will be almost 10% of global electricity usage by 2030.”

Chip designers must take voltage noise and droop into account, adding guard-bands for situations where voltage may veer off course. “If you can have enough bandwidth to keep the voltage flat over all changes in processor activity, you can get faster throughput and operate with less voltage margin, which means lower power with better chip performance,” Phillips added.
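As a rough illustration of that margin trade-off, dynamic CMOS power scales approximately with the square of the supply voltage, so even a few percentage points of guard-band carry a measurable cost. The nominal voltage and power in the sketch below are assumed figures, not data for any particular chip:

```python
# Illustrative only: dynamic power scales roughly as P = C * V^2 * f, so a
# chip that must run at Vnom plus a guard-band pays a quadratic penalty.
# V_NOM and P_AT_NOM are assumptions, not vendor figures.

V_NOM = 0.75        # target core voltage (V), assumed
P_AT_NOM = 500.0    # dynamic power at the nominal voltage (W), assumed

def power_with_margin(margin_pct: float) -> float:
    """Scale nominal power by (V/Vnom)^2 for a given guard-band."""
    v = V_NOM * (1.0 + margin_pct / 100.0)
    return P_AT_NOM * (v / V_NOM) ** 2

for margin in (10.0, 5.0, 2.0):
    print(f"{margin:4.1f}% guard-band -> {power_with_margin(margin):6.1f} W")
# Trimming the guard-band from 10% to 5% recovers roughly 50 W in this toy model.
```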

Decoupling effectiveness and filtering frequencies rise as capacitors move closer to the power pins they’re protecting. Historically, lower frequencies have been more tolerant of slightly longer distances, but large chips with hundreds of power pins and multi-gigahertz clocking are putting the squeeze on decoupling. Capacitors are now working their way deep into advanced packages even as new regulators may eliminate some of those caps.

A decoupling hierarchy
The role of decoupling capacitors (caps or decaps) is simple — filter out noise on the power line. The challenge is that noise can exist at multiple frequencies, and a given cap can filter only part of that range. A few decades ago, it was enough to place two caps on the board next to the chip being protected, one for low frequencies and one for high frequencies. The pair worked like the suspension in a car: the springs smooth out the lower-frequency bumps, while the shock absorbers handle the high-frequency jolts.

But today’s applications require multiple caps in many different places in efforts to keep power as clean as possible. “There’s decoupling in the package, there’s decoupling under the printed circuit board (PCB), and then there are bulk caps out by the regulator,” said Phillips.

Those locations make up something of a hierarchy, with different capacitors handling different frequency ranges. An overly simplistic description would have the following levels of caps and the frequencies they address:

  • Bulk caps near the regulator, which filter in the range of tens of kilohertz;
  • Caps under the chip, which filter up to the tens of megahertz;
  • Caps in more advanced packages, which can filter hundreds of megahertz; and
  • On-chip caps, which filter gigahertz noise.

Fig. 1: Decoupling capacitor hierarchy. The capacitors filtering the highest frequencies are in the chip itself, with additional ranks possible in the package, under the package on the PCB, and near the regulator. The arrangement and frequency ranges are simplified for the purposes of illustration. Source: Bryon Moyer/Semiconductor Engineering
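The reason the tiers split up roughly this way is each capacitor's parasitics: a cap stops decoupling effectively near its self-resonant frequency, 1/(2π√(LC)), where L includes the loop back to the load. The capacitance and inductance values in the sketch below are assumed, ballpark numbers chosen only to reproduce the qualitative ranking above:

```python
import math

# Self-resonant frequency f = 1 / (2*pi*sqrt(L*C)) for each decoupling tier.
# C and loop-inductance values are assumed ballpark figures, not measured data.
tiers = [
    # (tier,                           capacitance (F), loop inductance (H))
    ("bulk cap near the regulator",    470e-6,          10e-9),
    ("cap under the chip on the PCB",  100e-9,          1e-9),
    ("in-package (e.g., deep trench)", 10e-9,           50e-12),
    ("on-die MIM cap",                 1e-9,            5e-12),
]

for name, c, l in tiers:
    f_srf = 1.0 / (2.0 * math.pi * math.sqrt(l * c))
    print(f"{name:32s} ~{f_srf/1e6:9.2f} MHz self-resonance")
# Roughly tens of kHz, tens of MHz, hundreds of MHz, and GHz -- the same
# ordering as the hierarchy above.
```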

That means the pair of caps employed in the past is now replaced by a spectrum of caps, and the middle range has proliferated. “The frequency band from a few hundred kilohertz up through 10, 20 megahertz was a real dead spot that you couldn’t solve on-die, and the regulator can’t handle it,” said Phillips. “So they’re trying to use this passive network all the way through the transmission line.”

The type of capacitor at each level is constrained by size. On-die caps must remain small so as not to consume too much expensive area. “These days they’re almost all MIM [metal-insulator-metal] or MOM [metal-oxide-metal] capacitors, and designers sometimes dedicate an entire layer for the capacitors,” said Marc Swinnen, director of product marketing at Ansys.

Those caps are essential for the highest frequencies. “At five to six gigahertz, nothing down on the package or board level can do anything,” noted Steve McKinney, technical solutions manager for IC packaging and analysis at Siemens EDA. “You really can’t expect to get much from the board [decoupling caps] beyond about 100 to 150 megahertz, and that’s even a stretch.”

For off-die caps, placement under the chips on the backside of the PCB has been a popular location for a while. With advanced packages, however, caps are moving inside the package. “We’re seeing people placing the capacitors at the interposer/bridge level,” said Chris Ortiz, senior principal application engineer at Ansys. “There are also deep-trench capacitors, which have somewhat of a slower reaction time than intentional decaps on the die. While the routing density is pretty high [on a 65nm silicon interposer], it’s still not as high as that on an actual die. So there’s room to put a lot of these capacitors. With organic interposers, we see some larger capacitors, but they don’t have quite the same density or quite the same reaction time.”

A sequence of discharges and recharges
These caps provide temporary charge when necessary to keep the voltage steady. When a quick change in current occurs, the regulator isn’t going to be able to respond fast enough to supply that current without the voltage drooping. That regulator is far away, and it’s effectively on a transmission line. That means without any caps, a change in current demand takes time to travel back to the regulator. Then the regulator must respond and put out more current, and then that current must travel back down the line to the chip that demanded the extra current. This process takes time, so while that is happening, the local voltage will droop until the regulator’s reinforcements arrive.
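A back-of-the-envelope charge-balance estimate shows the scale of the problem. While the regulator is still reacting, the caps alone supply the step in current, and the rail droops by roughly ΔV = I·Δt/C. The load step, delay, and droop budget below are assumed values for illustration:

```python
# Minimal charge-balance sketch: during the regulator's response delay the
# decoupling caps bridge the gap, so droop ~ I * dt / C.  All numbers assumed.

load_step_a = 50.0          # sudden extra current demand (A)
response_delay_s = 2e-6     # time until the regulator's extra current arrives (s)
droop_budget_v = 0.030      # allowed droop on the rail (V)

charge_needed_c = load_step_a * response_delay_s     # coulombs drawn from the caps
cap_required_f = charge_needed_c / droop_budget_v    # capacitance to stay in budget

print(f"charge to bridge the delay: {charge_needed_c*1e6:.0f} µC")
print(f"capacitance needed for a {droop_budget_v*1e3:.0f} mV budget: "
      f"{cap_required_f*1e6:.0f} µF")
# Roughly 3,300 µF of effective local capacitance for a 50 A step held for 2 µs.
```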

The caps store energy that can serve during those short but critical delays. With the on-chip caps, for instance, if a new clock domain starts up, then it will create new current demand with frequency components tracking the clock rate — which is several gigahertz on the fastest chips. “As that clock is switching, it needs power right away,” said McKinney. “If you have to wait more than a couple of picoseconds for the power to show up, then the clock is going to start drifting, and you’re going to have other problems.”

However, those on-chip caps can handle only modest current demands. “When an entire block suddenly becomes active and pulls down, this is not something that a local capacitor has enough charge to offset,” he said. “So you need bigger capacitors [off-die], where they don’t react as quickly, but they have a lot more charge.”

To summarize, on-chip caps filter the highest frequencies. But once those caps have discharged, they must be recharged. That demand moves upstream to the next set of caps — perhaps in-package — which can restore the first caps’ charge faster than the regulator can. But that second rank of caps then must be recharged by the PCB caps. Ultimately, the regulator will restabilize everything at the new operating point — until it changes again.

Critical parameters for effective decoupling
The value of the capacitors is clearly critical for establishing the frequencies they filter, but it’s only part of the picture. Ultimately, each cap sits in a loop with parasitic inductance and resistance, both of which strongly impact decoupling effectiveness. The inductance limits the frequencies at which the cap remains effective, and the resistance contributes to loss.

Cap placement is critical to reducing loop inductance. “Decoupling capacitors perform best when their connections are as short as possible to minimize inductance and resistance,” said Dick Otte, CEO of Promex Industries.
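One way to see why connection length matters is to model the decoupling loop as a series R-L-C and look at its impedance across frequency. The capacitor value, ESR, and the two loop-inductance cases below are assumed figures, not measurements of any real board:

```python
import math

# |Z| of a series R-L-C decoupling loop: sqrt(ESR^2 + (2*pi*f*L - 1/(2*pi*f*C))^2).
# Above the loop's resonance the inductance dominates, so a shorter (lower-L)
# connection stays useful to higher frequencies.  Values below are assumed.

def z_mag(f_hz: float, c: float, l: float, esr: float) -> float:
    w = 2.0 * math.pi * f_hz
    return math.sqrt(esr**2 + (w * l - 1.0 / (w * c)) ** 2)

C, ESR = 100e-9, 0.010   # 100 nF cap with 10 mOhm ESR (assumed)
cases = (("long lateral trace, ~3 nH", 3e-9), ("short via path, ~0.3 nH", 0.3e-9))

for label, loop_l in cases:
    print(label)
    for f in (1e6, 50e6, 200e6, 1e9):
        print(f"  |Z| at {f/1e6:6.0f} MHz = {z_mag(f, C, loop_l, ESR)*1e3:8.1f} mOhm")
# Below resonance the two cases look similar; well above it, the 10x lower
# inductance buys roughly 10x lower impedance.
```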

The higher the frequency, the closer the caps must be. Ideally, they would all be on-chip, but the caps needed for lower frequencies would consume too much area, so they move off-chip. The next-closest location is in the package, near or under the die, if the chosen package can support that. Beyond that is the area under the package on the PCB. The biggest caps sit near the regulator.

The lossy aspect relates to current and resistance. Because the power dissipated is proportional to resistance and the square of the current (I²R), reducing loss means keeping resistances low and, more importantly, minimizing current. Reducing current for a given level of power is possible by keeping voltages high for as long a distance as possible, then dropping them down to the operating range close to the target chip. That shortens the wires that experience the highest current.

“If you think about a 1,000-watt processor/high-bandwidth memory combination, where the core might be operating at 1 volt, that’s 1,000 amps you’re having to bring in from the outside,” observed Eelco Bergman, chief business officer at Saras Micro Devices. “And if you think about the losses, you square that amperage, and then you take the resistance in those traces that are being fed into the chip. There’s a lot of efficiency loss.”
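The arithmetic behind that observation is simple I²R scaling. The path resistance below is an assumed figure; the point is how quickly the loss falls as the distribution voltage rises and the current drops:

```python
# Loss in the distribution path scales as I^2 * R, and I = P / V.
# The 0.1 mOhm path resistance is assumed for illustration.

POWER_W = 1000.0
PATH_RESISTANCE_OHM = 0.0001

for volts in (1.0, 12.0, 48.0):
    amps = POWER_W / volts
    loss_w = amps ** 2 * PATH_RESISTANCE_OHM
    print(f"distribute at {volts:5.1f} V -> {amps:7.1f} A, "
          f"path loss ~ {loss_w:7.2f} W")
# The final 1 V conversion still carries ~1,000 A, which is why it should
# happen over as short a distance as possible.
```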

Within an advanced package, small caps are often placed on the package substrate or interposer, but that’s still sub-optimal.

“For years, decoupling capacitors have been incorporated into multi-chip packages, where both the capacitor and die are mounted on the same planar surface,” said Promex’s Otte. “However, in this setup, most of the die’s power contacts can be several millimeters away from the capacitors, limiting efficiency. A more effective solution is to place the decoupling capacitors directly beneath the power-consuming die. Traditionally, this has been achieved by mounting the capacitor on the opposite side of the substrate, directly under the die. This method works well, especially when the die is mounted using flip-chip technology or copper pillars, since the only separation between the capacitor terminals and the die’s power contacts is the [substrate] thickness.”

With built-up substrates and interposers, the caps can be embedded to reduce that thickness even more. “Companies are starting to embed discrete multi-layer ceramic caps and other devices into substrates,” said Bergman. “But these components vary in thickness or have different sizes, and they’re discrete. So you end up embedding many discrete components into the substrate supply.”

This technique can involve as many as 100 small passive devices, and any one of them failing causes a board failure. So the added components can become a board-yield issue.
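The yield concern is straightforward compound probability: the finished part is only as good as every embedded component in it. The per-component yields below are assumed values, not industry data:

```python
# Yield with n embedded passives, each surviving with probability y, is
# roughly y**n (assuming independent failures).  Yields below are assumed.

N_COMPONENTS = 100

for per_component_yield in (0.9999, 0.999, 0.99):
    overall_yield = per_component_yield ** N_COMPONENTS
    print(f"per-cap yield {per_component_yield:.2%} -> "
          f"overall yield {overall_yield:.1%} with {N_COMPONENTS} embedded caps")
# Even 99.9% per component leaves only ~90% of the assemblies good.
```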

Graphcore took an unusual step to bring more capacitance closer to its AI-processor circuits by using a wafer-on-wafer approach. “They’re using deep-trench capacitors in [one wafer], and then they hybrid bond their active die directly to that [wafer],” said Bergman. That gives the processor a sturdier supply, with caps filtering in the hundred-megahertz range, and therefore higher performance. It’s an expensive solution, however, viable only for applications that can sustain premium pricing (such as AI inference).

An in-package capacitor bank
Saras has a different solution for the capacitors in an advanced package. Rather than using multiple individual caps, the company builds the capacitors into a module that is embedded in the package-substrate core before the remaining layers are built up. The module filters noise up to about 10 MHz.

“Envision a mini substrate with embedded components that would then live in the die’s shadow,” said Bergman. “It gets into the substrate core layer before the build-up layers.”

Fig. 2: Saras’ Stile capacitor module is embedded in the center of a package substrate. Source: Saras

Current interposers have plenty of thickness for such a module. “These cores are quite thick today — 1.2mm-plus,” said Bergman. “Glass interposers are thinner, but they’re still 50µm. A single layer of our capacitor materials takes about 20µm. The more space we have, the more we can stack, and the higher the capacitance.”
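A quick division shows what those thicknesses imply for stacking capacity. The sketch ignores any overhead for terminations and lamination, which is an assumption:

```python
# How many ~20 µm capacitor layers fit in a given core thickness.
# Overhead for terminations and lamination is ignored (an assumption).

LAYER_UM = 20

for core_name, core_um in (("1.2 mm organic core", 1200), ("50 µm glass", 50)):
    layers = core_um // LAYER_UM
    print(f"{core_name}: room for up to ~{layers} capacitor layers")
```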

While the module may contain as many capacitors as would otherwise have been placed individually, it arrives pre-tested, which avoids the substrate yield loss that any single failed cap would cause. Saras typically designs a custom module for each application, although off-the-shelf sizes are possible.

The primary change to manufacturing is that the substrate supplier must leave room for the module. “The first thing they do is the through-via drilling,” said Bergman. “Right at that stage is where the cavity would be routed for our tiles.”

From there, the assembly house integrates the tiles. “You place a backing film across the back of the panel,” he explained. “You place the component in that cavity, and then you do a perimeter fill. Once it’s placed and is planarized, the backing film can come off, and they have a core that’s ready for redistribution-layer deposition.”

Saras tries to minimize changes to manufacturing processes. “We work with the substrate suppliers to develop those processes,” noted Bergman. “But embedding is something they do today, and we actually want to keep that as standard as possible.”

An active solution?
While many chips today rely on passive devices such as caps, active power management is starting to get a lot of attention. Active power management helps reduce power consumption while a device is in use. Adding AI into the equation takes that a step further, identifying when circuits can be powered down or clocks gated.

“AI-based active power management is another serendipitous sort of virtuous cycle,” said Stelios Diamantidis, head of the Generative AI Center of Excellence at Synopsys. “The better term is resource management, with power of course being the ultimate resource. But general active power management is something that we will continue to be looking at very, very carefully for advanced chips going forward.”

Active power management was a popular topic at this year’s Hot Chips 24 conference due to the massive amount of power that needs to be managed in AI and high-performance systems. Caps, in contrast, are a passive approach. They simply store and release charge. Their value lies in making up for the delays in the regulator’s ability to respond quickly to changes in demand. But if you had a fast enough regulator, you might not need all those capacitors. That regulator would actively provide the necessary current on-demand rather than relying on caps to patch the holes in the current delivery.
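Reapplying the earlier charge-balance estimate makes the trade-off concrete: the capacitance needed to ride out a load step scales directly with how long the regulator takes to respond. The load step and droop budget below are assumed values:

```python
# Required decoupling capacitance ~ I * dt / dV: halve the regulator's
# response time and the capacitance requirement halves too.  Values assumed.

load_step_a = 50.0
droop_budget_v = 0.030

for response_time_s in (10e-6, 1e-6, 100e-9, 10e-9):
    cap_needed_f = load_step_a * response_time_s / droop_budget_v
    print(f"regulator responds in {response_time_s*1e9:8.0f} ns -> "
          f"~{cap_needed_f*1e6:9.1f} µF of decoupling needed")
```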

That’s an approach Empower is taking with what it calls a high-bandwidth regulator. For those thinking in terms of older, simpler analog regulators, the notion of bandwidth might sound odd. But in the world of switching regulators, it has to do with how quickly things switch internally and how fast they can react to demand changes.

A typical PCB regulator setup places the regulators as close to the chip they are driving as possible, given the significant routing restrictions. The more regulators needed, the farther away they must reside. And their current travels to the chip laterally on the PCB traces, which Empower says is a problem.

“The PCB is a terrible place to conduct current, particularly laterally,” said Phillips. “As the power level grows, they have to keep adding more and more rows of power outside of the chip, and the power gets further and further away. The further away you get, the more power is lost and the bumpier the voltage gets.”

Instead, Empower’s regulators can replace the caps under the chip they’re driving. “You shrink all of those components [traditionally] on top of the board, make them super thin so they can fit under the board (because there’s a height restriction), and then make them fast enough that you don’t need any of those capacitors underneath,” explained Phillips. “You can place all power chips underneath the GPU where those caps were and free up the entire top side of the board.”

In addition to taking advantage of finFETs, the company designed — or co-designed — its own passives. “We created new high-frequency magnetic materials and the way they’re structured to fit with how [our regulator] likes to perform,” said Phillips. “And the same thing with these wide-bandwidth capacitors.”

Part of the key here, according to Empower, is that the current from their regulators travels vertically through the vias rather than laterally on the board. “When you move things underneath versus up top, you’ve eliminated that transmission line by about 90%,” he continued. “You get a 10X improvement in resistance and inductance that enables you to unlock bandwidth, performance, and regulation accuracy. But it also eliminates all that resistive loss between the two.” The company claims it can save much of the 20% of power lost in the traces.
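Taking the roughly 10X resistance claim at face value, the loss comparison is simple I²R arithmetic. The load current and lateral path resistance below are assumed figures, not Empower data:

```python
# Same load current, ~10x lower path resistance -> ~10x lower I^2*R loss.
# The 1,000 A load and 0.2 mOhm lateral resistance are assumed figures.

LOAD_A = 1000.0
LATERAL_PATH_OHM = 0.0002
VERTICAL_PATH_OHM = LATERAL_PATH_OHM / 10.0   # per the ~10x claim above

for name, r in (("lateral across the PCB", LATERAL_PATH_OHM),
                ("vertical through vias", VERTICAL_PATH_OHM)):
    print(f"{name:22s}: {LOAD_A**2 * r:6.1f} W lost in the path")
# 200 W vs. 20 W with these assumptions -- on the order of the ~20% of power
# the company says is lost in the traces today.
```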

Fig. 3: Empower’s regulators are mounted under a chip to provide power that flows vertically through vias. Source: Empower

Each regulator, which Empower calls a “leaf,” can deliver around 60A and can be replicated to provide additional current. A separate supervisor chip manages the leaves. “You can plop down as many blocks as you need, depending on the power levels of the processor,” said Phillips. “You can get over 3,000A right up underneath the core of the processor. The main limitation comes from the number of regulators that our supervisor can support (50) on the same power domain. The management of the addresses is a concern but so is the latency to get telemetry data (current, voltages, temperature) from each leaf.”
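Using only the figures quoted above (roughly 60A per leaf, up to 50 leaves per supervisor), a sizing sketch looks like the following. The function is illustrative, not Empower's tooling:

```python
import math

# Leaves needed for a target current, based only on the quoted figures:
# ~60 A per leaf, up to 50 leaves per supervisor on one power domain.

LEAF_CURRENT_A = 60.0
MAX_LEAVES_PER_SUPERVISOR = 50

def leaves_needed(target_current_a: float) -> int:
    n = math.ceil(target_current_a / LEAF_CURRENT_A)
    if n > MAX_LEAVES_PER_SUPERVISOR:
        raise ValueError("exceeds one supervisor's domain; needs another domain")
    return n

for amps in (500, 1500, 3000):
    print(f"{amps:5d} A -> {leaves_needed(amps)} leaves")
```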

The idea, then, is that rather than having caps handle the immediate current demands, the regulator itself is able to actively change its current delivery to meet those demands far faster than has been typical.

The company also is working on other frequency tiers, potentially placing regulators inside a package as well. But that’s for the future, and it hasn’t announced formal plans to do so.

Potential challenges for backside power
One upcoming development may change on-chip decoupling. Today, power comes in through the top metal layers, and the oxide separating power and ground metal lines can serve for decoupling caps. But that power chews up space in the metal layers that could otherwise be used for routing signals. So delivering power from the backside of the wafer is a new approach under consideration. “Instead of the power being distributed along with the signals as part of the metallization, you put all the power on the backside of the chip,” said Ansys’ Swinnen. “Then, with through-silicon vias (TSVs), which TSMC calls Super Power Rail, you poke up back into the active layer and then connect to the transistors there. That frees up a lot of space since 25% to 30% of your metallization is the power supply.”

Fig. 4: Top-side power takes advantage of dielectric between power and ground planes for decoupling caps. If power is delivered from the back of the wafer, those planes aren’t in place, and the vias carrying the power to the frontside are resistive. High-frequency decoupling may bring a new challenge. Source: Bryon Moyer/Semiconductor Engineering

Freeing up the top-side routing comes at a cost, including the obvious cost of etching the power TSVs and then thinning the back of the wafer to expose the power and ground contacts. One concern is that the backside network lacks the adjacent power and ground planes, with intervening dielectric, that frontside delivery uses to form decoupling caps. “The power vias can be resistive,” said Swinnen. “Depending on how resistive they are, you will lose some of the benefit of having low noise on backside power. But to what degree that negates any benefit, we don’t know yet.”
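How much that resistance matters depends on numbers that aren't public, but a rough bound treats the power vias as a parallel array and sweeps the per-via resistance. The via count, current, and resistance values below are all assumptions:

```python
# Effective PDN resistance of N parallel power vias and the static IR drop it
# produces at a given current.  Every number here is an assumption.

VIA_COUNT = 100_000     # power vias in parallel across the die (assumed)
LOAD_A = 500.0          # current drawn through the array (assumed)

for r_via_ohm in (0.05, 0.5, 5.0):
    r_eff = r_via_ohm / VIA_COUNT
    print(f"per-via {r_via_ohm:4.2f} Ohm -> array {r_eff*1e6:6.2f} µOhm, "
          f"IR drop {LOAD_A*r_eff*1e3:5.2f} mV at {LOAD_A:.0f} A")
# Depending on the per-via resistance, the drop ranges from negligible to a
# meaningful slice of a tens-of-millivolts noise budget.
```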

Conclusion
Given how important power has become — even surpassing performance as a primary consideration in many systems — efforts to identify savings in unexpected places will continue. The overall design process has become more complex, however, because what were distinct design domains — chip, package, PCB — must come together. “The packaging guy is having to work with the die designer and with the board designer trying to provide enough power,” said Siemens’ McKinney. This can be a particular challenge with extremely large processing chips. “From a tool perspective, that’s a huge amount of data. Our analysis and simulation tools haven’t been challenged in that way before.”

Efforts to put regulators closer to the chips they supply can reduce losses both by shortening traces and by postponing the voltage step-down until as close to the chip as possible, which keeps the current low over most of the path. Higher-speed regulators, if they fulfill their promise, may replace passive capacitors in certain frequency ranges with active devices that respond quickly enough to make decoupling there unnecessary. Solutions such as the Saras module and Empower regulators will be vying for the same real estate, so cost, performance, and execution will determine the winner in each application.

Some caps will remain, notably the on-chip capacitors. Bulk caps located away from the chips are typically easy and inexpensive to include. It’s the middle range where changes are happening, either by packaging caps in new ways or by improving power delivery itself. If they all prove out, these developments will help to reduce power, raise performance, and shrink system sizes.



