Analog Hits The Power Wall

40nm is the breaking point for analog design: engineers have to grapple with the same problems as digital teams, plus a few surprises of their own.


By Ed Sperling
Analog design teams are starting to encounter the same physical issues that digital design engineers began wrestling with several nodes ago—only the problems are more complicated and even more difficult to solve.

At advanced nodes, digital circuitry is susceptible to an array of physical effects, including heat, electromigration, electromagnetic interference and electrostatic discharge. Analog is susceptible to all of those, plus noise and voltage fluctuations caused by process variability. And the circuitry needed to guard against these problems can significantly add to routing congestion, hurt performance and greatly increase the number of corner cases that have to be checked in verification.

This helps explain why many analog companies have avoided moving to new process nodes whenever possible, particularly for standalone devices such as A-to-D converters and power supplies. Mature, well-characterized processes at 0.25 and 0.18 microns allow faster development and pose far fewer problems. They’re also much cheaper to keep running for long periods of time, which is why most analog developers still own their own fabs and use smaller wafer sizes.

But even analog developers building standalone devices are starting to feel spillover effects. As SoCs continue to shrink feature sizes, their noise and heat increasingly affect other components on the same PCB.

“SoCs are running into power density issues and running at higher frequencies,” said Bob Dobkin, chief technology officer at Linear Technology. “Digital noise is a problem. Getting the heat out is even harder.”

EMI is less of an issue, because most of the devices that are on a PCB are well shielded from these kinds of effects. And ESD is a non-problem if the board is laid out correctly, Dobkin said. But electromigration can be a problem, depending upon the power density.

On-chip analog issues
When analog is on the same die as the digital circuitry, the situation is much worse. The problems really began kicking in at 40nm, which is now considered one of the mainstream process nodes for SoCs.

“One of the problems we’ve seen involves very low supply voltage,” said Hany Elhak, senior product marketing manager in Synopsys’ Analog/Mixed Signal group. “It could be 0.85 to 0.95 volts. The threshold voltage of the transistors is 0.6 volts. That leaves a very small amount of room to operate.”
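The headroom Elhak describes is simple arithmetic, but it frames the whole problem. A quick back-of-the-envelope check using only the figures from the quote (these are illustrative numbers, not values from any real process design kit):

```python
# Headroom = supply voltage minus threshold voltage, using the
# illustrative figures quoted above (not real PDK data).
V_TH = 0.60  # transistor threshold voltage (V)

for vdd in (0.85, 0.95):
    headroom = vdd - V_TH  # voltage left for the transistor to operate in
    print(f"VDD = {vdd:.2f} V -> headroom = {headroom:.2f} V")
```

That leaves only 0.25 to 0.35 volts of operating room, which is why even small supply or threshold variations matter so much.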

Elhak said another problem involves variability in the supply voltage. “Because of that variability you need to add digital loops for calibration. There are so many loops these days that they’re causing problems for the digital designers, who don’t want to have to worry about all the corners and the calibration.”

That, of course, further complicates routing on an SoC. Put these and other problems together and it becomes apparent why there is suddenly concern in the analog community. At 65nm, analog design could still be done the way it has always been done—with spreadsheets. At 40nm, there is a realization that something very fundamental has changed in this world.

Pete Hardee, low-power design solution marketing director at Cadence, found that out during a recent trip to Europe, where analog design is booming for markets such as medical devices, smart cards, RF and automotive.

“The problem is that the analog side is not scaling,” said Hardee. “We’re already seeing an impact on noise immunity. In the past, you used to be able to spread the clock edges out to decrease the digital noise. You can’t do that anymore. It’s also harder to do deep n-well biasing, which is another technique to reduce noise.”

Power is another problem. Leakage began overtaking dynamic power in digital designs at 40nm. It’s causing similar problems in analog now, but the approaches to dealing with it are different.
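The crossover Hardee alludes to can be sketched with the textbook power equations: dynamic power scales as activity times capacitance times voltage squared times frequency, while leakage is simply supply voltage times leakage current. Every number below is an assumed placeholder chosen to illustrate the crossover, not data from the article or any real process:

```python
# Toy comparison of dynamic (switching) vs. leakage (static) power.
# All parameter values are invented for illustration.

def dynamic_power(alpha, c_farads, vdd, f_hz):
    # Classic switching-power formula: P = alpha * C * V^2 * f
    return alpha * c_farads * vdd ** 2 * f_hz

def leakage_power(vdd, i_leak_amps):
    # Static power: supply voltage times total leakage current
    return vdd * i_leak_amps

p_dyn = dynamic_power(alpha=0.1, c_farads=1e-9, vdd=0.9, f_hz=1e9)
p_leak = leakage_power(vdd=0.9, i_leak_amps=0.1)
print(f"dynamic: {p_dyn * 1e3:.0f} mW, leakage: {p_leak * 1e3:.0f} mW")
```

With these assumed numbers, leakage already exceeds switching power, the situation the article describes digital designs hitting at 40nm.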

“The power and ground are explicitly described in analog, but power behavior is implicit in digital,” he added. “We need to bridge the gap between the digital and the analog.”

At the very least, analog designers need to understand what happens when they cross power domains on an SoC. Power-saving techniques on the digital side are effective at keeping the chip’s power budget under control and extending battery life, but they aren’t always obvious to analog design teams.

More corners, different approaches
Another thing that’s not obvious to analog teams is the multi-corner, multi-mode approach to verification. Analog circuits have to be extremely precise, but at 40nm, and even at 65nm, process variability removes any guarantee that designs actually will be precise. Instead of fixed numbers, designers now work with distributions.

“This is not just designing for a few corners anymore,” said Synopsys’ Elhak. “You need to use statistical techniques like Monte Carlo analysis, which means you need to run a lot more simulations. Analog designers also are used to varying width or length in their designs as needed. They can’t do that anymore, because of the complex design rules at 40nm and below. Even basic analysis has to be done with a simulator. What is the gain of a transistor? Equations are based on certain states of devices. It’s different in the sub-threshold region, commonly used for low power, and that’s a lot more complicated than can be handled by equations and spreadsheets; 40nm is the breaking point where analog designers have to start caring about these kinds of issues.”
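The shift Elhak describes, from fixed corner values to distributions, is the essence of Monte Carlo analysis: sample the varying parameter many times and count how often the circuit still meets spec. A minimal sketch of the idea, with an invented mean, sigma, and headroom spec (none of these figures come from the article):

```python
import random

# Minimal Monte Carlo sketch: treat the threshold voltage as a
# distribution rather than a fixed number, then estimate yield against
# a minimum-headroom spec. All parameters are assumed for illustration.
random.seed(42)

VDD = 0.90           # nominal supply voltage (V)
VTH_MEAN = 0.60      # nominal threshold voltage (V)
VTH_SIGMA = 0.03     # assumed process variation (V)
MIN_HEADROOM = 0.25  # assumed spec: VDD - Vth must exceed this

N = 100_000
passing = 0
for _ in range(N):
    vth = random.gauss(VTH_MEAN, VTH_SIGMA)  # one sampled process instance
    if VDD - vth >= MIN_HEADROOM:
        passing += 1

print(f"estimated yield: {passing / N:.1%}")
```

In a real flow each sample would be a full SPICE simulation rather than a one-line inequality, which is why Elhak notes that statistical techniques mean running far more simulations.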

An obvious workaround for pushing analog down to advanced process nodes is die stacking, whether it involves an interposer or through-silicon vias (TSVs). Chips already are being developed that mix technology nodes. But, at least for the moment, it’s still cheaper to put everything on a single die. Where cost is the overriding issue, mainstream planar design at 40nm remains the standard. Where performance and power are more important, stacked die are beginning to offer a viable alternative.