The Art Of LP Analog

Getting the optimum power scenario for the analog portion of an SoC requires a deep understanding of all use modes.


The best way to reduce power in analog chips is to make architectural changes or adopt a new architecture for the individual block.

However, there are also some design techniques used to reduce power in analog circuitry. Unlike digital circuitry, where an engineer can leverage a low-power library and optimize power through a constraints file in the EDA software, analog circuitry offers no such push-button flow.

“With analog you really have to go back to first principles,” explained Navraj Nandra, senior director of marketing for the DesignWare Analog and MSIP Solutions Group at Synopsys. “One of the techniques people use in a high-speed SerDes or a PCI-Express, for example, is to change the design of the transmitter because the transmitter uses between 60% and 70% of the overall power of the SerDes, so that would be an area to focus on.”

Eric Naviasky, a Cadence fellow, noted there are certain general principles that are shared between analog and digital. Turning off what’s not being used is the biggest. “Surprisingly, that one took a while to catch on in the analog world, but it’s now solidly there. Digital has a lot of different choices. Analog has fewer choices. In digital, you can stop the clock or you can go to dynamic voltage and frequency scaling, so it’s not exactly off but light sleep/deep sleep/slumber/medically induced coma. With analog you can usually cut it back, but most analog circuits don’t like their power supplies messed with and, unless you design for it, don’t mess with the clock too much. Usually on the analog side, the biggest change people are making is trying to design it so that it wakes up faster, if you do shut it off, because it’s foolish to go through a millisecond of getting yourself ready to operate for a microsecond and then go back to sleep.”
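Naviasky’s wake-up arithmetic can be put in first-order numbers. The sketch below is illustrative only; the power and wake-up figures are assumptions for the example, not measured silicon data:

```python
# First-order sketch of the sleep/wake tradeoff: waking costs roughly
# t_wake at full active power, while sleeping for an idle interval t
# saves (p_active - p_sleep) * t. Sleep only pays off once the savings
# exceed the wake-up cost. All values are illustrative assumptions.

def breakeven_idle_time(p_active_w, p_sleep_w, t_wake_s):
    """Minimum idle interval for which shutting the block off saves energy."""
    return p_active_w * t_wake_s / (p_active_w - p_sleep_w)

# Example: a block burning 10 mW awake and 10 uW asleep, needing 1 ms
# to re-bias after wake-up (the "millisecond of getting ready").
t_min = breakeven_idle_time(10e-3, 10e-6, 1e-3)
print(f"sleep pays off only for idle gaps > {t_min*1e3:.2f} ms")
```

With these assumed numbers, idle gaps shorter than about a millisecond are cheaper to ride out awake, which is exactly why faster wake-up widens the range of gaps worth sleeping through.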

Techniques such as dynamic voltage and frequency scaling don’t translate well to the analog world because the supply is a critical part of the design. “You can design toward certain limits of the supply, but if the supply drops significantly you have no way to tell whether a circuit actually works or not,” said Hao Fang, Software Engineering Group director at Cadence.

Engineering teams have also changed the architecture of the transmitter from a current-mode design to a voltage-mode design, which essentially adds a series of impedances into the voltage line so that power consumption of the transmitter is reduced by 30% to 40%, Nandra said. The disadvantage is that the voltage-mode transmitter is sensitive to supply noise. If there is noise on the supply, the transmitter will pick it up more than a current-mode design, which is actually quite robust in terms of supply noise. “What people do then is add a voltage regulator on a voltage-mode output stage in a transmitter, and that will help to improve the power supply rejection ratio of the transmitter. You solve the problem by getting the power down, then you solve the other problem of the noise sensitivity of that block by using another block.”
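A textbook first-order comparison shows where the voltage-mode savings come from. This is an idealized doubly terminated link model with assumed values, not product data, and the actual savings depend on how the lower drive voltage is generated:

```python
# Idealized current comparison of the two transmitter styles for a
# doubly terminated differential link. Assumed, not measured, values.

Z0 = 50.0          # single-ended characteristic impedance (ohms)
VPP = 1.0          # desired differential peak-to-peak swing (volts)
VDD = 1.2          # supply for the current-mode stage (assumed)

# Current-mode (CML): the tail current develops the swing across the
# terminations, giving a differential swing of roughly I_tail * Z0,
# and the stage draws I_tail from VDD at all times.
i_cml = VPP / Z0                      # 20 mA tail current
p_cml = VDD * i_cml

# Voltage-mode (SST): the source termination (2*Z0 differential) and
# the far-end load (2*Z0) form a divider, so the drive supply must be
# 2*VPP and the loop current is that supply over 4*Z0.
v_drv = 2 * VPP
i_sst = v_drv / (4 * Z0)              # 10 mA loop current
p_sst = v_drv * i_sst

print(f"CML: {i_cml*1e3:.0f} mA, {p_cml*1e3:.0f} mW")
print(f"SST: {i_sst*1e3:.0f} mA, {p_sst*1e3:.0f} mW")
```

In this simple model the voltage-mode driver halves the line current for the same swing; regulator overhead and pre-driver power move the net figure toward the 30% to 40% range cited above.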

Especially with a high-speed SerDes or I/O, moving to voltage mode to save power works quite well up to about 6GHz, he continued. “For a USB 3 or a PCI-Express Gen 2, this technique works quite well. But we found that if we tried to up the speed to PCI Express Gen 3 or USB 3.1, up to 10 or 12Gbps, the voltage mode design started to break down. We were seeing very poor signal swings. This is because you are relying on an amplitude that really suffers from high-speed operation. In the voltage mode, you don’t really have the ability to control the launch amplitude like you do with the current mode design,” Nandra explained. “Since the power savings advantage of voltage mode was desired along with high speeds, a hybrid technique is now used. We use a hybrid mode transmitter which, as the name implies, is a combination of current mode and voltage mode, so you get the power savings of voltage mode with the robustness and speed of current mode.”

At a lower level, he pointed out some interesting techniques that are starting to crop up. One is using power gating and power switches. “With a power switch you’ve got the ability to add another sort of circuitry, which is like a big switch, and you connect that into various points in your analog/mixed-signal (A/MS) block, programmed in a certain way, in order to bring down certain islands of IP when they don’t need to be functioning. What you are doing is intelligently switching things on and off based on when they need to be used.”

Something being used more now at lower geometries is power gating—a technique also used in digital design—whereby power gate switches have been added into A/MS blocks. This allows high leakage parts to be isolated.

Karthik Srinivasan, a principal applications engineer at Ansys-Apache, agreed that on the digital side there are only a few techniques commonly used to reduce power, such as power gating, clock gating, voltage islands or dynamic voltage scaling. However, he noted that analog offers the flexibility to make architectural changes depending on performance requirements such as gain and bandwidth. “For bio-medical instruments that have to operate for extended periods of time on a battery, for example, power may be super critical, and those circuits need not operate in the 2GHz range. They can operate even at a few kilohertz. The bandwidth there is not critical, so they can make a lot of architectural changes in order to ensure the power reduction on the analog side.”

One of the main things often seen is decoupling of the power and ground planes on the die. So the power delivery network is usually completely isolated on the die, but it does share the same substrate, Srinivasan explained. “There’s no way to get around that. Then there is substrate-level noise coupling. Whenever you introduce a power-gating technique or a clock-gating technique, what you are eventually doing is having an extended idle period with a burst of activity coming in. That can induce a significant amount of noise through the substrate, or maybe through the package and board, depending on what level these supplies are connected. That itself can change the biasing circuitry for critical analog circuitry. If it does impact the biasing, of course it is going to impact the functionality of the circuitry, and it is going to impact the signal-to-noise ratios of critical analog circuitry.”
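The scale of the noise a wake-up burst can inject is easy to estimate to first order with the familiar V = L·di/dt relation. The inductance, current step, and edge time below are assumed for illustration:

```python
# Back-of-the-envelope estimate of the supply droop a power-gated
# island can cause when it snaps on. All values are assumptions.

L_PKG = 1e-9       # effective package + bond-wire inductance (1 nH)
DI = 50e-3         # current step when the gated island turns on (50 mA)
DT = 1e-9          # turn-on edge time (1 ns)

v_droop = L_PKG * DI / DT   # V = L * di/dt
print(f"supply droop ~ {v_droop*1e3:.0f} mV")
```

Even with these modest numbers the transient is tens of millivolts, which is enough to perturb a sensitive bias line if the analog supply shares the same package path.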

Analog is unique in that the gain/bandwidth requirements are very different from those of digital circuitry.

“If you take a digital circuit, for example, there are three components of power in a typical CMOS circuit: switching, short-circuit and leakage power, and the equation makes it pretty obvious that if you reduce the supply voltage, the switching power is going to reduce quadratically. But when it comes to analog circuits, usually that’s not the case. If you reduce the supply voltage, it might directly impact the voltage swing of the amplifiers, for example. It might directly impact the gain of the amplifier. That’s why, depending on the criticality of the circuitry, some circuits can even operate in sub-threshold conduction; they may not really have the linearity requirement that is needed for high-speed/high-performance analog circuitry,” he pointed out.
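The quadratic scaling Srinivasan refers to follows from the standard CMOS switching-power relation, P = α·C·V²·f. The activity factor, capacitance, and frequency below are illustrative:

```python
# Dynamic switching power of a CMOS node: P = alpha * C * Vdd^2 * f.
# Lowering the supply from 1.0 V to 0.8 V alone cuts switching power
# by 36%, since 0.8^2 = 0.64. Values are illustrative assumptions.

def switching_power(alpha, c_farads, vdd, freq_hz):
    """Switching power given activity factor, capacitance, supply, clock."""
    return alpha * c_farads * vdd**2 * freq_hz

p_1v0 = switching_power(0.1, 1e-9, 1.0, 1e9)   # 100 mW at 1.0 V
p_0v8 = switching_power(0.1, 1e-9, 0.8, 1e9)   # 64 mW at 0.8 V
print(f"{(1 - p_0v8/p_1v0)*100:.0f}% switching-power saving")
```

An analog amplifier gets no such free lunch: the same supply cut eats directly into swing and gain headroom, which is the asymmetry the quote is describing.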

On the other hand, with high-speed/high-performance analog that kind of tradeoff can’t be made by operating the circuit in sub-threshold. “What you can do is have the high-performance, critical analog circuitry at a higher supply voltage, but try to digitize the analog components as much as possible. For example, some of the circuitry, such as compensation or calibration circuitry, can be digitized. People tend to take the liberty of innovating and try to come up with more ways to have more predictability in the analog front end by digitizing some of the circuitry. That way the supply voltage can go to 0.8 or 0.7 volts for these digital circuits. Adding more digital circuitry gives more flexibility in terms of design, but it also adds some overhead because analog designers typically rely on SPICE, and any analysis is done completely in SPICE. But once the circuitry starts getting digitized, SPICE will have a tough time running a large digital-plus-analog design together,” Srinivasan added.

As a result, co-simulation techniques have been adopted, with EDA vendors providing environments that pair behavioral models for the digital portion with detailed transistor models for the analog. These are good for verifying functionality, but they don’t cover the noise injection from digital into analog, which is why engineering teams tend to look for co-analysis that simulates digital and analog together, at the IP level as well as the full-chip SoC level.
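As a toy illustration of the behavioral-model idea, the loop below stands a cheap analog model next to a simple digital controller so the digital logic can be exercised without transistor-level SPICE. Both models and all numbers are invented for the example:

```python
# Schematic co-simulation: a behavioral "analog" amplifier model (linear
# gain with supply clipping) verified against a "digital" gain-control
# loop. Entirely illustrative; no real amplifier is modeled.

def behavioral_amp(v_in, gain, vdd=0.8):
    """Behavioral amplifier model: linear gain clipped to the supply rail."""
    return max(-vdd, min(vdd, gain * v_in))

def digital_agc(v_out, gain, target=0.5):
    """Digital gain-control step: nudge gain toward the target amplitude."""
    return gain * (1.05 if abs(v_out) < target else 0.95)

gain = 2.0
for _ in range(50):                      # the co-simulation loop
    v_out = behavioral_amp(0.1, gain)    # "analog" solve (behavioral)
    gain = digital_agc(v_out, gain)      # "digital" controller step
print(f"settled gain ~ {gain:.2f}")
```

The functional handshake between the two domains is captured, but, as the article notes, the substrate and supply noise the real digital logic would inject is invisible at this level of abstraction.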

Avoiding problems
Obviously, knowledge of previous designs and past mistakes helps the designer avoid problems from the outset of a design. On top of that, module-level checks and integration checks are needed, and when a decision is being made early on, it requires quantitative simulation data backing it up.

“That’s where we are seeing designers moving to early simulation checks, or even early floor-plan-level checks or substrate noise coupling checks,” Naviasky said. “The best thing you can do is to start off by understanding every mode you’re going to operate in, and what the power needs are going to be for each one, because once you make decisions, it’s really hard to change things afterward. In many cases, if a block has to operate at two different speeds that are very disparate, for analog I may put two different blocks in because that way I can optimize one for the low and one for the high. It costs me a little bit of area, but that way I can optimize because it’s very hard to change the power that’s consumed—at least anywhere near the ratio that the speed will change. As such, you plan ahead, you build in extra versions of things, and you switch between them.”
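The "build two blocks and switch" plan can be sketched as a simple mode selector. The class name, variant names, and rate/power figures below are all invented for illustration:

```python
# Sketch of Naviasky's dual-block approach: instantiate a low-speed and
# a high-speed variant of a block and power only the one the current
# mode needs. Hypothetical names and assumed figures throughout.

class DualRateBlock:
    """Hypothetical wrapper that gates power to one of two variants."""

    # variant -> (max data rate in Hz, power when enabled in watts)
    VARIANTS = {"low": (10e6, 0.5e-3), "high": (1e9, 20e-3)}

    def __init__(self):
        self.active = None

    def set_mode(self, required_rate_hz):
        # Pick the cheapest variant that still meets the required rate;
        # conceptually, the other variant is power-gated off.
        for name, (max_rate, _) in sorted(self.VARIANTS.items(),
                                          key=lambda kv: kv[1][0]):
            if required_rate_hz <= max_rate:
                self.active = name
                return name
        raise ValueError("rate beyond both variants")

    def power(self):
        return self.VARIANTS[self.active][1]

blk = DualRateBlock()
blk.set_mode(5e6)
print(f"5 MHz mode -> '{blk.active}' block, {blk.power()*1e3:.1f} mW")
blk.set_mode(800e6)
print(f"800 MHz mode -> '{blk.active}' block, {blk.power()*1e3:.1f} mW")
```

The area cost is the second instance; the payoff is that the low-rate mode runs at a power level the high-rate design could never scale down to.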

There are other options, too. “In some cases, you can do Class B circuitry. The lowest-power designs are starting to do Class B analog. What that means is when a stage is moving, it uses more power than when it’s sitting, so that’s not controlled by the outside, it’s just controlled by the nature of the stage. If you have a slower signal, you move less, so you take a little less power. That sounds like all-goodness. The not-so-goodness is that many signal sources—for instance, a data stream—might have a section of really fast data, and a section of really slow data. If your drawn power supply current changes between those, it can turn into a power supply variation. You have to then deal with the risk that your power is no longer constant and is giving you grief.”

At the end of the day, many of the designs today are a combination of analog and digital, regardless of the application space being targeted, said Krishna Balachandran, product marketing director for low power at Cadence. When it comes to these mixed-signal designs, low power for the whole chip has to be done in the context of both. “Your design flow has to work with both analog and digital. One of the challenges is how to make this work in a mixed signal flow. EDA has to step up. There has to be the right modeling from a power intent perspective for the SoC. If you don’t do all that right, you’re going to have some pretty fundamental problems on the interfaces between the analog and digital world. That is going to cause a whole slew of problems, and [engineering teams] are facing that today as they are putting their chips together.”
