Optimizing Analog For Power At Advanced Nodes

Getting the optimal power scenario for analog/mixed-signal content requires a combination of new philosophies and the right choice of architecture.


As any engineering manager will tell you, analog and digital engineers seem like they could be from different planets. While this has changed somewhat over time, analog is still something of a mystery to many in SoC design teams. Throw power management into the mix and things really get interesting.

Improvements in analog/mixed-signal tools have helped. Eric Naviasky, a fellow and one of the founders of the analog/mixed-signal design group at Cadence, observed that tools in this space are much faster than in the past and can relieve a fair amount of the drudgery associated with analog design.

“The thing that makes analog a black art is the fact that you have to sit there and work it out on a whiteboard or a piece of paper,” Naviasky said. “You still have to play with the math, and tools really haven’t changed that part of it. It’s not like they gave us more understanding. They just gave us relief from some of the drudge work.”

Advanced technology nodes pose a unique challenge for analog designs. The variability of the threshold voltages and the discrete width availability for the devices in these advanced CMOS processes make analog design difficult, pointed out Arvind Shanmugavel, director of application engineering at ANSYS/Apache. “The reduced supply voltage for advanced tech nodes also reduces the available operating range for analog devices. Yet, circuit designers come up with innovative techniques to design low-power analog IPs.”

Along these lines, Navraj Nandra, senior director of marketing for the DesignWare Analog and Mixed-Signal IP Solutions Group at Synopsys, explained that in terms of analog content, a lot of the power savings come from architectural choices and not necessarily the technology node. “What we’ve seen is that if you are able to change to an architecture that consumes lower power, that can be applied to almost all technology nodes. We’ve seen that in finFET technology. We’ve been able to reduce power consumption of quite a few of our analog/mixed-signal IPs, and we’ve taken that same architecture and back-ported it to 28nm, with about the same power advantages of the new architecture. So, the technology node didn’t really make much of a difference. The real question is, what can you do architecturally to help in reducing the power consumption?”

For any high-speed interface or high-speed PHY, quite a bit of the power consumption comes from the transmitter, so Nandra recommends focusing on the architecture of the transmitter for low-power operation. Here, there are three techniques: the voltage mode transmitter, the current mode transmitter, and the hybrid mode transmitter. “The voltage mode transmitter consumes the least power, so it’s the best one to use if possible. However, there are limitations in that it doesn’t work well at high speed and it’s susceptible to power supply noise. The current mode transmitter consumes more power, but it’s less susceptible to power supply noise and it works very well with very high-speed design. So anything above 8 Gbits/sec, the current mode transmitter is a good architecture, but unfortunately it consumes too much power. That was the thinking for many years, until we developed a hybrid architecture for these high-speed interfaces. The hybrid will go into the voltage mode at lower speeds and then switch into the current mode at higher speeds, so it leverages both the power advantages and high-speed capabilities. That’s one analog technique we’ve used to reduce power consumption.”
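The hybrid scheme described above amounts to a mode-selection decision based on link speed. A minimal sketch of that decision logic, assuming the 8 Gbit/s crossover mentioned in the quote (the function name and structure are illustrative, not from any real PHY controller):

```python
# Illustrative sketch of hybrid transmitter mode selection:
# voltage mode at lower data rates (lowest power), current mode
# above a threshold (more power, but robust at very high speed).
# The 8 Gbit/s crossover is taken from the discussion above; the
# names and structure are hypothetical.

VOLTAGE_MODE = "voltage"   # lowest power; noise-sensitive, speed-limited
CURRENT_MODE = "current"   # higher power; works at very high speed

MODE_SWITCH_GBPS = 8.0     # crossover point cited in the article

def select_tx_mode(data_rate_gbps: float) -> str:
    """Pick the transmitter drive mode for a given link speed."""
    if data_rate_gbps < MODE_SWITCH_GBPS:
        return VOLTAGE_MODE  # save power at lower speeds
    return CURRENT_MODE      # accept extra power for signal integrity

print(select_tx_mode(5.0))   # voltage
print(select_tx_mode(16.0))  # current
```

The point of the hybrid architecture is that the same transmitter gets the voltage-mode power advantage whenever the link allows it, rather than paying the current-mode power cost at every speed.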

He pointed out that typically in a mixed-signal, high-speed interface, the transmitter consumes about 70% of the overall power consumption, so if you get the power consumption down for that piece, you help reduce power for the whole mixed-signal block.

Another interesting technique, Nandra continued, is using L1 sub-states, which is becoming common in something like PCI-Express, both on the analog/mixed-signal block and on the digital block. The L1 sub-states describe the tradeoff between a block’s wake-up time and being off. If the block is off it shouldn’t be consuming power, and in a big mixed-signal block, not all of the sub-blocks are going to be active all of the time. But can you power off blocks? The trick is that if something is off, you have to switch it back on. That switch-on time is called the wake-up time (latency), and it becomes a design challenge because you’ve got to figure out a way of switching on the circuit quickly when some event requires its operation. “We’re seeing this implemented now in application processor-type chips where you’ve got an analog/mixed-signal PCIe PHY with a PCI-Express controller, and the customer is doing power management on their SoC by switching various blocks on and off. The key consideration is to make sure that the block is on when you want to do something.”
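The L1 sub-state tradeoff can be pictured as choosing the lowest-power state whose wake-up latency the system can still tolerate. A toy model, where the state names follow the PCIe L1 sub-states but the power and latency figures are made-up placeholders rather than numbers from any specification:

```python
# Toy model of the L1 sub-state tradeoff: deeper sleep states save
# more power but cost more wake-up latency. State names follow PCIe
# (L1.1, L1.2); the power/latency values are illustrative placeholders,
# not figures from the PCIe specification.

from dataclasses import dataclass

@dataclass(frozen=True)
class PowerState:
    name: str
    power_mw: float    # assumed idle power in this state
    wake_up_us: float  # assumed latency to return to active

STATES = [
    PowerState("L0",   100.0, 0.0),    # fully active
    PowerState("L1",    10.0, 5.0),
    PowerState("L1.1",   1.0, 50.0),
    PowerState("L1.2",   0.1, 300.0),
]

def deepest_allowed(max_wake_us: float) -> PowerState:
    """Pick the lowest-power state whose wake-up latency is tolerable."""
    candidates = [s for s in STATES if s.wake_up_us <= max_wake_us]
    return min(candidates, key=lambda s: s.power_mw)

print(deepest_allowed(100.0).name)  # L1.1
```

This captures the design challenge Nandra describes: the power manager wants the deepest state available, but the wake-up latency budget of the traffic running over the link bounds how deep it can go.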

And then there is the issue of IP integration, Shanmugavel said. “Integration of analog IPs within an SoC has also become challenging for advanced technology nodes. We find more and more IPs that share the power domains or ground domains with other components on the SoC. In such cases, one needs to carefully plan for power/ground noise coupling between the analog and digital components. For sensitive analog IPs that share the same substrate with digital IPs, one needs to also add proper isolation techniques to mitigate the noise injected through the substrate.”

Further, analog IP providers are also faced with the unique challenge of designing the IP for multiple foundries. Verification for power noise and reliability needs to be performed, and proper models need to be released to their end customers. These models also need to capture proper sign-off threshold requirements before they get used within the SoC environment. Power busing for analog IPs, in particular, needs to be designed with noise coupling, electromigration (EM) and electrostatic discharge (ESD) in mind, he added.

Power becomes a philosophical issue
On the whole, when considering power management of analog/mixed-signal content at advanced nodes, new philosophies are arising. “Power is one of the things that we’re sort of struggling with in the advanced nodes,” said Naviasky. “Advanced nodes brought a whole host of new and interesting challenges. Power is one of them. The fact is that many of our favorite tricks and techniques were essentially removed at the newer nodes, and there is additional economic impact because the newer nodes are so expensive per square micron. Charts I’ve seen suggest it’s been sort of plateauing out. It’s gotten better as far as gates go. Gates cost a lot less than they used to cost at 40nm. We can argue it costs less than it did at 20nm. But analog circuitry is about square microns, so per square micron it’s been getting worse. It’s a simultaneous thing of how to work in the new world and, by the way, we want to reduce the power because this digital guy did, and you can’t use as many square microns.”

Complicating matters is the fact that there are two pieces in analog that absolutely do not scale—matching and noise. “However, lots of other things — if you’re clever enough — do scale. That’s why, if you look at analog, it has been shrinking and it has been dropping in power, not at the same rate as the digital has, but we’re definitely getting somewhere with it,” Naviasky said.

As such, it is paramount to realize that from an architectural perspective, there must be a change in philosophy to understand power management for analog/mixed-signal content at advanced nodes. “In the past, you simultaneously optimized for matching and distortion and for every analog parameter you wanted, along with speed and power. And now, the best, most useful philosophy seems to be that your analog path is designed to be as small, as fast, and as low power as you can possibly make it in that process. Then you array around that fast, low-power path, essentially, little bits of correction and adaptive behavior and auto-calibration and everything else that you can, in order to put back in the accuracy that you removed, without getting it into the path, because then it would need power,” Naviasky suggested.
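The philosophy of a minimal signal path with correction arrayed around it can be sketched numerically. In this toy example, assuming a hypothetical amplifier with gain error, a slow background calibration loop trims a digital correction factor so the accuracy lives outside the fast path rather than burning power inside it:

```python
# Toy sketch of the philosophy described above: keep the analog path
# small, fast, and low power, and restore accuracy with a calibration
# loop outside the path. The amplifier, its gain error, and the update
# rule are all hypothetical illustrations, not a real calibration engine.

NOMINAL_GAIN = 2.0  # the gain the path is supposed to have

def fast_path(x: float, actual_gain: float) -> float:
    """Small, fast, low-power analog path -- with uncorrected gain error."""
    return actual_gain * x

def calibrate(actual_gain: float, steps: int = 32) -> float:
    """Background loop: apply a known input, trim the correction factor."""
    correction = 1.0
    for _ in range(steps):
        measured = fast_path(1.0, actual_gain) * correction
        # Nudge the correction toward the nominal response.
        correction *= 1.0 + 0.5 * (NOMINAL_GAIN - measured) / NOMINAL_GAIN
    return correction

corr = calibrate(actual_gain=1.8)          # path is 10% low on gain
print(round(fast_path(1.0, 1.8) * corr, 3))  # ~2.0 after correction
```

The analog path itself stays cheap and fast; the correction factor is computed slowly, off the signal path, which is exactly where the philosophy says the complexity (and the digital logic described below) should go.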

As might be expected, a new philosophy requires some new skills. “There’s definitely a new skill mix and it probably goes beyond the design engineer. The designer may know a simple trick, but most of the old analog guys probably will have to go hit the books for a little while to figure out some of the correlation tricks needed to fix distortion or some of the frequency translation tricks to fix some of the noise issues. There are new pieces or old pieces they have to rediscover,” he said.

In addition, because there is now digital circuitry that is essential to make the analog circuitry work, the verification is much different than it used to be. “Now, instead of just having an analog engineer playing with it and pronouncing ‘done,’ at some point, you probably need to bring in a team of digital guys to write the RTL for the calibration machine or the adaptation machine and you probably need digital verification people to make sure the thing can’t go wander off into a hole and get stuck,” he added.

To address all of these issues, there will be multiple schools of thought.

“There’s one school of thought which is brute force and there are really good, incredibly fast simulators out there right now and they have wrapped around them really good regression engines that let you test thousands of different possible combinations in relatively short runs. Brute force is not a ridiculous philosophy. There’s also the philosophical approach and the analytic approach. The philosophical one is we decide what kind of risks we want to take and maybe we add another layer of abstraction around things to help us deal with that. Then there’s a formal approach, where you attempt to convince yourself that you have bounded everything that could possibly be taken care of, and you don’t have to look at everything inside of the bounds, you only have to know you solved the perimeter. That means there will be a whole lot of Ph.D. theses coming out on stuff like this,” he concluded.


