Can Analog Make A Comeback?

The industry has reached an inflection point where analog is getting a fresh look, but digital will not cede ground readily.

We live in an analog world dominated by digital processing, but that could change. Domain specificity, and the desire for greater levels of optimization, may provide analog compute with some significant advantages — and the possibility of a comeback.

For the last four decades, the advantages of digital scaling and flexibility have pushed the dividing line between analog and digital closer to the periphery. Today, those conversions are usually done in, or very close to, the sensors and actuators. Communications has always been a holdout because channels, whether wired or wireless, do not acquiesce to the demands of digital.

Fig. 1: Heathkit analog computer from the 1960s. Source: Wikimedia Commons

But there are several significant changes on the horizon, including:

  • Chip scaling is slowing or stopping in an economic sense, meaning that future gains from digital scaling are no longer assured. This is one of the main drivers for domain-specific architectures.
  • Domain specificity means the value of flexibility has been reduced; analog's lack of flexibility was a negative in the past.
  • Reticle limits mean that many systems will become multi-die, and each die does not have to be implemented in the same technology node. That may make older, cheaper nodes available for analog.
  • AI inferencing is heavily dependent on multiply/accumulate operations, which are highly efficient in analog.
  • Approximate computing is likely to become more prevalent.
  • Latency is becoming a more important performance requirement.

“The world is analog, so the circuits will be,” says Benjamin Prautsch, group manager of advanced mixed-signal automation at Fraunhofer IIS’ Engineering of Adaptive Systems Division. “There are classes of IP that benefit significantly from both digital assistance and full digital replacement. However, the benefit needs to be investigated at the system level because the conversion between analog and digital creates limits. A clever analog circuit might outclass a mediocre one that uses digital assistance, but there are many factors, and performance measures, that come into play.”

In addition to sensors and actuators, wireless communication is becoming more important. “Everything used to be wired,” says Marc Swinnen, director of product marketing at Ansys. “Today, every IoT device needs a wireless connection. They are using radio communications, and this is creating an increasing amount of analog and RF content. In addition, when you look at digital signal frequencies — they have been creeping up. 5GHz is a sort of magic number where inductance becomes a significant player, even at the chip level. Then, electromagnetic effects have to be taken into account. Those digital signals are looking awfully analog if you want to analyze them properly. This is an even bigger problem when you look at the 2.5D and 3D structures, where you have very high-speed wires that go significant distances, as far as the chip is concerned.”
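
A quick back-of-envelope calculation shows why roughly 5GHz is where inductance starts to matter. The sketch below computes the inductive impedance Z = 2πfL at a few frequencies; the 1nH parasitic inductance is an assumed, illustrative value for a long on-chip or package wire, not a figure from the article:

```python
# Why ~5 GHz matters: inductive impedance Z = 2*pi*f*L.
# The 1 nH wire inductance is an assumed, illustrative value.
import math

L_WIRE = 1e-9  # assumed parasitic inductance of a long on-chip/package wire (H)

for f in (1e9, 5e9, 10e9):        # signal frequency (Hz)
    z = 2 * math.pi * f * L_WIRE  # inductive impedance (ohms)
    print(f"{f / 1e9:4.0f} GHz -> |Z_L| ~= {z:5.1f} ohms")

# At 5 GHz, 1 nH already presents ~31 ohms, comparable to on-chip trace
# impedances, so inductance can no longer be ignored in signal analysis.
```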

Fabrication advances
With every new node, the performance characteristics of the digital circuitry have improved. Area goes down, performance goes up, power goes down, capacitance goes down. However, the same is not true for analog. Each new node is usually associated with a voltage reduction, which hurts analog because it decreases the noise margin. Variation hits analog circuits much harder than digital. FinFETs create limitations for analog. The list goes on.
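
To see why voltage scaling hurts analog, consider dynamic range. If the noise floor does not shrink along with the supply, reducing the signal swing directly reduces the usable dynamic range. The sketch below uses assumed, illustrative numbers (a fixed 1mV noise floor and 80% of the supply as usable swing):

```python
# Illustrative only: dynamic range DR = 20*log10(Vswing / Vnoise) vs. supply.
# The noise floor and usable-swing fraction are assumptions for this sketch.
import math

V_NOISE = 1e-3  # assumed ~1 mV effective noise floor, held constant

for v_supply in (1.8, 1.2, 0.8, 0.5):  # representative supplies across nodes
    v_swing = 0.8 * v_supply           # assume 80% of supply is usable swing
    dr_db = 20 * math.log10(v_swing / V_NOISE)
    print(f"{v_supply:.1f} V supply -> dynamic range ~= {dr_db:.0f} dB")
```

Each supply reduction shaves several dB off the budget, which the analog designer must claw back with area, power, or calibration.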

This has led to analog having to make compromises. “If you’re fabricating everything on a single die, let’s say 12nm, then analog needs to move to the same process node,” says Sumit Vishwakarma, product manager at Siemens EDA. “You are forced to lose analog performance. As the performance of analog starts to deteriorate at lower technology nodes, it needs assistance. That is why we are seeing an influx of digital-assisted analog design.”

When analog and digital are decoupled, and a suitable technology utilized, the analog circuits are not hamstrung. “We can design analog circuits that provide equivalent, or even better, functions in some cases than digital, and we can do it in older nodes,” says Tim Vang, vice president of marketing and applications for Semtech’s Signal Integrity Solutions Group. “The cost can be lower because we don’t need all the digital functions, so the die sizes can be smaller. We can reduce power because we don’t have as much functionality.”

Analog also can utilize more manufacturing technologies. “There is a limit to how much you can get out of any process node, even in analog,” adds Vang. “We can do things even in 65nm if you want to use CMOS. We also use other processes like BiCMOS, or silicon germanium. They even can be better-suited for interfacing with optics. Optics often likes to have signaling represented as a current, rather than a voltage, and bipolar is very good at driving those currents.”

And with chiplets gaining more attention, making these technology decisions adds a lot more flexibility. “A chiplet approach, or a heterogeneous approach to integrating logic or capabilities, makes a lot of sense,” says Tim Vehling, senior vice president of product and business development at Mythic. “We could in theory remain on 40nm or 28nm for the analog computation part. Then you could match that up with a digital chiplet that has processors and memory and I/Os, and that could be in 10nm. And they could be integrated into a single package or a single stacked architecture. With the advent of chiplets, analog has a long life.”

It also creates advantages for optics. “In the IEEE, and other standards groups, they use words like co-packaged optics or onboard optics, and it’s all about bringing the optical interconnects closer to the switches and the CPUs,” says Vang. “This is primarily to save the power used to drive across the board to the optics that would sit at the chassis front. These are the pluggable modules that are used today. That power burn at high speeds is enough that they’ve been pushing to bring the analog optics closer and closer to the digital switches on the board. We see that as a big opportunity, and effectively they will function like chiplets with optical I/Os to the world.”

Latency is a performance metric that presents difficulties for digital. “We run our analog engines at a fraction of the speed of a digital engine,” says Vehling. “We run in the megahertz range, not the gigahertz range. Digital architectures struggle with latency because of data movement. With an analog solution, the weights are stationary, the compute is inside the element itself. From a latency point of view, we’re faster than a digital architecture, even at that megahertz range.”

This has significant benefit for communications systems. “The signal basically has time of flight through the chip,” says Vang. “There’s no A-to-D conversion, digital processing, then D-to-A at the other end. The solution is essentially zero latency, or near zero latency. If you’re talking about an interconnect from New York to Los Angeles, that latency isn’t so important, but if you’re trying to go a few meters within a data center, that latency saving is significant. For supercomputer users, analog has some unique advantages: cost, power, and latency.”
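
A rough latency model makes the point. The figures below are hypothetical placeholders, not measured numbers, but they show how conversion overhead and per-layer weight fetches dominate a digital pipeline while an analog path is close to time-of-flight:

```python
# Rough latency model. All numbers are hypothetical, for intuition only.
ADC_NS, DAC_NS = 50.0, 50.0  # assumed conversion latencies at each end
FETCH_NS = 100.0             # assumed weight fetch + compute time per layer
LAYERS = 20                  # assumed network depth

digital_ns = ADC_NS + LAYERS * FETCH_NS + DAC_NS  # conversions + data movement
analog_ns = 5.0              # assumed settling/propagation through analog array

print(f"digital pipeline: ~{digital_ns:.0f} ns")
print(f"analog pipeline:  ~{analog_ns:.0f} ns")
```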

Changing requirements from AI
The digital world is very exact, predictable, and deterministic. Those requirements have worked against analog, but that is changing. “With AI, accuracy is dependent on the models,” says Vehling. “Depending on which model they choose, the accuracy will change. If you choose a bigger model, it will have better accuracy. Smaller models have lower accuracy. If you choose different precision, you’ll have different accuracy. If you choose different resolution, your accuracy will change. If you have a different data set or it is trained differently, your accuracy will change. We see people who will prune a model because they want to make it fit better. If you prune it, you reduce the accuracy. There are many ways in which the accuracy of a model for a given application can vary in a digital system — maybe not the same way that it varies in an analog system, but there’s definitely variability today. There is a wide range of variation in AI model accuracy in any situation, let alone digital versus analog.”
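
The effect is easy to demonstrate. The sketch below, on synthetic data, quantizes and prunes the weights of a toy linear model and measures how far the outputs drift from the full-precision reference:

```python
# Sketch (synthetic data, illustrative only): weight precision and pruning
# both perturb a model's outputs, so accuracy in "exact" digital systems
# already varies with these design choices.
import numpy as np

rng = np.random.default_rng(1)
w = rng.normal(size=256)        # reference weights of a toy linear model
X = rng.normal(size=(100, 256)) # a batch of synthetic inputs
ref = X @ w                     # full-precision outputs

def quantize(v, bits):
    """Uniform symmetric quantization to 2**bits levels."""
    scale = (2 ** (bits - 1) - 1) / np.max(np.abs(v))
    return np.round(v * scale) / scale

for bits in (8, 4, 2):
    err = np.linalg.norm(X @ quantize(w, bits) - ref) / np.linalg.norm(ref)
    print(f"{bits}-bit weights: relative output error {err:.1%}")

# Pruning: zero out the 50% smallest-magnitude weights.
pruned = np.where(np.abs(w) > np.median(np.abs(w)), w, 0.0)
err = np.linalg.norm(X @ pruned - ref) / np.linalg.norm(ref)
print(f"50% pruned:    relative output error {err:.1%}")
```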

At the heart of any AI system are the multiply/accumulate functions (see figure 2). “The amount of energy consumed by performing these MAC operations is humongous,” says Siemens’ Vishwakarma. “Part of the reason is that neural networks have weights, and these weights need to sit in memory. They have to keep accessing the memory, which is a very energy-consuming task. If you compare the power of compute versus data transfer, compute is almost one-tenth of it. To solve this problem, companies and university researchers are looking into analog computing, which stores the weights in flash memory. In-memory computing is the common term for architectures where the weights are stored in the memory. Now I just have to feed in some input and get an output, which is basically a multiplication of those weights with my inputs.”

Fig. 2: Analog circuits implement the MAC function. Source: Mythic

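The idea in figure 2 can be captured in a few lines. In the idealized sketch below, weights are stored as conductances, inputs arrive as voltages, and Ohm's law plus Kirchhoff's current law sum the products on each output column, so the multiply/accumulate happens in the array itself (device noise and non-linearity are ignored):

```python
# Idealized analog in-memory MAC: weights live as conductances G[i][j];
# inputs are applied as voltages V[i]; each output column's current is
# sum_i V[i] * G[i][j] (Ohm's law + Kirchhoff's current law), so the
# multiply/accumulate happens in the array itself.
import numpy as np

rng = np.random.default_rng(0)
G = rng.uniform(0.0, 1.0, size=(4, 3))  # conductance array = weight matrix
V = rng.uniform(0.0, 1.0, size=4)       # input voltages

I_out = V @ G                           # column currents = MAC results
print(I_out)                            # identical math to a digital dot product
```
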
Other architectural tradeoffs can be made. “You see spiking neural networks being used to detect time-based change, and then things can be deployed in combination,” says Vehling. “You’ll see maybe a spiking neural network being deployed on a sensor level to detect a change or a movement. Once that detection occurs, you shift to a more detailed, or more precise, model to identify the object. So you’re already starting to see the tiered approach of deploying AI into the industry.”
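
A minimal sketch of that tiered approach, with placeholder functions standing in for the low-power spiking detector and the heavier, more precise classifier:

```python
# Tiered inference sketch: a cheap always-on detector gates a heavy model,
# so the expensive classifier runs only when something actually changes.
# Both models and the threshold are placeholders, not real networks.
import numpy as np

def cheap_change_detector(frame, prev, thresh=0.1):
    """Stand-in for a low-power, always-on spiking/motion detector."""
    return float(np.mean(np.abs(frame - prev))) > thresh

def heavy_classifier(frame):
    """Stand-in for the larger, more precise model; runs only on detections."""
    return int(np.argmax(frame.sum(axis=0)))  # placeholder "label"

rng = np.random.default_rng(4)
prev = rng.random((8, 8))
for t in range(6):
    frame = prev + rng.normal(0, 0.02, size=(8, 8))  # sensor noise only
    if t == 3:
        frame += rng.random((8, 8))                  # simulate a real event
    if cheap_change_detector(frame, prev):
        print(f"t={t}: change detected -> label {heavy_classifier(frame)}")
    else:
        print(f"t={t}: idle (heavy model not invoked)")
    prev = frame
```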

But there are obstacles. “In principle, an all-analog solution should be much more power-efficient,” says Mo Faisal, president and CEO of Movellus. “But it isn’t easy to achieve the promise of analog efficiency in a mixed design that is predominately digital. For most companies, analog is challenging because it doesn’t scale with smaller geometries and is disappointing in yield, performance, and scalability. However, analog continues to show promise and potential for the select few.”

Mixing means converters. “When you want to plug analog infrastructure into a digital world, you need converters,” says Vishwakarma. “You need DACs at the input, and you need ADCs at the output. So that’s how the connection comes in where the analog will fit into the digital world, because we need analog only to solve that compute-intensive MAC operation. But the rest of the world is digital.”
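
Schematically, the analog MAC core is wrapped in converters. The sketch below models ideal DACs and ADCs (the bit widths and output scaling are assumptions for illustration), showing where quantization enters at each boundary:

```python
# Sketch of the conversion boundary: a digital input is DAC'd to analog,
# the analog array performs the MAC, and the result is ADC'd back.
# Converters are ideal; bit widths and scaling are assumed values.
import numpy as np

def dac(codes, bits, v_ref=1.0):
    """Ideal DAC: integer codes -> voltages in [0, v_ref)."""
    return codes.astype(float) / (2 ** bits) * v_ref

def adc(volts, bits, v_ref=1.0):
    """Ideal ADC: voltages -> integer codes, clipped to full scale."""
    return np.clip(np.round(volts / v_ref * (2 ** bits - 1)), 0, 2 ** bits - 1)

rng = np.random.default_rng(2)
G = rng.uniform(0.0, 1e-3, size=(8, 4))  # conductances (stored weights)
x_codes = rng.integers(0, 256, size=8)   # 8-bit digital inputs

v_in = dac(x_codes, bits=8)              # digital -> analog
i_out = v_in @ G                         # analog MAC (ideal, noiseless)
y_codes = adc(i_out / i_out.max(), bits=10)  # analog -> digital (normalized)
print(y_codes)
```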

This is where the system-level tradeoffs have to be considered. “An optimized analog core can significantly reduce power consumption and improve throughput, but it will need those conversions,” says Fraunhofer’s Prautsch. “Whether or not the conversion diminishes the benefit of the analog replacement is a system-level decision that needs to be analyzed by means of modeling and optimization of the concept.”

Analog in a digital world
Could the problem of converters be overcome? “If you could make digital work in an analog world, we would be significantly more efficient than the other way around,” says Vehling. “Instead of having an analog compute engine that’s buried in a digital processor, it would be ideal if we could actually have a native analog processor taking native analog signals off the sensors as opposed to converted ones. This would provide significant improvements in power efficiency, performance, and latency.”

Not everyone is convinced this is the right direction. “Moving compute-related functions to the analog domain can absolutely deliver excellent performance and energy efficiency,” says Scott Hanson, CTO and founder of Ambiq. “Several innovative startup companies have shown outstanding results and have developed expertise here. However, the inherent challenges of analog compute (e.g., poor node scalability, long design times, inflexibility across different compute problems, etc.) make this a domain where only a few very specialized experts can be successful.”

Instead, Hanson looks at the continued improvements coming in implementation technologies, and the fact that migrating designs across the nodes is fairly easy. “There are also other complementary technologies like sub-threshold and near-threshold computing. The combination of process node scaling and sub-threshold and near-threshold computing provides huge headroom for exciting new AI functions, all without the complexities of analog-based compute. In short, we’re betting on digital compute.”

Analog training
For analog to become the dominant engine for AI, it has to penetrate training as well as inferencing. “If you could take the raw signal from your lidar sensor, your radar, your CMOS image sensor, and rather than convert it to digital and then back, take the raw inputs and feed those into an analog array, the gains would be tremendous,” says Vehling. “But you’d have to train the system to recognize data in an analog fashion. That is the future of analog computers, a true analog system. In the meantime, we’re handicapped or constrained a little bit with having to fit into the digital architecture.”

There are challenges that would have to be overcome. “One of the challenges with analog signals is that there is no limit to how an analog signal is presented,” says Siemens’ Vishwakarma. “AI is good at recognizing patterns, but we cannot just give it a continuous signal. It needs to be made discrete and quantized. To train a model, you are iteratively updating the weights until it settles to the weights that will be used for inference. Then I can keep the weights in a non-volatile memory. However, we cannot change the value of the weights in analog, the value of the resistors in the flash memory. Once you load them, it is there. If you need to change the weights, you need random access memory like a DRAM, and that is where we have the problem.”
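
The sketch below illustrates the point on a toy linear model: gradient descent rewrites every weight on every iteration, which demands RAM-like storage; only the final, settled weights could be frozen into flash-style conductances:

```python
# Why training is hard with write-once analog weights: gradient descent
# updates ALL weights each step, so it needs rewritable (RAM-like) storage.
# Flash-style conductances are programmed once, for inference only.
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(64, 8))
y = X @ rng.normal(size=8)   # synthetic linear target

w = np.zeros(8)              # trainable weights: must be rewritable
for step in range(200):      # every iteration rewrites every weight
    grad = 2 * X.T @ (X @ w - y) / len(X)
    w -= 0.05 * grad         # this update is impossible in write-once flash

flash_w = w.copy()           # only now would weights be frozen into flash
print("final training loss:", float(np.mean((X @ flash_w - y) ** 2)))
```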

Conclusion
There are things analog can do better than digital, but the big problem is how to integrate them so they produce the desired gains at the system level. However, the decoupling provided by heterogeneous implementation technologies for each sub-system might make it easier for analog compute to be considered for an increasing number of functions. Analog could then provide superior performance at lower cost points.

If analog compute does become more commonplace, new memory technologies may well be researched and developed that would enable analog AI. That could provide orders-of-magnitude gains. Or, to misquote Mark Twain, “The demise of analog is greatly exaggerated.”



