Analog: Avoid Or Embrace?

Data converters are required whenever you move between the analog and digital domains, and they present both challenges and opportunities.

We live in an analog world, but digital processing has proven quicker, cheaper and easier.

Moving digital data around is possible only so long as the physics of wires can be abstracted away enough to provide reliable communications. In modern systems, as soon as a signal passes off-chip, the analog domain reasserts control. Each of those transitions requires a data converter.

The need for data converters is obvious when talking about sensors and actuators, but they are found in many other places, as well: handling RF signals for wireless communications, embedded in the SerDes that provide wireline communications, digitizing the outputs of on-chip PVT monitors, and even, in simple form, between voltage domains.

Newer technologies present increasing challenges for data converters, while emerging applications, such as the sensor arrays for autonomous driving, may cause some established practices to be reconsidered. Even the artificial intelligence (AI) community is considering how data converters and analog computing could be used in place of power-hungry digital multiply/accumulate functions.

Wide variety of needs
Data converters, like so many other basic components, are architected and built for very exacting demands. “The amount of accuracy necessary — and the performance required — for the power that you can tolerate will determine what the system can do,” says Art Schaldenbrand, senior product manager at Cadence. “For an automotive LiDAR system, the data converter may well be the bottleneck.”

At the same time, automotive video feeds may not push the limits. “For this application, garden-variety analog-to-digital converters (ADCs) with lower resolution may be sufficient,” says Mick Tegethoff, director of product management and marketing for Mentor, a Siemens Business. “The types of ADC where we see the biggest pain are the ones used in wireless communications, where you need to get an analog signal into the transceiver. This is one of the large challenges for 5G and for the ecosystem around automotive.”

ADCs are everywhere. “If you consider communications within a system and you are using wireline communications, you might have an ADC integrated inside the SerDes,” adds Cadence’s Schaldenbrand. “In that case you will be using an advanced node, and the ADC will be built in the same process as the one that the digital circuitry is using.”

The industry is seeing a push for higher-performance, lower-power solutions. “This requires implementation in smaller technology nodes,” says Joao Marques, director and engineering design centre manager for Adesto Technologies. “This means lower supply voltages, which creates significant challenges for high-performance analog design. Digital blocks can work faster in smaller technology nodes while keeping or even increasing their performance, but the analog blocks’ performance is directly related to the voltage headroom, which is limited by the supply rails.”
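Some rough numbers illustrate the headroom squeeze. The sketch below assumes a fixed 100µV RMS noise floor that does not shrink with the supply, plus hypothetical supply/swing pairs; it shows how a smaller full-scale swing directly costs SNR, and therefore effective bits:

```python
import numpy as np

def snr_db(full_scale_vpp, noise_rms):
    """SNR of a full-scale sine against a fixed RMS noise floor."""
    signal_rms = full_scale_vpp / (2 * np.sqrt(2))  # sine amplitude -> RMS
    return 20 * np.log10(signal_rms / noise_rms)

def enob(snr):
    """Effective number of bits: ENOB = (SNR - 1.76) / 6.02."""
    return (snr - 1.76) / 6.02

noise = 100e-6  # assume a 100uV RMS noise floor, held constant across nodes
for vdd, swing in [(1.8, 1.6), (0.9, 0.7)]:  # hypothetical supply/swing pairs
    s = snr_db(swing, noise)
    print(f"VDD={vdd}V, swing={swing}Vpp: SNR={s:.1f} dB, ENOB={enob(s):.1f} bits")
```

Halving the swing against the same noise floor costs roughly 6 dB, about one effective bit, which is why every millivolt of headroom matters.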

Pushing the speed limit
Serial communications push a lot of limits. “A lot of analog content is associated with getting data on and off chip,” says Bob Lefferts, a Synopsys fellow. “It could be a PCI Express PHY with a decision feedback equalizer and auto-calibration. They are going fast enough that it gets hard to literally move the data from one place to another through the medium, which is FR4 and absorbs high-frequency signals much faster than low-frequency ones. The IP has become a lot more sophisticated, with calibration. Regulators are often built in, because you are sending data at 20X the rate of the data on the chip. Transmission is serial rather than parallel, so you are susceptible to jitter, which is often induced by power spikes. You want regulators that you can trust, and low-noise circuits that don’t pick up a lot of power supply noise generated by the digital, which doesn’t care. That includes a lot of analog content, including op-amps in the regulators.”

When the analog signal arrives at the destination, the digital content has to be recovered. “There is no ADC by itself that can satisfy the requirements of the design,” says Schaldenbrand. “Consider a 112Gb/s SerDes. For proper sampling, you have to be at least at the Nyquist rate, which is twice the transmission frequency. So if you look at those designs, they typically are using technologies called time-interleaving. So I don’t have one ADC, I have many, and they work in parallel to get the necessary performance. That puts extra demands on calibration and making sure that each of them behaves the same, so you don’t see digital errors.”
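A minimal sketch of the time-interleaving idea, with illustrative rates and mismatch values rather than anything from a real 112Gbps design: four sub-ADCs sample round-robin, and their gain and offset mismatch shows up as spurs in the spectrum, which is exactly what the calibration has to remove.

```python
import numpy as np

fs = 8e9          # aggregate sample rate; illustrative, not an actual 112G design
n_ch = 4          # interleaved sub-ADCs, each running at fs / n_ch
n = 4096
fin = 401 * fs / n                  # coherent input tone (exactly 401 FFT bins)
t = np.arange(n) / fs
x = np.sin(2 * np.pi * fin * t)

# Per-channel gain and offset mismatch: the errors calibration must remove
gain = 1 + np.array([0.00, 0.01, -0.008, 0.005])
offset = np.array([0.0, 2e-3, -1e-3, 1.5e-3])

ch = np.arange(n) % n_ch            # round-robin: sample k hits sub-ADC k mod n_ch
y = gain[ch] * x + offset[ch]       # mismatch modulates the signal at fs/n_ch

spec = np.abs(np.fft.rfft(y * np.hanning(n)))
sig = np.argmax(spec)
spur = np.max(np.delete(spec, np.arange(sig - 3, sig + 4)))
print(f"worst interleave spur: {20 * np.log10(spur / spec[sig]):.1f} dBc")
```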

Pushing the technology limit
Many designs continue to follow Moore’s law. “The decreasing size of the process nodes causes a lot of problems for analog designers,” says Schaldenbrand. “That has been getting worse because of the number of parasitics that have to be included in a model to make it predictive, which results in much slower simulation times.”

But analog designers are creative. “Converter technology has been evolving to address this issue,” says Adesto’s Marques. “One such solution is digital-assisted calibration. Instead of trying to push analog performance beyond the limits of the technology, the solution is to design a lower-performance analog block, which is then enhanced by digital-assisted calibration. Such a solution can significantly improve state-of-the-art efficiency and can even allow reuse of old architectures that previously were limited to low-performance applications only.”
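A toy version of foreground digital-assisted calibration illustrates the principle Marques describes: measure a deliberately imperfect ADC at two known reference voltages, solve for a linear correction in the digital domain, and apply it to every subsequent conversion. All names and error values here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def raw_adc(v, gain=0.96, offset=12.0, bits=10, vref=1.0):
    """A deliberately imperfect ADC: gain and offset errors baked in (illustrative)."""
    code = (v / vref) * (2**bits - 1) * gain + offset
    return np.clip(np.round(code), 0, 2**bits - 1)

def ideal(v):
    """The transfer function a perfect 10-bit converter would have."""
    return v * 1023

# Foreground calibration: drive two known reference voltages, read the codes,
# and solve digitally for a linear correction code' = a*code + b.
v_lo, v_hi = 0.1, 0.9
c_lo, c_hi = raw_adc(v_lo), raw_adc(v_hi)
a = (ideal(v_hi) - ideal(v_lo)) / (c_hi - c_lo)
b = ideal(v_lo) - a * c_lo

v_test = rng.uniform(0.1, 0.9, 5)
corrected = a * raw_adc(v_test) + b
print("max residual error (LSB):", np.max(np.abs(corrected - ideal(v_test))))
```

The analog block only has to be linear and stable; the digital logic absorbs the gain and offset errors, which is why the approach scales well into small nodes.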

These nodes enable a lot of digital processing in a very small area. “We often rely on digital techniques to assist analog,” says Sunil Bhardwaj, senior director for Rambus’ India design center. “We use it to deal with variation, to deal with a variety of configurations, to deal with calibration and training – which is best done in digital. Designs are becoming mixed-signal in this respect, and a lot of analog has digital components to calibrate analog and to improve their robustness.”

The downside of these techniques is that you need to run even more simulations. “When you try to run the simulation of these circuits, which are large from a SPICE perspective, and then you add all of the parasitics, all of the Rs and Cs, it has a significant impact on performance,” says Mentor’s Tegethoff. “You end up with huge netlists and very long simulations. Then there is an additional problem. If you want to look at ADC noise, you have to go into the frequency domain. You have to look at the power spectral density. That is another class of challenges.”

Schaldenbrand agrees. “You want to know frequency domain information, but the only way to get that is to do a time domain simulation and then do an FFT. Those are long simulations that must have high accuracy. The accuracy has to be higher than that of the data converter, so if you are doing a 10-bit ADC, you might be able to use relatively loose tolerances for simulation. But if you are working on 16 or 20 bits, that means the simulation tolerances become a lot tighter. So there is an acceleration of the complexity factor. The models are more complex, the operation is more complex, and the accuracy demands are greater, which means that simulation time becomes an issue.”
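The flow Schaldenbrand describes reduces to this: produce a time-domain record of the converter digitizing a near-full-scale sine, take an FFT, and extract SNDR and ENOB. A minimal sketch using an ideal 12-bit quantizer and coherent sampling in place of an actual transient simulation:

```python
import numpy as np

n, bits = 4096, 12
fs = 100e6
cycles = 127                        # integer, coprime with n -> coherent sampling
fin = cycles * fs / n
t = np.arange(n) / fs
x = 0.49 * np.sin(2 * np.pi * fin * t) + 0.5   # near full-scale sine on [0, 1]
code = np.round(x * (2**bits - 1))             # ideal quantizer (illustrative)

spec = np.abs(np.fft.rfft(code - code.mean()))**2
sig_bin = np.argmax(spec)                      # the input tone lands in one bin
signal = spec[sig_bin]
noise = spec.sum() - signal                    # everything else is noise+distortion
sndr = 10 * np.log10(signal / noise)
print(f"SNDR = {sndr:.1f} dB, ENOB = {(sndr - 1.76) / 6.02:.2f} bits")
```

To resolve a 20-bit converter the same way, the simulator's numerical noise must sit well below one part in a million, which is what drives the tolerance tightening he mentions.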

Noise
The transition to finFETs made noise a significant issue, in part because of self-heating. “You need to include the impact of things like device noise, which is the random noise of the devices themselves,” says Tegethoff. “When looking at high-accuracy, high-bit-count ADCs, quantization noise is a well-known phenomenon. As you quantize the data, it introduces error. Architectures and techniques exist to minimize this quantization noise. In 7nm and finer geometries, the device noise becomes a more stringent limiter on the accuracy and the noise performance of the circuit. Designers can manage the noise through their design techniques, but if they do not understand what the device noise is doing, they will not have the accuracy or the noise performance they require.”
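The interplay Tegethoff describes can be seen with a toy model: start from an ideal quantizer, whose error is bounded by quantization alone, then add Gaussian device noise and watch the effective resolution fall. The noise magnitudes below are arbitrary illustrations:

```python
import numpy as np

def enob_with_noise(bits, device_noise_lsb, n=100_000, seed=1):
    """ENOB of an ideal quantizer when random device noise (in LSB RMS) is added."""
    rng = np.random.default_rng(seed)
    lsb = 1.0 / 2**bits
    x = rng.uniform(0, 1, n)                         # full-scale input samples
    y = np.round((x + rng.normal(0, device_noise_lsb * lsb, n)) / lsb) * lsb
    err = y - x                                      # quantization + device noise
    snr = 10 * np.log10((1 / 8) / np.mean(err**2))   # vs. full-scale sine power
    return (snr - 1.76) / 6.02

for sigma in [0.0, 0.5, 1.0, 2.0]:                   # device noise in LSB RMS
    print(f"device noise {sigma:.1f} LSB -> ENOB ~ {enob_with_noise(12, sigma):.2f} bits")
```

Once the device noise reaches about one LSB, it, not the quantizer, sets the converter's accuracy, which is exactly the regime advanced nodes push designs into.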

Thermal increasingly is becoming an issue for many designs. “Both IoT and automotive applications require high accuracy temperature sensing,” says Oliver King, CTO for Moortec. “In the IoT case, it is to minimize self-heating and wasting power. In the automotive case, it is to reduce operational temperature, leakage and therefore increase device reliability and operational lifetime of the product.”

Traditionally, SPICE has not dealt with thermal impacts. “It is an expensive simulation to run because you run one simulation to calculate what the temperature rise will be, and then a second simulation to actually include that temperature rise in the simulation,” says Schaldenbrand. “You might have two transistors that are identically sized. One is placed as a power control switch, so it is not very active, and another transistor is driving an off-chip load at high frequency. Their temperatures are going to be very different.”


Fig 1: Transient noise versus silicon measurement. Source: Mentor, a Siemens Business

This is an area where tools are still developing. “We see the need for electro-thermal simulation in automotive,” says Tegethoff. “That is a way to do an electrical simulation with SPICE while running a co-simulation with a thermal solver. That is working out the thermal conditions around the circuit and in the transistors themselves at every time step and updating that information on a per device basis. So if you have an automotive module that has a large driver transistor that is handling a lot of power, not only do you need to worry about the power dissipation of the device itself, but also its proximity to high-precision circuitry where the power that turns into heat in the neighboring devices becomes a concern. This situation happens a lot in automotive, where you may have devices driving servos or other actuators that take a lot of power.”
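In miniature, such a co-simulation is a relaxation loop between an electrical solve and a thermal solve. The sketch below uses a made-up two-device thermal resistance matrix, where the off-diagonal terms capture exactly the neighbor-heating effect Tegethoff describes; none of the numbers come from a real PDK.

```python
import numpy as np

# Toy electro-thermal relaxation for two devices: a power driver and a nearby
# precision device. All values are illustrative.
r_th = np.array([[80.0, 15.0],     # self- and mutual thermal resistance (K/W);
                 [15.0, 120.0]])   # off-diagonal terms model heat coupling

def electrical_power(temps):
    """Stand-in for the electrical solve: power as a weak function of temperature."""
    driver = 0.50 * (1 + 0.002 * (temps[0] - 25))   # big driver, ~0.5 W
    precision = 0.002                               # precision block, ~2 mW
    return np.array([driver, precision])

t = np.array([25.0, 25.0])          # start at ambient
for _ in range(50):                 # relax until temperatures stop moving
    p = electrical_power(t)
    t_new = 25.0 + r_th @ p         # thermal solve: ambient + R_th * power
    if np.max(np.abs(t_new - t)) < 1e-3:
        break
    t = t_new

print(f"driver: {t[0]:.1f} C, precision device: {t[1]:.1f} C (heated by its neighbor)")
```

A production tool does this per time step and per device, but the structure, electrical solve feeding a thermal solve and back again, is the same.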

Reliability and aging
Automotive adds other design requirements. “Another pressure being placed on converters is coming from emerging markets like AI and self-driving automotive,” says Marques. “For many of these applications failure is not an option since human lives can be in danger. Reliability, robustness and durability are key parameters during design and fabrication. Foundries set more restricted technology rules to improve reliability, which makes the design more challenging. The design must include redundancy, self-testing and large margins to ensure accurate performance over an extended range of environmental conditions.”

“Automotive companies want to perform SPICE simulations of the behavior of the circuit after 10 years of field operation,” says Tegethoff. “Modeling and flows are emerging for this. You simulate the circuit under new conditions and then run an analysis to get the power and current bias on the devices. Then you need a reliability model that says if you run X number of years with this kind of bias, this kind of current, the parameters of the device will degrade in this manner. Then for each device, you degrade the model and you rerun that simulation and see the impact of degradation on the system.”
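The degrade-and-rerun flow can be caricatured in a few lines. Everything below, including the square-law device stand-in and the NBTI-style power-law shift, is illustrative rather than any foundry's actual reliability model:

```python
# Toy version of the degrade-and-rerun flow: simulate fresh, derive the stress,
# age the device model, then re-simulate with the degraded parameters.
def simulate(vth):
    """Stand-in for a SPICE run: returns (drive current, gate overdrive)."""
    vgs = 0.9
    ids = 1e-3 * max(vgs - vth, 0) ** 2        # square-law placeholder model
    return ids, vgs - vth

def aged_vth(vth0, overdrive, years, k=8e-3, n=0.25):
    """Illustrative NBTI-style power-law shift: dVth = k * stress * t^n."""
    return vth0 + k * overdrive * years ** n

vth = 0.35
ids_fresh, ov = simulate(vth)                  # pass 1: fresh circuit, get stress
vth_aged = aged_vth(vth, ov, years=10)         # apply the reliability model
ids_aged, _ = simulate(vth_aged)               # pass 2: rerun with aged devices
print(f"Vth: {vth:.3f} -> {vth_aged:.3f} V, "
      f"drive current drop: {100 * (1 - ids_aged / ids_fresh):.1f}%")
```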

The Compact Model Coalition (CMC) within Si2 has released a standard approach to this reliability modeling, and it is helping to drive the modeling and analysis across different foundries and processes.

Test and control
Test is another interesting problem when dealing with converters. “Typically, you could stick it on an analog tester and look at the stuff coming out, but the tester also has to be fast,” says Synopsys’ Lefferts. “For example, we have to be able to calibrate the drivers for process and temperature variation so they maintain a tight tolerance to the 50-ohm termination. That is so the signal integrity doesn’t degrade, because the transmit rates are fast enough that you get reflections if you do not match the 50 ohms. That calibration requires some digital circuitry. We build in an ADC, which is part of the IP and is shared across all of the ports for calibration of the termination. We use that ADC to make analog measurements on signals within the design. So if we have a regulator that is supposed to be generating a copy of the core supply, we can measure the voltage, convert it, and read it out through the digital scan. This becomes part of the built-in self-test.”
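The termination calibration Lefferts describes amounts to a feedback loop: compare the on-chip resistance against an external reference through the built-in ADC, and binary-search the trim code. A toy model, with invented resistance and trim-step values:

```python
# Toy termination calibration: adjust a digital trim code until the on-chip
# resistor, measured through the built-in ADC, matches an external 50-ohm
# reference. The resistance model and step size are illustrative.

def on_chip_r(code):
    """Trimmed termination resistance (ohms) vs. trim code (illustrative)."""
    return 65.0 - 0.5 * code          # process-skewed, trimmed in 0.5-ohm steps

def adc_read_ratio(code, r_ext=50.0):
    """The ADC reads the divider formed by the external reference and the
    on-chip leg: 0.5 means the two resistances match."""
    r = on_chip_r(code)
    return r / (r + r_ext)

# Binary search for the code whose divider reading is closest to mid-scale
lo, hi = 0, 63
while lo < hi:
    mid = (lo + hi) // 2
    if adc_read_ratio(mid) > 0.5:     # on-chip leg still too large -> trim down
        lo = mid + 1
    else:
        hi = mid
print(f"calibrated code {lo}: R = {on_chip_r(lo):.1f} ohms")
```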

Calibration is important. “Sensors may have been inserted into the chip, such as temperature, process corner or performance sensors,” says Rambus’ Bhardwaj. “They are analog designs and need to be calibrated. A lot of designs have digital assist that allows them to calibrate and handle the variation.”

The more accurate the sensors, the more you can do with them. “By understanding and accurately measuring thermal, supply and process conditions deep within semiconductor devices, you are able to control and therefore reduce overall power consumption,” says Stephen Crosher, CEO for Moortec. “In addition, the use of higher-accuracy supply and temperature sensors allows for tighter voltage and thermal guard-banding, which means that you can increase the utilization of cores within a chip for given power and temperature conditions.”
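The guard-banding argument is simple arithmetic: the supply setpoint must cover the true minimum plus the sensor's uncertainty, and dynamic power scales roughly with the square of the voltage. A back-of-envelope sketch with illustrative numbers:

```python
# Guard-band math (all numbers illustrative): the supply must cover the true
# minimum plus the sensor's measurement uncertainty.
v_min_true = 0.80                     # voltage the logic actually needs (V)

for sensor_err_pct in (2.0, 0.5):     # coarse vs. accurate supply sensor
    guard = v_min_true * sensor_err_pct / 100
    v_set = v_min_true + guard
    power_rel = (v_set / v_min_true) ** 2   # dynamic power scales roughly as V^2
    print(f"+/-{sensor_err_pct}% sensor -> set {v_set:.3f} V, "
          f"dynamic power {100 * (power_rel - 1):.1f}% above ideal")
```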

The new frontier
AI may be one of the latest development areas, but quite a number of people are looking toward analog for the best solutions. “Inferencing can be done in analog,” says Youbok Lee, senior technical staff engineer at Microchip Technology. “Analog-dot-product is one example using analog filters and op-amps. You can compare two signals or mix them, and you can make a decision from the results. There are many cases where analog computation is much faster than digital computation. Keep in mind our brain is dealing with all analog signals.”

At the heart of most AI algorithms is the multiply/accumulate function. “We are talking about hundreds of multipliers operating together to do these huge matrix operations,” says Gideon Intrater, CTO at Adesto. “You potentially could take each 8-bit value, run it through a D2A, and do the computation in an analog fashion, where you just use Kirchhoff’s Law to do the multiply. It is not as exact or accurate as doing it in a digital manner, but for most cases it is good enough. And by doing that, vendors claim significantly faster operation and lower power.”

Ironically, today the solution is to take data from an analog sensor, convert it into digital, and then convert it back to analog to do the computation, because practical analog storage has not existed. “The very leading edge is to store the bits in memory as analog values,” adds Intrater. “Then you use the resistance of the non-volatile memory as the value that is stored in a weight, and drive current through that and use that to do the multiplication. This appears to be quite promising. These are really in-memory processors.”
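A toy model of such a crossbar makes the physics concrete: weights become conductances, inputs become voltages, Ohm's law does each multiply, and Kirchhoff's current law does the accumulate down each column. The sizes and the ~2% error figure below are illustrative:

```python
import numpy as np

# Toy resistive crossbar MAC: weights stored as conductances, inputs applied as
# voltages; each column current is a dot product (I = G*V, summed by KCL).
rng = np.random.default_rng(0)

weights = rng.uniform(0.0, 1.0, (8, 4))      # 8 inputs x 4 outputs
g = weights * 1e-6                           # map weights to conductances (S)
v = rng.uniform(0.0, 0.5, 8)                 # input activations as voltages (V)

i_exact = v @ g                              # ideal column currents
i_analog = i_exact * (1 + rng.normal(0, 0.02, 4))  # ~2% analog imprecision

digital = v @ weights                        # reference digital dot product
analog = i_analog / 1e-6                     # currents mapped back to weight units
print("digital:", np.round(digital, 3))
print("analog :", np.round(analog, 3))
```

The results differ by a few percent, which, as Intrater notes, is good enough for many inference workloads while avoiding hundreds of digital multipliers.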

Conclusion
The technology associated with data converters is evolving quite rapidly. The demands are changing, newer process nodes are adding additional complications, and any increase in accuracy that can be obtained can be used to improve total chip performance and power profiles.

Calibration is the only way those accuracy levels can be obtained, and automotive is pushing these circuits to be self-testable while in operation, something that has not yet been attained. But AI could be the latest comeback for analog, because in many applications the resolution of the calculations does not have to be that exact.


