In a world that favors digital circuitry, analog increasingly has had to cope with processes that have become less favorable to it. But that may be changing.
We live in an analog world, and yet digital has become the technology of choice. Mixed-signal solutions that used to contain significant amounts of analog, with just a small amount of digital signal processing, have migrated into systems where the analog-to-digital conversion happens at the very first opportunity.
There are several reasons for this, and some of them build upon themselves. Moore's Law applies to digital circuitry, not to analog. Transistors can be made smaller, and this benefits digital circuitry, but shrinking does not have the same impact on analog transistors. For analog, device characteristics often get worse as the transistor is made smaller. In a world where miniaturization has been the key to technological advancement, the analog portion cannot keep up and gets left behind.
It should come as no surprise that process technology has been optimized for digital, and this puts increasing pressure on the analog components that remain. Manufacturing process variation and parameter degradation over the product's lifetime are a lot more challenging in the analog world, which means far more analysis and more skillful design are required than for the digital parts.
Analog is still considered to be an art, and automation has not migrated into the tools the way it has for digital, meaning that analog productivity continues to fall farther behind. We are beginning to see chips where even the unintended analog content consumes a significant fraction of the SoC surface area, and designs where analog is both the long pole in the schedule and its riskiest aspect.
The irony is that as digital devices get smaller and chips get larger, several aspects of SoC design are beginning to look a lot more like analog problems. Clock and power distribution are quickly becoming analog problems, and the PHY circuitry that chips rely on to move data around a system is fundamentally analog.
These are just some of the reasons why Moore's Law is failing to deliver the total area, power and performance gains for chips that cannot go without analog content—and that basically means all chips. The lack of attention to analog is part of the reason why digital chips are now paying a price.
There are no arguments from the industry about the trends. “The leading advanced processes are very much designed for logic density and performance, and as such analog circuits have to live with the restrictions in design rules which this brings,” says Oliver King, CTO at Moortec. “It is also the case that the modeling of these processes is not optimized for analog design.”
Adds Jeff Miller, product marketing manager at Mentor, a Siemens Business: “It’s certainly true that the advanced process nodes at small feature sizes are designed to address the needs of large scale digital logic. Lower voltages, lower power, and lower cost per logic transistor are critical factors enabling Moore’s Law to continue for the digital side. However, speaking for analog design teams, the benefits to going to smaller and smaller feature sizes don’t translate. While there is certainly a lot of analog design going on using the finFET and multi-patterned process nodes at 16nm and below, this is generally to allow big digital and analog to coexist on the same die.”
Process technologies
There are signs that this situation may be changing as Moore’s Law slows down. “The companies creating process technologies have a triad of things that they care about,” says Ric Borges, product marketing manager for TCAD at Synopsys. “Cost is pretty important, and that has to be balanced with performance, power characteristics and reliability. Some applications, such as automotive and medical, are very stringent on reliability, while others are less so.”
Borges points out that there are a lot of analog processes that use larger feature sizes. “A lot of people are still fabricating in 180 and 130nm. Within that baseline there may be derivatives that address different power or voltage levels.”
This may require a different way of thinking about the problem. “Higher-voltage transistors are often not well optimized in size,” says Mathieu Sureau, director of IC engineering for Microsemi. “In some instances, the foundry may only offer a given voltage breakdown that is much higher than we need, which leaves us two options—we take it as is and face a size penalty, or we develop our own devices, which is not optimal as we then need to validate their reliability.”
Mixed-signal often has to utilize more modern processes to get the necessary digital density. “We are beginning to see process companies take a digital 28nm process and create derivatives,” says Tom Ferry, vice president of product marketing for Synopsys. “These are targeted towards specific designs that have more analog or power content than the traditional 28nm might have.”
Design rules for analog can contain additional complications. “In digital-focused process nodes, the design rules are primarily there to guarantee manufacturability and yield,” points out Miller. “In analog process technologies, there are often other design rules that capture many of the ‘analog effects’, such as well proximity effects, stress effects (due to STI and the like), and patterning variability effects. These have the net result of making the transistors larger than the minimum manufacturable size, trading off area for precision and/or matching. In other words, analog often emulates larger feature-size processes in the advanced nodes, further reducing the benefits of process scaling for the analog blocks.”
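To make the area-for-precision tradeoff Miller describes concrete: the widely used Pelgrom mismatch model says the standard deviation of threshold-voltage mismatch between two nominally identical transistors falls with the square root of gate area, sigma(delta Vt) ≈ A_VT / sqrt(W·L). The short Python sketch below works that out for a few matching targets; the A_VT coefficient used is purely an illustrative placeholder, not data for any real process.

```python
# Pelgrom mismatch model: sigma(delta_Vt) ~= A_VT / sqrt(W * L)
# A_VT is a process-dependent coefficient; the value below is only an
# illustrative placeholder, not a figure for any real node.
A_VT_MV_UM = 2.0  # mV*um, hypothetical

def area_for_matching(target_sigma_mv):
    """Gate area (um^2) needed so that sigma(delta_Vt) <= target (mV)."""
    return (A_VT_MV_UM / target_sigma_mv) ** 2

for target in (5.0, 2.0, 1.0, 0.5):  # allowed Vt mismatch in mV
    print(f"sigma <= {target:4.1f} mV  ->  area >= {area_for_matching(target):6.1f} um^2")
```

Halving the allowed mismatch quadruples the required device area, which is exactly the "larger than the minimum manufacturable size" behavior described above.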
But where problems exist, some see opportunities. “Guardring/latchup guidelines/PDK rules are usually poor or nonexistent for many devices,” points out Sureau. “This leaves room for design teams to possibly get an advantage, or at least a differentiation with the others, in how they overcome the issues in an optimized way.”
Synopsys sees increasing usage of TCAD technology to help foundries optimize and produce derivative process technologies. TCAD starts from a physical description of how the transistor is fabricated and builds a physical representation of the device. Once the physical structure is defined, it can move into device simulation to analyze performance. “It can also identify how to modify the fabrication process to be able to realize some device-level or circuit-level characteristic that I want to have in my product,” explains Borges. “This can all be done before any wafers have been created and can significantly narrow down the space to be explored. Then you may need to make some wafer runs to verify that the simulation is correct. That can be done much faster because so much has already been eliminated.”
Battling geometry
As we migrate to finFETs, the digital circuitry is again favored. “Designing a PLL for a 7nm finFET digital design is really hard,” states Ferry. “The analog design is hard. The finFET was mostly designed for digital.”
“FinFET is a tough place to do analog,” confirms Miller. “Designers are limited to a small number of device sizes, the interconnect parasitics tend to be much harder to work around, and there are many more layout-dependent effects that must be accounted for to achieve good matching between devices.”
There may be good news on the horizon as automotive becomes a larger consumer of semiconductors. “Through TCAD, process companies can figure out how well they would work for PLLs and other analog parts,” says Ferry. “As we have more analog content in chips going into automotive, where reliability becomes more important, we will have more flavors because there is a bigger market for them. There are more chips being designed today for automotive than there were five years ago. This makes it worth their while to invest more money so that they can capture more of that business. We need to balance the process to meet both the digital and analog needs of the integrated parts.”
Chips that require larger amounts of analog, and that include components for sensing, power management and imaging applications, are not in a rush to get to the latest nodes favored by digital. Many of these components require interaction with high voltages, are very sensitive to noise, and benefit from special device types and isolation techniques not available in standard logic processes. “This has led to the rise of the ‘More than Moore’ process nodes that specialize in analog capability,” says Miller. “These technologies are new process flavors, but at larger feature sizes (up to 180nm!), with support for bipolar transistors, high-voltage DMOS devices (some capable of handling over 100V!), and buried wells and other isolation strategies that allow high-accuracy analog to co-exist with noisy digital. When analog is the critical requirement for a design, we’re seeing a great many customers select these processes.”
Battling it out
Design techniques have been developed to help analog circuitry overcome some of these issues. Examples include post-production calibration and digital assistance of analog circuitry to dynamically adjust for variation. These do not come for free. One example of digital compensation is the pipelined ADC, where the correction adds computation overhead, the latency of the digital path makes it slower to compensate than a pure analog implementation, and total power consumption goes up.
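As a rough sketch of what digital assistance of an analog block looks like in practice, the Python fragment below models a simple foreground calibration: a toy converter with unknown gain and offset error measures two known reference levels, the correction coefficients are solved for, and every subsequent sample is corrected in the digital domain. The extra arithmetic per sample is the computation overhead mentioned above. The converter model and the numbers are assumptions for illustration, not a description of any particular pipelined ADC.

```python
# Minimal sketch of digitally assisted calibration: measure known reference
# levels, solve for the converter's gain and offset error, then correct
# every subsequent sample in the digital domain. All numbers are illustrative.

def adc(v, gain_err=1.03, offset=0.012):
    """Toy ADC model with static gain and offset error (volts in, volts out)."""
    return gain_err * v + offset

# Foreground calibration against two known references.
v_lo, v_hi = 0.0, 1.0
d_lo, d_hi = adc(v_lo), adc(v_hi)
gain_corr = (v_hi - v_lo) / (d_hi - d_lo)   # inverse of the measured gain
offset_corr = v_lo - gain_corr * d_lo        # removes the measured offset

def corrected(v):
    """Digitally corrected conversion; the extra arithmetic is the cost."""
    return gain_corr * adc(v) + offset_corr

for v in (0.25, 0.5, 0.75):
    print(f"in={v:.3f}  raw={adc(v):.4f}  corrected={corrected(v):.4f}")
```

However such correction is implemented, the cost structure is the same: extra digital computation, added latency and added power.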
There may be room to compromise on the technology node as well. “For mixed signal design, where there is substantial digital content but not enough to justify the jump to finFET, we’re seeing a large number of designs targeting 65nm as a nice middle ground,” observes Miller. “This has been especially true for designs that require some RF capability, such as those targeting the IoT edge device market.”
Reliability
Aging models have been developed for digital circuits, and not a lot of attention has been paid to analog/RF reliability over a product's lifetime. This may become a larger problem for automotive and medical applications, where the lifetime of the product has to be guaranteed. Many analog circuits rely on matching, which means that if two components do not age in the same way, additional problems are created. This may make more frequent recalibration necessary, which in turn leads to more complex designs. And if the device cannot be reconnected to a tester for calibration, that complexity has to be built into the chip or system itself.
Smaller geometries have more variability. “Because we are able to simulate the fabrication process in a great amount of detail, we can inject variability into the fabrication flow,” says Borges. “That is becoming a more important trend as scaling continues. Usually there are effects that become more important for analog applications relative to digital, such as device matching. You need a well-tuned process to realize some of the functions.”
Care has to be taken that the models do not create too much pessimism. “It is important to keep the sources of variation correlated because some sources of variability may actually cancel each other out to some degree,” he says.
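A minimal numerical illustration of that point, assuming a simple two-device model: if the variation of two nominally matched devices is sampled independently, the simulated mismatch comes out much wider than when the shared (correlated) part of the variation is modeled explicitly, because the common component cancels in the difference. The correlation value below is an arbitrary assumption.

```python
import random

random.seed(1)
N = 100_000
sigma = 10.0      # total per-device variation (illustrative units)
rho = 0.8         # assumed correlation between the two matched devices

mismatch_uncorr, mismatch_corr = [], []
for _ in range(N):
    # Fully independent sampling: correlation ignored (pessimistic).
    a = random.gauss(0, sigma)
    b = random.gauss(0, sigma)
    mismatch_uncorr.append(a - b)

    # Shared + local components: the shared part cancels in the difference.
    shared = random.gauss(0, sigma * rho ** 0.5)
    a = shared + random.gauss(0, sigma * (1 - rho) ** 0.5)
    b = shared + random.gauss(0, sigma * (1 - rho) ** 0.5)
    mismatch_corr.append(a - b)

def std(xs):
    m = sum(xs) / len(xs)
    return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5

print("sigma of mismatch, independent sampling:", round(std(mismatch_uncorr), 2))
print("sigma of mismatch, correlated sampling :", round(std(mismatch_corr), 2))
```

With the correlation ignored, the predicted mismatch in this toy case comes out roughly twice as large as it should be, which is the kind of pessimism Borges warns about.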
Conclusion
The industry’s long-time focus on digital has squeezed analog as much as possible, but analog will always be there. Today, the answer is to stay on larger nodes when the analog content is important, but additional effort by the foundries may lead to better compromises that allow digital and analog to be integrated without an unfair bias toward either. Automotive could well be the industry that drives this trend.
Related Stories
Devices Threatened By Analog Content?
With few measurable methods to assess analog quality, it’s not clear how that can impact safety-critical applications.
Analog’s Rising Status
Uptick in demand for low-power and power-sensitive designs brings new challenges and opportunities.
Can Analog & Digital Get Along Better?
Combining both in a mixed-signal design brings challenges in a different realm. Expertise is the key to success.