Mixed Messages For Mixed-Signal

Analog and mixed-signal content is adding risk to ASIC designs. Pessimists see the problem getting worse, while optimists point to AI and chiplets for relief.

Several years ago, analog and mixed-signal (AMS) content hit a wall. Its contribution to first-time chip failures doubled, and there is no evidence that anything has improved dramatically since then. Some expect the problem to get worse due to issues associated with advanced nodes, while others see hope for improvement coming from AI or chiplets.


Fig. 1: Cause of ASIC respins. Source: Siemens EDA

Analog and mixed-signal design has grown significantly more complex due to several converging trends. “The integration of digital assist logic into analog blocks and power management ICs has improved performance and adaptability, but introduced tight digital-analog co-design requirements,” says Rozalia Beica, field CTO for packaging technologies at Rapidus Design Solutions. “This shift demands hybrid verification environments that can handle both domains effectively. At advanced process nodes, increased variability and layout-dependent effects have made analog behavior more unpredictable, requiring broader simulation coverage and significantly higher compute resources. Additionally, AMS IPs are now deeply embedded within larger SoCs, such as AI accelerators, RF transceivers, and sensor interfaces. That is making hierarchical and system-level verification indispensable.”

Some specific technologies are contributing to this trend. “Driven by the demands of AI hardware and data-centric compute, architectures like AI factories have dramatically heightened the verification challenge,” says Karthik Koneru, principal product manager for circuit simulation at Synopsys. “At the core of this demand is high-bandwidth memory (HBM) technology, which features a stack of DRAM dies and a logic die with mixed-signal circuits, such as PHYs, enabling the massive data movement essential for high-bandwidth applications. These circuits bring together deeply intertwined analog and digital domains, making verification both broader in scope and more mission-critical.”

New nodes add to the challenges. “Verification times have increased with every new technology node,” says Björn Zeugmann, group manager for integrated sensor electronics at Fraunhofer IIS’ Engineering of Adaptive Systems Division. “This is caused by new design rules, growing design complexity, and schematics that are increasingly shaped by the layout. Parasitic effects are growing, and verifying the extracted netlist is becoming more and more important.”

It is not just high-performance computing feeling the strain. “Data rates are getting much higher,” says Chris Mueth, new opportunities business manager at Keysight. “For analog and radio frequency (RF) functionality, the frequencies are higher and the bandwidths are wider. This makes it more difficult to characterize in simulation and more difficult to test, because everything’s more sensitive. For 6G, which uses sub-terahertz frequencies, signals become tricky to model and simulate properly, and also very tricky to test properly. In addition, it’s not uncommon for an analog RF chip to have 1,000 requirements, where you have basic functional modes and you have different performances that need to be characterized.”

New transistor devices add to the uncertainty. “Another layer of complexity is added with the newer advanced node technologies incorporating finFET and GAAFET devices,” says Mahmoud ElBanna, general manager for Mixel in Egypt. “This introduces more complex device models and more unpredictable interconnect parasitics, causing a more than two-fold increase in netlist size. These factors increase verification times significantly.”

Unlike digital logic, analog behavior is highly sensitive to parasitics, layout-dependent effects (LDE), and process variation, making it difficult to simulate accurately. “As a result, SoCs with AMS content typically have first-time success rates 10% to 15% lower than their digital-only counterparts,” says Rapidus’ Beica. “This gap is often due to insufficient corner-case coverage, inadequate modeling, or integration issues like power domain conflicts and substrate noise. Redesign cycles for analog IPs are particularly costly and time-consuming, especially when they involve layout modifications or device resizing. Analog bugs are harder to detect before tape-out and more expensive to fix post-silicon, increasing both risk and development time.”

Smaller nodes compound the effects. “We never used to worry about noise, coupling noise, or even variation in analog design,” says Sathish Balasubramanian, head of product management and marketing for AMS at Siemens EDA. “Now we need to start worrying about that. The signal integrity challenges that you’re having on the digital side have now spread into the analog portion of the design. Analog used to have big margins and was slow compared to digital, but that’s not the case anymore. Teams are unable to get precise performance or accuracy for the analog portions of the designs, especially for anything related to communication, such as the SerDes channel or the clock generation. They are seeing a big difference between the intended performance and the actual performance, and they are trying to figure out whether that’s related to variation or just bad design.”

Verification challenges
In the past, the analog content was verified in isolation and then integrated into the digital content. “In addition, we used to have guard rails,” says Siemens’ Balasubramanian. “That enabled us to insulate it from everything else. Today, there are no guard rails. You’re designing it on the same substrate as the digital, which is at an advanced node, and in some cases you are stacking dies on top of each other.”

Getting the required performance out of analog often requires assistance from digital circuitry. “As digitally assisted analog systems become the norm, co-design of analog and digital blocks must be verified with a high level of accuracy,” says Mixel’s ElBanna. “Complex calibration algorithms and tight coupling between the two domains demand a verification strategy that doesn’t just test functionality, but anticipates intricate cross-domain interactions and corner-case effects.”

As complexity grows, the size of the verification suite also grows. “Regression suites now encompass thousands of tests, requiring not just functional correctness but also high accuracy across process corners, noise conditions, and timing scenarios,” says Koneru. “The challenge is acute, and you need the precision of analog verification without compromising the speed required for digital-scale regression.”
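
To see why regression volume balloons, consider a back-of-the-envelope sketch. The suite size and corner lists below are illustrative placeholders, not figures from Synopsys or any other company quoted here:

```python
# Illustrative only: how regression volume scales when every functional test
# must also be swept across PVT corners and noise conditions. All counts are
# placeholders, not data from any real project.
from itertools import product

process     = ["ss", "tt", "ff", "sf", "fs"]        # process corners
voltage     = [0.9, 1.0, 1.1]                       # supply points (V)
temperature = [-40, 25, 125]                        # junction temps (C)
noise       = ["nominal", "worst_case_supply"]      # example noise conditions

functional_tests = 500                              # hypothetical suite size

corners = list(product(process, voltage, temperature, noise))
total_runs = functional_tests * len(corners)

print(f"{len(corners)} corner combinations x {functional_tests} tests "
      f"= {total_runs:,} simulations per regression")
# 5 * 3 * 3 * 2 = 90 corners -> 45,000 runs, before any Monte Carlo variation
```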

Parasitics and layout effects call for more detailed simulation models. “Simulation times are rising because detailed netlists are needed to bring simulation results as close to reality as possible,” says Fraunhofer’s Zeugmann. “In order to achieve that, the models will first be proven by measurements of the silicon. Separating the analog from the digital by using different technology nodes can lessen this problem when the performance of an older node is sufficient for the targeted analog performance.”

Simulation performance is generally solved using abstraction. “We are seeing people try to move to higher levels of abstraction,” says Balasubramanian. “They are keeping only the necessary portions at the true transistor level, and most of the design in the event-driven digital simulator. The problem with these abstractions is, how do you verify that your abstractions are a correct representation of the actual design? When someone says, ‘I have created a model for this analog block,’ you need to be able to verify that the model is fit for the purpose of running the verification.”

Other abstractions are possible. “The adoption of real-number models (RNMs) and support for digital verification methodologies, like UVM, in mixed-signal flows are no longer optional,” says Synopsys’ Koneru. “They have become essential for scaling verification and enabling reuse. While several companies offer model generation tools for mixed-signal verification, innovation in on-the-fly automatic model generation has remained elusive. What is required are tools that can generate models at different levels of abstraction and allow users to select based on accuracy and performance tradeoffs.”
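
As a rough illustration of what a real-number abstraction buys, and how it can be checked against a more accurate representation, the sketch below models a first-order RC low-pass two ways: an analytic step response standing in for the transistor-level result, and a sampled difference equation of the kind an RNM would hand to an event-driven simulator. The component values, time step, and error budget are invented for the example, and production RNMs are typically written in SystemVerilog rather than Python:

```python
# A minimal sketch, not a real RNM flow: one analog block (a first-order RC
# low-pass) at two abstraction levels. The analytic step response stands in
# for the accurate transistor-level simulation; the difference equation plays
# the role of the fast real-number model. All values are illustrative.
import math

R, C = 1e3, 1e-9            # 1 kOhm, 1 nF -> tau = 1 us (invented values)
tau = R * C
dt = 10e-9                  # 10 ns model time step
steps = 500                 # simulate 5 us of a 1 V input step

def reference(t):
    """Analytic step response, standing in for the detailed simulation."""
    return 1.0 - math.exp(-t / tau)

def rnm_update(v_prev, v_in):
    """One sample of the cheap real-valued model (forward Euler)."""
    return v_prev + (dt / tau) * (v_in - v_prev)

v_model, worst_error = 0.0, 0.0
for n in range(1, steps + 1):
    v_model = rnm_update(v_model, 1.0)
    worst_error = max(worst_error, abs(v_model - reference(n * dt)))

print(f"worst-case model error: {worst_error * 1e3:.2f} mV")
# If the error exceeds the block's accuracy budget, the affected tests fall
# back to the extracted netlist; otherwise the fast model is used.
```

The same accuracy-versus-runtime check, scaled up across blocks and corners, is what lets a team decide which parts of a design can safely stay behavioral in a given regression.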

The right mix of abstractions has to be found. “More extensive regression runs that focus on digital-analog interactions can be sped up by using simpler digital language models of the analog blocks within digital-based simulations,” says ElBanna. “The slowest, and most accurate, type of simulation would be SPICE-based simulations that use the full netlist of the entire design, at the cost of longer simulation time. The verification experts have to trade off between accurately simulating multiple scenarios and corners, and verification run-times.”

Abstractions tie in with the need to shift left. “It is crucial, especially for the more advanced nodes, to get the layout effects incorporated as early as possible,” says Benjamin Prautsch, group manager for advanced mixed-signal automation in Fraunhofer IIS’ Engineering of Adaptive Systems Division. “This is to get an idea about how much degradation to expect from the idealized schematic. It is not enough to rely on the gut feeling of the design expert. They may be experts in finding a new topology, but if a critical parasitic is significantly different from expectations, the behavior of the transistors could be critically impacted. The parasitics have become a significant portion of the actual design, and this link must be closed as fast as possible.”

Functional verification cannot be done in isolation anymore. “What we have been advocating is to have design engineers and test engineers get together at the very front end of the process,” says Keysight’s Mueth. “Once the requirements are handed down, you should create a verification matrix that has each different phase of verification. For example, it could be simulation, it could be wafer test, it could be package test. There are things you can’t physically test that you need to simulate. It could be because you don’t have access to test points on the chip, or it could be beyond the range of measurement equipment. There are other things where it doesn’t make sense to simulate because it might take too long, or it’s not feasible because you don’t have the right models. It’s just easier to test. But between silicon validation engineers or test engineers and the design engineers, they should be able to put their heads together at the beginning of the process, after they’ve been given the requirements, and come up with this matrix to determine what gets tested where.”
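
A verification matrix of the kind Mueth describes can be as simple as a table that maps each requirement to the phases that will cover it, with a rationale wherever something cannot be simulated or tested. The sketch below is a minimal, hypothetical version; the requirement names, phases, and rationales are placeholders, not Keysight's methodology:

```python
# A minimal, hypothetical verification matrix: map each requirement to the
# phases that will cover it, and flag anything left uncovered. Names, phases,
# and rationales are placeholders, not any company's actual methodology.
from dataclasses import dataclass

PHASES = ["simulation", "wafer_test", "package_test"]

@dataclass
class Requirement:
    name: str
    covered_in: list                 # subset of PHASES
    rationale: str = ""

matrix = [
    Requirement("PLL lock time < 10 us", ["simulation", "package_test"],
                "no wafer-level probe access to the lock indicator"),
    Requirement("SerDes eye height at worst corner", ["simulation"],
                "beyond bench equipment bandwidth; rely on extracted-netlist sims"),
    Requirement("Sleep-mode supply current", ["wafer_test", "package_test"],
                "long-transient simulation impractical; easy to measure"),
]

for req in matrix:
    assert all(p in PHASES for p in req.covered_in), req.name
    print(f"{req.name:40s} -> {', '.join(req.covered_in)}")

uncovered = [r.name for r in matrix if not r.covered_in]
print("uncovered requirements:", uncovered or "none")
```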

Impact of chiplets
It is not yet clear if chiplets will help mitigate some of the issues, or if the added problems will be even greater. “It is not necessary to fabricate the analog components in the same technology node as the digital part,” says Zeugmann. “Moving the analog onto a separate die can increase the yield. Fabricating analog IP in older nodes and bringing them together with a chiplet approach helps, but it also shifts the verification challenge to a higher level. Verifying the chiplet system needs system-level testbenches that bring both worlds, analog and digital, together with the interconnect models.”

There are certainly some big advantages. “By allowing analog IP to remain on mature, well-characterized nodes, such as 65nm or 180nm, chiplets reduce variability and simplify verification,” says Beica. “This approach also supports IP reuse, lowering design risk and shortening time-to-market. However, chiplet-based systems introduce their own complexities. Mixed-node verification, cross-die timing, and analog signal integrity must be carefully managed. Interconnect modeling must account for analog effects like insertion loss and noise coupling, while thermal and power noise from adjacent high-power digital chiplets require special attention.”
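
Insertion loss is one of the simpler of those interconnect metrics to reason about. As a small, hedged example, the sketch below applies the standard definition, IL(dB) = -20*log10(|S21|), to a few hypothetical S-parameter points for a die-to-die channel; the values and the loss budget are invented for illustration:

```python
# A small, hedged example of one interconnect metric named above. Insertion
# loss in dB is -20*log10(|S21|); the |S21| values and the loss budget below
# are invented for a hypothetical die-to-die channel.
import math

def insertion_loss_db(s21_magnitude: float) -> float:
    """Insertion loss in dB from the magnitude of S21 (0 < |S21| <= 1)."""
    return -20.0 * math.log10(s21_magnitude)

channel = {4e9: 0.89, 8e9: 0.79, 16e9: 0.63, 32e9: 0.42}   # freq (Hz) -> |S21|
budget_db = 6.0                                            # assumed loss budget

for freq, s21 in channel.items():
    il = insertion_loss_db(s21)
    verdict = "OK" if il <= budget_db else "exceeds budget"
    print(f"{freq / 1e9:5.1f} GHz: IL = {il:4.1f} dB ({verdict})")
```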

Problems can remain hidden when the true extent of the problem is not fully understood. “With 3D integration we are essentially creating another dimension to the problem,” says Balasubramanian. “You need to take into account the thermal effects. How do you keep sensitive components immune from thermal variations when you’re stacking it up? It is a big floor-planning and physical problem, and tools are needed to address that. The second thing we see is stress. People don’t even know how stress in stacking really impacts performance. And how do you handle it? How do you measure it? How do you model it? How do you design so that you don’t worry about it?”

Many things become more complicated. “From a packaging perspective, advanced platforms such as 2.5D/3D integration, fan-out, and redistribution layer (RDL) interposers introduce new challenges,” says Beica. “Analog blocks are vulnerable to power integrity issues, thermal gradients, and inter-die crosstalk, all of which can degrade performance. System-in-package (SiP) designs combine RF, analog, and digital components, which further complicates verification, requiring multi-physics simulations that account for electromagnetic interference, thermal behavior, and signal integrity.”

Could AI be the savior?
There is a lot of optimism for the potential impact of various AI technologies. “Artificial intelligence is beginning to play a transformative role in AMS verification,” says Beica. “Machine learning models can learn from past simulation data to improve coverage efficiency and generate high-impact corner cases with fewer runs. Deep learning techniques enable anomaly detection, helping to uncover elusive bugs. AI can also predict parasitic effects and layout-dependent variations more accurately, accelerating the design cycle and enabling faster prototyping. Despite these advantages, challenges remain in acquiring high-quality training data, building reliable models, and integrating AI into accuracy-critical analog workflows.”

Helping to speed up debug would be worth its weight in gold. “Mixed-signal failures are notoriously difficult to isolate, and this is where AI holds immense promise,” says Koneru. “By automating waveform analysis, identifying anomalies, and accelerating root-cause detection, AI can significantly streamline the most time-consuming parts of the verification cycle.”

And use simulation time more effectively. “The sheer volume of regression runs required to validate these scenarios is growing fast, and AI is emerging as a key enabler,” says ElBanna. “This ranges from intelligently pruning redundant test-lists to mining past regression data for unseen coverage gaps, anomalies, or patterns in failures. AI can play an important role even in modeling complex analog behaviors for faster, more accurate simulations, and help in the increasingly complex tasks of mixed-signal verification, enhancing the ability to achieve first-time success.”
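
As a concrete, if deliberately simplified, stand-in for the test-list pruning idea, the sketch below keeps only the regression tests needed to preserve a toy coverage map, using a greedy heuristic rather than a trained model. The test names and coverage bins are invented; a real flow would mine them from regression databases:

```python
# A deliberately simplified, non-AI stand-in for test-list pruning: greedily
# keep only the tests needed to preserve coverage. Test names and coverage
# bins are invented; a real flow would mine these from regression databases
# and could use learned models instead of a greedy heuristic.
coverage = {
    "adc_ramp_tt":      {"adc_gain", "adc_offset"},
    "adc_ramp_ss_cold": {"adc_gain", "adc_offset", "comparator_meta"},
    "pll_lock_ff_hot":  {"pll_lock", "vco_range"},
    "pll_lock_tt":      {"pll_lock"},
    "serdes_prbs_tt":   {"eye_height", "cdr_lock"},
}

def prune(coverage_map):
    """Greedy set cover: a small test list that still hits every bin."""
    remaining = set().union(*coverage_map.values())
    kept = []
    while remaining:
        best = max(coverage_map, key=lambda t: len(coverage_map[t] & remaining))
        kept.append(best)
        remaining -= coverage_map[best]
    return kept

kept = prune(coverage)
print("kept:", kept)
print("dropped:", sorted(set(coverage) - set(kept)))
```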

Model generation may be highly valuable. “Traditional modeling frameworks, where you have support models or things that are standardized in industry, may slowly give way to machine learning, machine training, or ANN models,” says Mueth. “That’s not to say this is the nirvana for everything, because a traditional modeling framework does tell you a little more about the physics, but machine learning can provide a model where no traditional model currently exists, or the accuracy is off.”
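
The sketch below shows that surrogate-modeling idea in miniature: fit a small neural network to input/output samples of an analog block and query it where no compact analytic model exists. The training data here is synthetic, a tanh-compressed amplifier with a mild temperature term, standing in for SPICE or silicon measurements, and scikit-learn's MLPRegressor is just one convenient way to fit such a model:

```python
# A hedged sketch of the surrogate-modeling idea: fit a small neural network
# to input/output samples of an analog block. The data is synthetic (a
# tanh-compressed amplifier with a mild temperature term), standing in for
# SPICE or silicon measurements; MLPRegressor is just one way to fit an ANN.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic characterization data: (input voltage, temperature) -> output voltage
vin  = rng.uniform(-1.0, 1.0, 2000)
temp = rng.uniform(-40.0, 125.0, 2000)
vout = np.tanh(3.0 * vin) * (1.0 - 0.001 * (temp - 25.0))

X = np.column_stack([vin, temp])
surrogate = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0),
)
surrogate.fit(X, vout)

# Query the surrogate at points not in the training set
queries = np.array([[0.2, 25.0], [0.8, 125.0], [-0.5, -40.0]])
predicted = surrogate.predict(queries)
expected  = np.tanh(3.0 * queries[:, 0]) * (1.0 - 0.001 * (queries[:, 1] - 25.0))
print("predicted:", np.round(predicted, 3))
print("expected: ", np.round(expected, 3))
```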


Fig. 2: How AI could impact mixed-signal development. Source: Keysight

While the entire semiconductor industry is facing a skills shortage, analog takes a career to master. “We are looking to augment the analog workforce, to immediately have AI assistants next to them,” says Balasubramanian. “Like a knowledge assistant that can help them solve anything. You start by explaining what a PLL is. Then slowly start doing reference designs. There’s a lot of ways there we can really help the workforce.”

Conclusion
Analog circuitry may take up only a small fraction of the die area, but it exists because digital circuitry is not capable of performing its function. The semiconductor industry is driven by digital requirements, and that has made it increasingly difficult for analog to perform, while at the same time the performance demands on analog continue to grow. It is perhaps no wonder that analog failures are increasing.

There have been many attempts to automate aspects of the analog flow, but it essentially remains a manual process that has to start from square one with every new manufacturing node. Perhaps chiplets will allow analog to remain on nodes that are more amenable to its requirements and increase the longevity of the IP, but the technology may have to become a little more mature before it can be attempted for analog.

There is growing optimism that AI could be highly valuable to analog design and verification. “In an AI world, you could have very unorthodox topologies that aren’t part of the engineering curriculum or recognizable, but nonetheless innovative,” says Mueth. “There’s a lot of progression and changes in the workflow as the promise of AI comes to fruition.”


