AI is accelerating the need for 3D-ICs and digital twins, and causing lots of disruption along the way.
Artificial intelligence is permeating the entire semiconductor ecosystem, forcing fundamental changes in AI chips, the design tools used to create them, and the methodologies used to ensure they will work reliably.
This is a global race that will redefine nearly every domain over the next decade. In presentations and interviews over the past several months, top EDA executives converged on three major trends that will become focal points for the foreseeable future: AI woven throughout the design flow, 3D-ICs, and digital twins.
The use cases for AI in EDA have evolved significantly from simple pattern recognition to assisted design and broad knowledge sharing, enabling junior engineers to get up to speed more quickly and veteran engineers to expand into new areas and be more productive.
“The way we’re describing it, you have a co-pilot and you have ‘assisted’ and you have ‘creative,’” said Sassine Ghazi, president and CEO of Synopsys. “[Assistive is] where you have a workflow assistant, a knowledge assistant, a debug assistant, so that you can ramp up a junior engineer in a much faster way, as well as an expert engineer. They can interface with our product in a more modernized, efficient way. Then you have the creative element. We have a number of examples where we have early customer engagement from RTL generation, testbench generation, test assertions, where you can have a co-pilot that helps you create part of your RTL, testbench documentation, test assertion.”
Ghazi said that with creative tools, various tasks could shrink from days to minutes. But all of that needs to be tightly controlled. “We cannot have models that hallucinate,” he said. “We are very deliberate about when and how we engage with our customers to make sure that the maturity of what we are offering is acceptable, without putting any part of their workflow at risk. As AI continues to evolve, so will the workflow. I often get asked the question from our investor stakeholders on when do we see a change in EDA as a market by leveraging AI. I don’t believe that will be the case unless their workflow changes, meaning you can do certain things very differently in order to deliver a product roadmap in a faster, more effective, more efficient way. Now, with the agentic AI era, agent engineers will collaborate with the human engineer in order to take that complexity and change the workflow.”
Provided AI lives up to its promise, the magnitude of this change cannot be overstated. But all of this is evolving so quickly that it’s creating a lot of questions for which there are no clear answers today. Time will tell what AI can do well, what needs to be closely monitored, and where the risks are. Predictions of AI’s transformative capabilities have proved over-optimistic since the late 1950s. But over the last couple of years, starting with the rollout of ChatGPT, it finally seems to be living up to its promise. Agentic AI is the next big target.
“Agentic AI is an interesting concept, if we are confident enough to trust a level of autonomy for the AI to often make active decisions on its own,” said Mike Ellow, CEO of Siemens Digital Industries Software. “In the design space, as we start talking about agent AI versus agentic AI, it depends on who you’re talking to and how they want to slice the different terms. With agent AI, from our perspective, we define a task with a set of boundary conditions and then let AI operate within that box in order to drive to a solution. When you go to agentic AI, basically you’re saying, ‘Here’s the problem. You think about what is the best way to execute it. You come up with that solution. And you go drive toward the desired outcome from an EDA perspective.’ This is how we look at the evolution of AI into our tools.”
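That distinction can be made concrete in code. The following is a minimal Python sketch, not any vendor’s actual API; the names here (BoundedTask, run_agent, and the example checks) are invented for illustration. It captures the agent AI pattern Ellow describes, in which the tool owner defines the task plus hard boundary conditions, and the AI may only act inside that box:

```python
# Minimal sketch of the "agent AI" pattern: a task with explicit boundary
# conditions, and an AI that may only act inside that box. All names are
# hypothetical illustrations, not any EDA vendor's actual interface.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class BoundedTask:
    goal: str                                    # e.g. "close timing on block X"
    checks: list[Callable[[dict], bool]] = field(default_factory=list)

    def within_bounds(self, action: dict) -> bool:
        """An action is admissible only if every boundary check passes."""
        return all(check(action) for check in self.checks)

def run_agent(task: BoundedTask,
              propose: Callable[[str], dict],
              max_steps: int = 10) -> list[dict]:
    """Let the AI iterate, but reject any proposal outside the box."""
    accepted = []
    for _ in range(max_steps):
        action = propose(task.goal)              # the AI's proposed next step
        if not task.within_bounds(action):
            break                                # out of bounds: stop, don't improvise
        accepted.append(action)
    return accepted

# Example boundary conditions: never touch the clock tree, cap area growth at 2%.
task = BoundedTask(
    goal="reduce worst negative slack on block X",
    checks=[
        lambda a: not a.get("modifies_clock_tree", False),
        lambda a: a.get("area_delta_pct", 0.0) <= 2.0,
    ],
)
```

In the agentic case, by contrast, the system itself decides how to decompose the goal and execute it; the boundary checks are what make the agent variant easier to trust today.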
The rate of adoption is unprecedented. It’s hard to find an area in chip design where AI doesn’t play at least some role, whether directly or indirectly.
“It’s a profound change for our industry,” said Niels Faché, vice president and general manager of Keysight EDA Software. “As you move through the simulation domain and the physical domain, in all of these steps there is data generated and there are insights to be gathered. This is where artificial intelligence can be of great help. When we think about it from a simulation perspective, it can help us with modeling. It can help us speed up simulations. It can help us add more expertise to the products to help design teams. It can help designers to generate designs. It’s very powerful and transformative. Years ago, a customer was telling me, ‘I have a design team, but they spend a lot of time dealing with your products to simulate. I really want them to be designers, not simulators.’ The product design teams don’t want to learn how simulators work. They don’t want to spend a lot of time setting up a simulation. They want to think about requirements, and how they can create a design from those requirements. This is the power of AI. It can really shift customers from how things are done to what they really want to do. AI can help you with routing on a printed circuit board or to come up with new topologies.”
Anirudh Devgan, president and CEO of Cadence Design Systems, compares AI to one layer in a three-layer cake in EDA. “For applications to be successful, they need to have all three parts,” he said. “There are the AI agents and orchestration. There is the principal simulation, and sometimes people forget how important that is. It’s the real transistor behavior, the molecular behavior, the fluid dynamics, the thermal. There is no substitute for that. And then you have the compute to run it on.”
Looked at from a computational standpoint, AI seems more evolutionary than revolutionary. “When it first started, AI was a dense computation,” said Devgan. “But the physical world is not dense, and AI is not dense either. All the neural networks are not dense. So what happens is that because of the complexity of the designs and the requirements, we innovate in all these algorithms. Then you have latency and partitioning and your hierarchy. You can see some of that in the top in moving layers of abstraction, and some in the bottom in the pure algorithms. Half of these algorithms are Boolean — zero/one, like logic simulation, formal verification — and a lot of them are numerical, like circuit simulation, characterization, thermal. You need acceleration for both Boolean and numerical computation…But now, with AI, we can do the next level of innovation in software, both from a natural language standpoint and from an optimization standpoint. AI is very good at optimization, and that’s why we have worked on it for several years. We have five major AI platforms — digital, verification, custom, packaging, and system analysis. And the results are actually quite phenomenal…AI can completely transform chip design, and that’s why we’re investing heavily in these five platforms.”
How EDA vendors utilize AI varies somewhat, depending on the starting point. But one of the advantages of LLMs is their ability to span different data types, basically raising the level of abstraction across the entire flow. So the starting point is less important than it might seem at first glance. The key is how they obtain and utilize the data, and how far they can extend that beyond just the design phase — something that will depend heavily on how data is shared and protected in the future, and where companies see opportunities.
As Siemens’ Ellow explained, “It’s starting out in the generative AI space as the foundation, because that’s the first level where you’ve got a bunch of verified data sources from us and their customer data. That goes into a data lake, where you train an LLM. We have the ability to use various LLMs, or customers can use their own. Then you can snap it into the infrastructure and use all of our tools on top of it.”
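A hedged sketch of that flow in Python is below. The names (DataLake, LLM, answer) are placeholders invented for this example, not Siemens’ actual interfaces; the point is the shape of the architecture Ellow describes: verified sources land in a curated store, and any LLM, vendor-supplied or the customer’s own, can be snapped in on top and grounded in retrieved context.

```python
# Sketch of the data-lake-plus-pluggable-LLM architecture described above.
# Pure stdlib; all names are assumptions for illustration only.
from typing import Protocol

class LLM(Protocol):
    def complete(self, prompt: str) -> str: ...  # any vendor or in-house model

class DataLake:
    """Stands in for the curated store of verified design data."""
    def __init__(self) -> None:
        self.docs: list[str] = []

    def ingest(self, doc: str) -> None:
        self.docs.append(doc)                    # real systems would index/embed

    def retrieve(self, query: str, k: int = 3) -> list[str]:
        # Naive keyword match as a stand-in for vector retrieval.
        words = query.lower().split()
        hits = [d for d in self.docs if any(w in d.lower() for w in words)]
        return hits[:k]

def answer(lake: DataLake, model: LLM, question: str) -> str:
    """Ground the model's answer in retrieved, verified context."""
    context = "\n".join(lake.retrieve(question))
    return model.complete(f"Context:\n{context}\n\nQuestion: {question}")
```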
Given the breadth of greenfield opportunities, picking the right ones is daunting. This is why the initial starting point needs to be relevant. “How long does it take for someone to become an RF engineer? It’s an art,” said Keysight’s Faché. “It’s not something you pick up in six months. It often takes years. But with AI, we can make information more readily available and accessible for designers, and we can really close the gap between the productivity of a very experienced RF designer and a novice engineer. Chatbots are one example of a way to provide information at the right time in a better form to customers.”
AI requires a lot of data, particularly for training models. The problem is that planar chips are unable to process all that data quickly and efficiently due to reticle limitations, which is why many data centers utilize some sort of multi-die assembly — fan-outs or 2.5D — to boost performance and reduce power as compared to a planar SoC.
Fig. 1: 3D-IC conceptual model. Source: Intel Foundry/Semiconductor Engineering
But those are largely incremental gains. Achieving orders of magnitude improvements in performance and power requires real 3D-ICs, hybrid bonding, and arrays of chiplets.
“With 2.5D, you’ve got more flexibility as to how you structure the silicon, such that it’s more pliable for the software that it’s running,” said Ellow. “When you get to full 3D-IC, your ability to partition more discretely across different die — and to take advantage of different processes to optimize different functionality — offers more interesting possibilities for how that better fits the software workloads. It’s intended to optimize. But it’s still a little bit aspirational.”
Two key challenges are how to manage thermal dissipation and how to ensure the different layers can be bonded together properly so that densely packed interconnects line up perfectly. This is compounded by the fact that some chiplets in a 3D-IC will be developed using the most advanced processes, while others may be built on mature nodes.
“Customers are already talking about trillions of transistors, bringing them together in one package, while their schedule is a race to go from 18 months to tape-out down to 16 months, 12 months, or below to deliver customized silicon for these intelligent systems,” said Ghazi. “How do you deal with that? The complexity from the technology on a single die — we’re talking about GAA, angstroms in order to design that silicon — and then you bring it together in an advanced package.”
This is the only way to scale to hundreds of billions or trillions of transistors, but putting it together will be an engineering feat. “The moment you start scaling to that level of complexity, you can only achieve the performance or power by being efficient at the interconnect level,” Ghazi said. “And dies may be coming from different process technology and different foundries. How do you verify and validate an architecture in order to deliver this advanced package?”
That also opens the door for both soft IP and hard IP in the form of chiplets. “There are new opportunities, especially with 3D-IC and AI buildup, so we are doubling down on our IP investment,” said Devgan. “The number of IPs we are offering, and the number of process nodes we are offering, has increased significantly. IP now is one of the biggest R&D teams in Cadence, and we will continue to invest in not just physical IP and interface IP, but also in Tensilica, which is widely used as an embedded processor.”
EDA vendors are acutely aware of the risks of AI systems and the unknowns in a 3D-IC architecture.
“There is a need to connect design and test and have a digital thread in the verification from design to test,” said Faché. “There are tools to manage quality and reliability. And then, of course, there is the growing need for data management and analytics. All these tools need to be present, and customers need to move from homegrown tools to commercial tools. But these tools also are interdependent, so there’s a need for a hub and spoke model, where the different tools can act on the data. The data will evolve over time. They may consume data. They may generate data. IP is developed during the product lifecycle. And so we need a backbone infrastructure for customers to be able to do that if we want to have a true digital transformation. That process of data management is crucial.”
This concept is still evolving, and so is the terminology. Some vendors call it a digital twin, while others call it a virtual twin. But the basic concept is real-time monitoring of a system to ensure it’s behaving as expected, optimizing it wherever possible based upon the workload, and taking action if something needs to be fixed before it becomes a problem.
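Whatever the surrounding product looks like, the core loop is a monitor-compare-act cycle. Below is a toy Python sketch of that cycle; the model, field names, and thresholds are all invented for illustration, not drawn from any shipping digital twin:

```python
# Toy digital-twin loop: mirror the live system, compare observed telemetry
# against the model's expectation, and flag drift before it becomes a failure.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Telemetry:
    inlet_temp_c: float   # observed temperature
    power_kw: float       # current workload draw

class DigitalTwin:
    def __init__(self, baseline_temp_c: float, margin_c: float = 3.0):
        self.baseline_temp_c = baseline_temp_c
        self.margin_c = margin_c

    def expected(self, workload_kw: float) -> float:
        # Placeholder model: temperature rises linearly with load.
        return self.baseline_temp_c + 0.5 * workload_kw

    def check(self, t: Telemetry) -> Optional[str]:
        """Return a corrective action if reality drifts from the model."""
        drift = t.inlet_temp_c - self.expected(t.power_kw)
        if drift > self.margin_c:
            return f"increase cooling (running {drift:.1f} degC above model)"
        if drift < -self.margin_c:
            return f"reduce cooling (over-cooled by {-drift:.1f} degC)"
        return None  # behaving as expected

twin = DigitalTwin(baseline_temp_c=22.0)
action = twin.check(Telemetry(inlet_temp_c=31.0, power_kw=10.0))
# Expected temp is 22 + 5 = 27; a 4-degree drift exceeds the 3-degree margin,
# so the twin recommends increasing cooling before the drift becomes a problem.
```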
“There is a massive need for accurate digital twins, especially in physical and the corresponding silicon that’s going to drive that,” Devgan said. “This is why we’ve invested in digital twins for the data center — for simulating the entire data center. This is a non-traditional product, but it has become super important — applying CFD and simulation and AI to optimize the data center. We even applied it internally to our own data centers. The thing to remember about data centers is that it’s not only these big cloud companies. There are a lot of data centers in the enterprise. When we applied it to our own data centers we got 10% better power, which is huge because there is not much science applied to the design of data centers. There’s an immense amount of science applied to the design of the chip, the rack, and all the networking. But how do you place the data center? How much cooling do you use? Is it over-cooled or under-cooled? Do you maintain it regularly? That is not done with as much science as the designing of the chip. So once you have a digital twin of the data center, you can do a lot more optimization.”
This concept is well understood. But despite some early successes, discrete digital twins are still a work in progress. “Digital twins is not a monolithic system yet,” said Ellow. “It’s a collection of pieces. It’s got elements of software, semiconductor, package, and board through the electronics. But then there’s the electrical effects, where we had as part of Mentor things like wire harnesses and network connectivity. You’ve got the mechanical piece, and a much more expansive portfolio on the multi-physics associated with that. Then you’ve got product lifecycle management, because all these things have to be built with a bill of materials. The completeness of all of those individual domains from one company allows for some insights. There are still gaps when you bring the whole portfolio into manufacturing process simulations and things like that. How do you set all of this up for actual production of the end product, where all these systems get integrated? It’s a lot to unpack.”
Progress is being made, but it’s not a push-button, one-size-fits-all solution. “As an industry we’ve been talking about it for a while, but it’s essential given the complexity of how to simulate in real-time and analyze and optimize at the system level,” said Ghazi, pointing to applications such as data centers and automotive as key markets for this technology. “As we start engaging deeper with the complexity of automotive and autonomous driving, the digital twin needs to model both the electronics and the surrounding environment. In the case of automotive, we have to partner with the ecosystem. They have other parts that need to come in with the chip virtualization and electronic systems.”
Case in point: Synopsys’ partnership with IPG, which makes automotive simulation technology. “We were able to virtualize and model the control system and the zonal and compute ECU to communicate with each other,” Ghazi said. “We provided the electronics virtualization and IPG brought in the surrounding physical world. During the execution of the software development, the testing team can observe the behavior of that silicon into the environment for the specific workload they are building. And that does not only apply to cars. Drones, data centers, etc., all benefit from that virtualization. And if we bring it closer to silicon, a 3D-IC or advanced package is a sophisticated, complex system where you need to take into account not only the electronic design. You can argue the electronic design in this case is understood. But the moment you start stacking chiplets into this advanced package, you’re dealing with a whole other slew of challenges, be it thermal, mechanical, fluid, structure.”
The AI revolution has begun. As with all new technology, ironing out the inconsistencies and identifying problems will require years of work by the entire tech ecosystem.
“There are a number of challenges,” said Faché. “First of all, you have to become AI savvy, whether you’re a designer, part of a design team, or a provider of tools. Learning AI is a whole new field. That means we need to really understand what the changes to an engineering lifecycle look like. It’s a fundamental redesign of the engineering lifecycle. They need to understand how models are developed in an AI world. What does a machine learning operational workflow look like? There’s a cost associated with building models. You need to have the resources, the process and data management fabric in place, to make that happen. And ultimately, you need to gain confidence that these tools work. This is a race for everyone to embrace AI.”