Why and where limitations are needed in AI-driven design, and where software-defined hardware works best.
Executive Outlook: Semiconductor Engineering sat down with a panel of experts to talk about what’s needed to effectively leverage AI, who benefits from it, and where software-defined hardware works best, with Bill Mullen, Ansys fellow; John Ferguson, senior director of product management at Siemens EDA; Chris Mueth, senior director of new markets and strategic initiatives at Keysight; Albert Zeng, senior engineering group director at Cadence; and Anand Thiruvengadam, senior director and head of AI product management at Synopsys. What follows are excerpts of that discussion, which was held in front of a live audience at ESD Alliance 2025. To view part one of this discussion, click here. Part two is here.
SE: How does AI get infused into the design cycle? There’s a lot of discussion about breaking down silos because there are multi-dimensional, multi-physics, and multi-expertise problems to solve, but this is difficult and expensive.
Mullen: AI is going to help in a lot of ways with multi-dimensional problems. If you look at 3D-ICs, the degrees of freedom are going up. There are so many different integration styles, and being able to explore that design space efficiently is going to be one of the key powers of AI. Generative techniques will help, too. For example, we can generate inductors that meet requirements using generative AI techniques. So helping the designer get to the right solution a lot faster with AI is really critical.
Ferguson: This is a big part of what’s driving consolidation within the EDA industry. You see Siemens and Mentor and Altair coming together. You see Synopsys and Ansys coming together. It’s because we have these divergent knowledge sets that we need to bring closer together, and that’s what’s driving this forward.
Mueth: There are a couple of areas where AI is getting infused. One is AI assist, where you take a junior engineer and guide them along. That’s one area. Generative AI definitely is an area where AI can help accelerate the implementation. But a lot of what we talk about nowadays is collaborative design, where you bring in multiple disciplines, getting specialists from different areas and putting them together, and also integrating the tools. But even that isn’t closing the workforce gap. What AI possibly can do is empower the individual engineers to do more multi-discipline designs themselves by having those disciplines built into the AI. That would make it much more efficient for each engineer. Will it close the gap totally? No, because the more you enable engineers, the more they want. It’s a self-fulfilling prophecy. There’s so much unmet need out there to do simulations that as soon as you solve one problem, you’re going to want to expand and do more.
Thiruvengadam: I look at it in terms of three value tiers. One value tier is the optimization tier, where reinforcement learning-based optimization techniques start to appear. That helps you explore a very large design space and helps you converge on solutions. Another layer is analytics, and that’s an important layer, particularly with generative AI. Analytics are a very key part of that. The third would be generative, and within that you have a system that can help with up-skilling. That’s very important right now, especially for junior engineers getting to the next level and to the next level after that. Upskilling can be very easily done with generative AI, but the bigger benefit is productivity — creating content, creating test benches and RTL and things like that. And that serves as the foundation for truly agentic AI workflows, where you can do start-to-finish complete workflows. And then at the next level is orchestration of multiple workflows. So there’s a lot of potential for AI.
Zeng: On the analytics side, in the early design stage, you can use your multiphysics AI models to do a lot of exploration to find what’s feasible. For floor planning, you might have analyses that only take a couple of seconds to run, but there could be millions of different possible floor plans to explore. That just takes too much time. If you have a pretty good AI model that can capture the trend well, then you can reduce that time to a couple of minutes. That removes the barrier to moving multiphysics earlier in the design cycle. In addition, whatever the results of multiphysics analysis, whether it’s thermal or something else, this all needs to be reflected during the design stage. In the end, it all comes back to the designer to make the design decision of whether I need to move this to here, or I need to reduce my design, or I need to further split it up. So all this feedback needs to be provided to the designer as fast as possible. We cannot take a couple of days or weeks to get that feedback anymore. We need it right away. This is where AI modeling could help.
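The pruning pattern Zeng describes can be sketched in a few lines: a cheap surrogate model ranks the full space of candidate floor plans, and only the shortlist is verified with the expensive analysis. Everything here is illustrative — `slow_thermal_sim`, `surrogate`, and the two floor-plan parameters are invented stand-ins, not any vendor’s actual model.

```python
import random

# Hypothetical stand-in for a detailed multiphysics analysis that would
# take seconds (or longer) per floor plan in a real flow.
def slow_thermal_sim(fp):
    return fp["power"] * 1.9 + fp["density"] * 0.8 + 0.05 * fp["power"] * fp["density"]

# Cheap trained approximation of the slow model (milliseconds per call).
def surrogate(fp):
    return fp["power"] * 2.0 + fp["density"] * 0.75

random.seed(0)
candidates = [{"power": random.random(), "density": random.random()}
              for _ in range(100_000)]

# Rank the entire design space with the surrogate, then verify only the
# best few candidates with the expensive analysis.
shortlist = sorted(candidates, key=surrogate)[:10]
best = min(shortlist, key=slow_thermal_sim)
print(round(slow_thermal_sim(best), 3))
```

The accuracy requirement on the surrogate is modest: it only has to rank candidates well enough that the true optimum survives into the shortlist, which is why even a rough trend-capturing model pays off.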
SE: How do you know you’re getting good information? This is one of the problems with AI, right? There are hallucinations and silent data corruption, and AI itself is a black box.
Ferguson: You need a lot of extra validation. Trust, but verify.
Mullen: And you need independent validation tools that are not using the same approach as the AI or other systems.
Mueth: To build that digital trust, it’s all about good data. You have to ground your AI results against some sort of solid knowledge base. So you need to pay attention to the data sets, to the explicit instructions that you’re giving the AI to make sure it follows the direction you want, while looking for data cleanliness. But in the end, you’re also going to need human intervention to do checks until you can build that trust.
Thiruvengadam: In the ISV space, we are relying on commercial and open-source models. That means the onus is on them, the model provider, to deal with silent data corruption. That will change for EDA when we have domain-specific LLMs (large language models) or SLMs (small language models) that we develop, because now we own the models and we have to train them. But we are not there yet.
Zeng: If these kinds of issues in the input data are seen during the training stage, then you can provide pretty good data. We can develop some ways to check whether this input has been seen before and determine how much confidence we have in it. But, in general, before doing any specific training you need to run a bunch of tests, and even do some transfer learning to make sure your data is pretty good. So there’s some cleaning up that has to be done at the beginning.
Mueth: You could use a constraint-driven agent that looks at the range of expected results and corrals that AI engine into making sure things fall within a reasonable range so you don’t go off into left field. That’s a technique you can use.
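A minimal version of the constraint-driven check Mueth describes is a range screen that sits between the AI engine and the downstream flow: any proposed value outside a physically plausible window is rejected and flagged for human review. The parameter names and bounds below are invented for illustration.

```python
# Illustrative plausibility bounds for AI-generated component proposals.
# Real bounds would come from process limits, datasheets, or prior designs.
BOUNDS = {
    "inductance_nH": (0.1, 50.0),
    "q_factor": (1.0, 100.0),
}

def screen(proposal: dict) -> tuple[bool, list[str]]:
    """Return (accepted, violations) for one AI-generated candidate."""
    violations = [
        f"{key}={value} outside {BOUNDS[key]}"
        for key, value in proposal.items()
        if key in BOUNDS and not (BOUNDS[key][0] <= value <= BOUNDS[key][1])
    ]
    return (not violations, violations)

# A Q factor of 250 is outside the expected range, so the proposal is
# rejected rather than silently passed downstream.
ok, why = screen({"inductance_nH": 12.0, "q_factor": 250.0})
print(ok, why)
```

The point is not that the check is sophisticated — it is that it is independent of the AI engine, so a hallucinated result cannot validate itself.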
SE: Basically you’re tightening up the control loops. But how do you solve the problem where inexperienced engineers don’t recognize that something is way off the mark and won’t work? The industry has been trying to solve a talent shortage with tools, but it doesn’t necessarily have the talent to be able to understand when the tools are misbehaving.
Zeng: That’s a really good question. My group started a project last year using a copilot to write the code. About 30% to 40% of the total code of that project was written by the copilot, and I was concerned about whether this code was high-enough quality. This really depends on the engineers who are using it, because in the end they have to make sure they fully understand what it does and how they are going to test it. And it’s not just about testing functionality. You also need to do some stress testing to make sure it can work in different stages. That means all the engineers who use the AI tool still need to understand the fundamental physics to be able to make judgment calls. So we can use AI in the design loop, but we cannot let AI be in charge of the design cycle, or as a substitute.
Thiruvengadam: You’re never going to take the actual tools (sign-off tools, for example) out of the loop. Ultimately, it has to be the tool that is used for sign-off, and AI will just assist around it.
Mullen: I was at a GitHub event, and they were talking about copilots. The assumption was that junior engineers would gain the most from copilots. But they said it actually was the reverse, because a senior engineer will look closely and say, ‘This doesn’t make sense,’ while junior engineers were most likely to just click and say, ‘Accept.’ So we’ve got to find ways to train these junior engineers, whether they’re doing software or hardware, so they don’t blindly accept what’s generated by the LLM. You have to understand the physics, understand how to verify the results, and ask, ‘Does it make sense?’ So maybe AI can help train them and get them to that level.
Ferguson: We learn from failure. So you might push the button that says, ‘Accept,’ and you’ve got it wrong. But there also needs to be some consequence because you got it wrong so the next time you’re not going to just say, ‘Accept.’ You’re going to dig in a little bit more. That’s what happens, and it’s going to continue. We’ve all made mistakes in our early careers.
SE: Let’s shift direction. There’s been a lot of talk about software-defined hardware. Is that really a viable path forward, or will it be more co-design? We’ve been talking about co-design since the late ’90s, with hardware and software. Now it’s co-design of hardware, software, package, and various other elements. Is that really software-defined, or is it something different?
Ferguson: It depends on the application. An iPhone is a good example of something that can take advantage of software-defined hardware, where you say, ‘I’m going to frame my hardware in such a way that I’ve got various redundancies. I’ve got extra things that I don’t necessarily need for the current application, but I might need it in the future for some application I haven’t considered yet, or that I’m just thinking about now. I’m going to put it all together, so instead of having to have a whole new phone, I can do a software update.’ Not every application in our industry needs something like that. Data centers are probably a good example. They need to make sure they’re running everything as efficiently as possible while minimizing the impacts. Automotive, maybe less so. You probably need some of it for certain parts of the system, but you don’t need an extensive amount of that.
Mueth: It’s application-specific, for sure. Other examples would be software-defined radios, or software-defined instruments, where you have a hardware shell, and the personality of what you’re testing is largely software. You gain flexibility and future-proofing, which is really important. That’s why you’re able to use an iPhone for five or six years. It also allows bug fixes and all kinds of things that you couldn’t do with a hardware-defined software application.
Mullen: Imec recently put out a research paper on this, and they justified it by saying that the AI models are changing so rapidly that the hardware cycles, which are getting faster, still can’t keep up with or anticipate what the next AI models are going to look like. So there’s a real demand for some kind of reconfigurability that is faster and more power efficient. How do you do training or inference in the most power efficient way? There’s interesting research there.
SE: That’s always been one of the big tradeoffs, right? As you add flexibility, you lose speed and efficiency.
Mullen: It has to be done intelligently. It’s never going to anticipate everything.
Zeng: With simulation, it’s more like a hardware-defined software because we’re focusing more on the performance. But as we start to see 3D-ICs, we may be able to swap different chiplets and blocks very easily. And so maybe for certain types of analysis, like memory bonding, we might have more bandwidth than computing units. But that requires the ecosystem to be there.
SE: If you look at the number of software updates on your phone every single day, it’s astounding. So now think about that in a car.
Thiruvengadam: Software-defined hardware has been around for about 15 years. In a sense, it goes all the way back to FPGAs. But software-defined hardware is here to stay. You can extract gates based on use cases and applications with software. Development, cost, time to market — they are all driving factors, and those don’t go away. So it’s going to be more and more software-driven hardware.
SE: Is a software-defined device less secure or more secure?
Thiruvengadam: As long as you build in the right security mechanisms, it shouldn’t matter whether it’s software or hardware.
SE: One last question. There has been a lot of talk about federated learning over the years that incorporates EDA. Is this real, or is it wishful thinking?
Ferguson: The biggest concern is IP protection. You’re talking about bringing in information from everywhere, which is difficult to do. If you think you’ve got some unique IP, you don’t want to share it with somebody else, and they don’t want to share theirs with you. The only way to get federation is to bring everybody into the same tent.
Mueth: On a big system it makes sense. For an automobile, which has a bunch of components, you could take the components and build a reduced-order model, put that into next-level assembly, and the data is coming from all different sources. If you have a hybrid twin, you take some measured data, simulate data, integrate that all together, and then pull that up into the higher-level assembly. In fact, you need that because the kinds of simulations automobile suppliers want to do today can’t be done without it. There was a big push to introduce surrogate modeling to allow those kinds of simulations to take place.
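The reduced-order-model idea Mueth describes can be sketched simply: fit a cheap model to a component’s detailed response, then reuse that model inside a system-level sweep that would be far too expensive to run against the full component simulation. The component, samples, and linear fit below are invented for illustration; in a hybrid twin the sample points could mix measured and simulated data.

```python
# (load, temperature rise) samples from a detailed component model or bench
# measurements -- hypothetical numbers, roughly linear by construction.
samples = [(10.0, 21.2), (20.0, 40.8), (30.0, 61.1), (40.0, 80.5)]

# Closed-form least-squares fit of a one-variable linear reduced-order model.
n = len(samples)
sx = sum(x for x, _ in samples)
sy = sum(y for _, y in samples)
sxx = sum(x * x for x, _ in samples)
sxy = sum(x * y for x, y in samples)
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
intercept = (sy - slope * sx) / n

def rom(load):
    # Millisecond-cheap surrogate for the detailed component simulation.
    return slope * load + intercept

# System-level use: sweep operating points against the ROM instead of the
# full model to find the worst-case temperature rise.
worst = max(rom(load) for load in range(0, 101, 5))
print(round(worst, 2))
```

In a real next-level assembly, each supplier component would contribute its own ROM, and the system integrator composes them without ever seeing the detailed models — which is also why this pattern sidesteps some of the IP-sharing concerns raised above.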
Zeng: Some of our customers were looking at this, building their own AI models ahead of us, because a couple of years ago, when AI began booming, their managers gave them a KPI for introducing AI into their design flows. They were the first ones to look at using AI to replace simulation. I asked them, ‘Okay, in the end, are you really using this in your workflow?’ The answer was no. The reason is that for whatever model they’re building, it takes a long time to extract the simulation data. And second, you have to run a lot of simulations over a really long time, which is way beyond the design cycle. But from the EDA point of view, I see great potential. We are able to embed this kind of model generation and data collection into our flow and create a model for them, which they can use in the design.
Thiruvengadam: With classic AI, federated learning would be a very, very difficult thing to pull off. Say there is a customer with a model they want to update using specific components from the EDA vendors. That creates a big concern from an IP leakage point of view for the EDA companies, where a customer is requesting models from different vendors to tweak their LLM. That would be a ‘no’ for the EDA vendors.