From Tool Agents To Flow Agents

The industry has already demonstrated gains using AI in tight iteration loops, but how does that evolve to cover larger portions of the development flow?


Experts At The Table: AI is starting to impact several parts of the EDA design and verification flows, but so far these improvements are isolated to a single tool, or to small flows provided by a single company. What is required is a digital twin of the development process itself on which AI can operate. Semiconductor Engineering sat down with a panel of experts to discuss these issues and others, including Johannes Stahl, senior director of product line management for the Systems Design Group at Synopsys; Michael Young, director of product marketing for Cadence; William Wang, founder and CEO of ChipAgents and professor at the University of California, Santa Barbara; Theodore Wilson, a verification expert pushing for this development; and Michael Munsey, vice president of semiconductor industry for Siemens Digital Industries Software. What follows are excerpts from that conversation. Part one of this discussion can be found here. Part two is here.


L-R: Cadence’s Young; Synopsys’ Stahl; Siemens’ Munsey; ChipAgents’ Wang; Theodore Wilson.

SE: What’s the path forward toward a digital twin for the design and development flow? How do we get to the dream of having AI able to do the mundane tasks, or being able to optimize the design and the productivity of development teams? Does the industry need interfaces? Does it need standards? Is this something that EDA companies can handle, or does this need to be developed outside of EDA? What are the next steps?

Munsey: A combination of all of that. EDA vendors cannot provide everything. They can provide tools and interfaces, but customers are trying to create systems that are differentiated from their competition, and the EDA vendor will not have all the knowledge to provide what the end customers are trying to do. There will be some in-house development, as well as a combination of the models they are getting from their end customers. Putting together the methodology and the process will be done by the customers themselves.

Young: It comes back to motivation and connecting that to a business model. The business model has to work. Otherwise, it would just fall apart naturally. The challenge is that there are different organizations and different companies involved. There's non-EDA and there's EDA. Some of the companies out there have a trillion-dollar market cap, while EDA has a much smaller one. I also would argue that not every company wants to share their secret sauce. If Company A can do something much better than Companies B and C, they probably don't want to share that and put it into a standard. They want to be able to control those outcomes so they can have an edge in the market. In the pure sense, I love the idea because it means there are people thinking like an engineer should. The motivation is very clean, very pure. But it's hard to cut through that layer. How do you make the motivation and the business model drive across the industries?

Stahl: There’s no general answer across the industry. Every company will build its own methods of optimizing the overall flow, because it really depends on the problem space, their teams, team compositions, many different factors. History shows that they drive the EDA industry with the most pressing problems in the overall flow. They put pressure where the cost pressure really manifests itself. As an example, many companies will say, ‘We can go into the cloud. We can expand our compute capacity. That’s not really our problem. Our problem is, how can you help us quickly find that bug in the software? How can I optimize something faster?’ They say to us, ‘I’m willing to spend the money, but help me to find that faster.’ Our customers will tell us where it is pressing. They probably will continue to tell us that we have to optimize the mundane tasks for the individual tools, and maybe across a few tools, like in the area of coverage. But the big optimization loop that exists inside the company, they will do themselves. Maybe they’re asking us to improve the interface from one tool to another, but they will want to build that loop themselves. They cannot share how they get things to market. It’s really part of their differentiation.

Munsey: The path forward certainly comes down to interfaces and standards. However, developing a standard can take decades. What’s most important is maintaining an open infrastructure — one that allows for experimentation with different tools and methodologies. Ensuring access to data that can be mined for multi-objective optimization is crucial. This ties back to the concept of a digital twin for the product. The decisions being made involve multi-domain solutions. Consider any software-defined product. You’re dealing with software, semiconductors, ECAD for PCB design, wiring harnesses, and mechanical design. It’s one thing to optimize semiconductors, but what about optimizing the relationship between software and semiconductors within a larger product? For example, how does this impact product design, whether it’s hydraulic systems in a car or a plane, or cooling strategies based on PCB layout and overall product structure? These decisions span multiple domains, requiring a solution that integrates all aspects of design to enable true optimization.

SE: One of the problems that has been reported is that every tool vendor formats error messages slightly differently. Teams have been writing parsers in Perl to try to get the necessary data out of the tools before a large language model can do anything with it. Surely this is low-hanging fruit for an area where interfaces and standards would be helpful? Let’s get natural language out of the interface and make it a direct interface to AI agents, or to tools that can work directly on unambiguous data rather than parsing a report.

Stahl: You can argue that this is old thinking. This is last-century thinking, from when artificial intelligence did not exist and we had to make everything procedural and defined by interfaces. Maybe the large language models are solving these problems because they can look at large data sets. They can come up with their own reasoning in a completely non-algorithmic way, because they’re learning.

Wilson: This is an important discussion. I agree with the business-related questions about how this gets funded. How do the economics make sense? Large companies do this internally as part of the way they execute and compete. How would an EDA company be able to displace some of that? Writing a lot of code to parse logs is a very salient problem. A key frustration is that people see an issue, get distracted, and then it’s gone, and we spend time recreating an issue that was already in their workspace, because the data is lost. A related issue is compression. We run the same tests and the same builds over and over again, so we should not be consuming significant disk space to maintain that rich history. We’re always looking for performance out of the tools, yet team members typically log too much stuff just in case they need it. The EDA vendors have spent enormous effort to give us the fastest possible execution, and we throw it away with a lot of I/O. If we don’t have that I/O, because we send the data to a database directly, we don’t need to consume all this disk space and then read it back out of the logs. There are some very mechanical, plumbing-level things that could have enormous impact. They would be a way for the EDA industry to start to play in a concrete way in this merging of tools and this assessment of workflows. Ultimately, everybody who commented about the business realities is exactly correct.
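Wilson's point about sending tool output straight to a database, and about not storing the same regression output over and over, can be sketched in a few lines. This is purely illustrative, not any vendor's API: the schema, the in-memory sqlite3 backend, and the `record` helper are all assumptions. Identical lines from repeated runs are stored once with a counter, which is the kind of compression he describes.

```python
import hashlib
import sqlite3

# Illustrative sketch: stream tool messages into a database instead of flat
# log files, de-duplicating identical lines that repeat across regression
# runs so repeated builds don't grow the store.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE messages (
    hash TEXT PRIMARY KEY,   -- content hash used for de-duplication
    text TEXT,
    count INTEGER            -- how many times this exact line was seen
)""")

def record(line):
    """Insert a log line once; bump a counter on repeats."""
    h = hashlib.sha256(line.encode()).hexdigest()
    conn.execute(
        "INSERT INTO messages (hash, text, count) VALUES (?, ?, 1) "
        "ON CONFLICT(hash) DO UPDATE SET count = count + 1",
        (h, line),
    )

# The same test run twice produces identical output; only one copy is stored.
for _ in range(2):
    record("ERROR: assertion failed at cycle 1042")
    record("INFO: simulation complete")

rows = conn.execute("SELECT text, count FROM messages ORDER BY text").fetchall()
print(rows)
```

A real deployment would face concurrency and retention questions this sketch ignores, but the point stands: structured storage replaces both the disk-space problem and the log re-parsing problem at once.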

Young: We are still learning from each other, and from our customers, about AI exploration and its applicability to this space. We do see some hot spots that are meaningful to our customers, and we’ll continue to work with them to figure out how else we can explore. One interesting point is about the mundane data that people look at, and the vast amount of it. If you look at any verification suite and the amount of data that gets pumped out of the tools, no one in their right mind would try to view it visually. It is just not possible. You have to have some filtering, some smart script, to get that out. But do you really need all of it? Do you need to archive it? If no one is looking at it, does it matter? Those are all good questions to figure out how to optimize and compress the data.

Wilson: I casually brush up against these questions, but if we had someone who did database design, they would say these are solved problems. This is an issue of collecting data efficiently and aging it out. It naturally disappears from the database because no one looked at it. A lot of the stuff we are looking at is something that related industries have already covered, and we aren’t using the correct terms of reference.
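The "aging out" Wilson refers to is indeed a standard database pattern (TTL-style expiry, as in MongoDB's TTL indexes). A minimal sketch of the idea, where the table layout, the `last_read` timestamp, and the 30-day idle window are all assumptions chosen for illustration:

```python
import sqlite3
import time

# Sketch of aging out results no one looks at: rows carry a last-access
# timestamp, and anything idle past a cutoff is deleted.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE results (id INTEGER PRIMARY KEY, payload TEXT, last_read REAL)"
)

now = time.time()
day = 86400.0
conn.executemany(
    "INSERT INTO results (payload, last_read) VALUES (?, ?)",
    [("fresh run", now), ("stale run", now - 90 * day)],
)

def age_out(max_idle_days=30.0):
    """Delete rows whose last access is older than the idle window."""
    cur = conn.execute(
        "DELETE FROM results WHERE last_read < ?",
        (time.time() - max_idle_days * day,),
    )
    return cur.rowcount  # number of rows removed

print(age_out())  # the 90-day-old row is removed; the fresh one survives
remaining = conn.execute("SELECT payload FROM results").fetchall()
print(remaining)
```

In a production store this would run as a scheduled job or be delegated to the database's own expiry mechanism, but the mechanics are exactly the "solved problem" Wilson describes.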

Munsey: This is really a discussion about structured versus unstructured data. In fact, many companies specialize in converting unstructured data into structured data so it can be better utilized by LLMs. Personally, I’m less interested in formalizing the data coming out of the tools, because that’s not where real innovation happens. Instead, we should leverage existing technology to structure unstructured data while focusing on ensuring all relevant information is accessible to LLMs. That means not keeping certain results or report files internal, but making sure all available information is accessible. Ultimately, that’s what will benefit LLMs and AI platforms the most.
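Munsey's structured-versus-unstructured distinction is concrete in the error-message case raised earlier: per-tool message styles can be normalized into one record format before anything downstream (an LLM, a dashboard) consumes them. In this sketch both message styles and the `to_record` helper are invented for illustration, not taken from any real tool:

```python
import re

# Two made-up error-message styles, each parsed into the same structured
# record so downstream consumers see one format regardless of the source.
PATTERNS = [
    re.compile(r"^Error-\[(?P<code>\w+)\]\s+(?P<msg>.+)$"),  # hypothetical style A
    re.compile(r"^\*E,(?P<code>\w+):\s*(?P<msg>.+)$"),       # hypothetical style B
]

def to_record(line):
    """Return a structured record for a recognized line, else None."""
    for pat in PATTERNS:
        m = pat.match(line)
        if m:
            return {"severity": "error", **m.groupdict()}
    return None  # unrecognized: leave for a fallback (or LLM) pass

print(to_record("Error-[SYNTX] unexpected token near 'endmodule'"))
print(to_record("*E,WIDTH: port width mismatch on bus 'data'"))
```

This is the "leverage existing technology to structure unstructured data" step; the differentiation Munsey cares about lives in what is done with the records, not in the parsing itself.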

SE: AI is still very new within EDA, and it is fast moving. How big an impact is this on you, as EDA companies? Do you now have to innovate and do things at such a rate that you’ve never seen or had to do before?

Stahl: It is across the industry, and I’m hard pressed to see any customer meeting where we do not talk about AI, and generative AI, in regard to helping their design productivity. These discussions are happening all the time, and the innovation happens along with it. It’s clearly a new era, and it’s fast. There’s no executive on the semiconductor side that does not have a focus on leveraging AI, because the benefit is obvious. But what exactly to do is not obvious yet. We are in that space where the opportunity is huge, but the execution has to be defined.

Wang: I came from an AI background, and I’ve witnessed technologies in AI that change every three to six months. Things from 2018 or 2020 are no longer relevant in 2025. It’s all moving extremely fast. Every quarter, every half year, new models are released, and their behavior is fundamentally different. The challenge is that chip design, the tape-out time, is usually about two years. So how do you utilize AI to improve this particular vertical? It is not easy, because you can’t just use generic AI and hope to solve deep problems in functional verification. How do you innovate in this vertical space? There are a lot of things everybody is learning in the process.

Munsey: This is a major inflection point in the industry, driving rapid change across all aspects of the field. AI and machine learning are now embedded not only in design tools, but also in intelligent agents that can assist with decision-making. At the same time, these advancements are pushing the industry to rethink cross-domain workflows — how to optimize them, mine data effectively, and ultimately improve efficiency. This shift also extends to workforce development, enabling engineers to become more efficient by leveraging insights from cross-domain data. As a result, the landscape of EDA and product design will look very different in the next three to five years than it has at any point in history. Mundane tasks that engineers traditionally have handled will be fully automated and removed from their workflow. This will allow engineers to focus on what they do best — designing and developing products. When I graduated from college, people would ask, ‘How much time do you think you’ll actually spend engineering in your first job?’ Most assumed it would be all their time, but the reality was closer to 10%. We may be approaching a point where engineers and scientists can spend 80% to 90% of their time on actual design, development, and verification, instead of filling spreadsheets, creating PowerPoint slides, or writing documentation just to track mundane information that no one is excited about.

Young: We already have seen the benefit of AI agents in getting better PPA results, based on iteration and learning. On the verification side, we already have a similar type of capability for finding the common causes of problems, looking at the databases, looking at the regressions, and reducing the number of regression tests needed to provide the same coverage. All these old-school ideas now can be implemented. I do anticipate that we will go through these improvements over time. It is moving fast, and I don’t know what the end game is. But it is an exciting time that we live in, because we have a lot of knobs to turn.


