A multi-faceted approach is required to deal with growing complexity and a shortage of engineers.
EDA vendors are taking aim at new ways to improve the productivity of design and verification engineers, who are struggling to keep pace with exponential increases in chip complexity amid tight time-to-market windows and a constrained engineering talent pipeline.
In the past, progress often was as straightforward as improving algorithms or parallelizing computations in a linear flow. But with the latest generations of leading-edge chips, much has changed. Multi-die integration requires multi-physics analyses earlier in the design flow, and changes made in one part of a design can have a profound impact elsewhere in an SoC or package, or even in the field. The challenge now is to keep pace with a flood of competing design elements in a systematic way, and that requires a mix of advances in existing tools and methodologies, the adoption of innovative new technologies, and in many cases a different way of approaching problems.
“There are opportunities to improve the productivity of tools and productivity of designers,” observed Amit Gupta, vice president and general manager, Custom IC Division at Siemens EDA. “We need to improve the runtime, or the coverage, or the speed of the actual EDA core tools. And then, we need to improve the productivity of designers themselves — and particularly junior designers. The industry needs more and more engineers, and we need to ramp up these engineers.”
Improving the tools has been an ongoing effort, starting with shifting more tasks left in the design flow. But much more is needed.
“One way is improving the core technology itself,” Gupta said. “SPICE simulators and improving the core solver technology are one dimension of how we’re doing that. Another dimension is the hardware the tools are running on, such as GPU acceleration. What are the opportunities for using GPUs compared to traditional CPUs to accelerate the runtime and to do parallelization? Where is it possible? We’re also seeing a lot of customers looking at adopting the Arm architecture to improve runtime and potentially to reduce the cost. A third area is AI. How can we apply not just traditional machine learning techniques, but also reinforcement learning techniques, generative AI, and agent-based AI? There’s a lot of innovation going on in that area to improve the productivity of tools, putting AI under the hood to improve runtime, coverage, and the user experience. Can a junior designer now use generative AI, where they can say in ChatGPT-like ways, ‘This is the task I’m trying to do’? The large language model is able to give answers as to, ‘This is how to get the results more quickly. This is how you set it up.’ Then there are agents. Can we have agents to automatically run the tools with a natural language interface?”
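As a rough illustration of that last idea, the sketch below shows how an agent might translate a natural-language request into a constrained tool invocation. Everything here is hypothetical: the `call_llm` stub stands in for a real model, and the tool names and JSON schema are invented rather than any vendor's actual interface.

```python
import json

# Hypothetical allow-list of tool invocations the agent may run. A real flow
# would map to a vendor's actual command-line interfaces instead.
ALLOWED_TOOLS = {
    "spice_sim": ["spice_sim", "--netlist"],
    "drc_check": ["drc_check", "--gds"],
}

def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM call; returns a canned plan so the
    sketch stays runnable without a model behind it."""
    return json.dumps({"tool": "spice_sim", "args": ["top.cir"]})

def run_agent(request: str) -> None:
    # Ask the model to translate the engineer's request into a structured
    # command, constrained to the allow-list above.
    prompt = (f"Translate this request into JSON {{tool, args}}, using only "
              f"these tools {list(ALLOWED_TOOLS)}: {request}")
    plan = json.loads(call_llm(prompt))
    if plan["tool"] not in ALLOWED_TOOLS:
        raise ValueError(f"model proposed a tool outside the allow-list: {plan}")
    cmd = ALLOWED_TOOLS[plan["tool"]] + plan["args"]
    print("would run:", " ".join(cmd))  # subprocess.run(cmd) in a real flow

run_agent("Simulate the top-level netlist and report the slew rates.")
```

Keeping the model constrained to an allow-list, rather than letting it emit arbitrary shell commands, is one way such an agent could stay safe enough to run unattended.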
AI-related improvements
AI adds a whole new set of options, but it comes with a learning curve. “There is certainly an increase in understanding, and in some cases deploying some AI algorithms to speed up the work of automated apps,” said Ashish Darbari, CEO of Axiomise. “There are cases where EDA has automated apps for some time, such as connectivity checking using formal under the hood. But with AI/ML chips, the scale and performance of connectivity checking is constantly getting pushed. In the case of formal verification tools, the top vendors in the industry are spending a lot of money on improving compile-and-elaborate times, making SAT solvers faster, and looking at the problem of scalability. They’re also investing in building AI agents to guide verification engineers live during the verification, a bit like a co-pilot.”
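Conceptually, the connectivity checking Darbari mentions amounts to proving that a driving path exists between two points in a design. The toy sketch below frames it as reachability over an invented netlist graph; production apps prove the same property with formal engines under the hood rather than a simple graph walk.

```python
from collections import deque

# Toy netlist: each net drives a list of downstream nets (through buffers,
# muxes, fabric hops, etc.). A real connectivity app extracts this structure
# from RTL and proves reachability formally.
NETLIST = {
    "cpu.irq_out": ["fabric.in0"],
    "fabric.in0":  ["fabric.out3"],
    "fabric.out3": ["intc.irq_in"],
    "dma.done":    ["fabric.in1"],
}

def connected(netlist: dict, src: str, dst: str) -> bool:
    """Breadth-first search for a driving path from src to dst."""
    seen, queue = {src}, deque([src])
    while queue:
        net = queue.popleft()
        if net == dst:
            return True
        for nxt in netlist.get(net, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

assert connected(NETLIST, "cpu.irq_out", "intc.irq_in")
assert not connected(NETLIST, "dma.done", "intc.irq_in")
```

At AI/ML-chip scale, with millions of such point-to-point checks, the pressure Darbari describes is on making the underlying engines fast enough to run them all.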
Much of this is new for EDA. “We’ve moved from manually drawing schematics, to writing RTL by hand, to more abstract approaches like high-level synthesis (HLS) and structured verification with UVM,” said William Wang, CEO of ChipAgents. “Each step has offered productivity gains by raising the level of abstraction or improving automation in specific phases of the design and verification flow.”
But EDA is approaching the limits of what traditional abstraction and scripting can achieve. “HLS and UVM have helped reduce effort in some domains, but they still demand deep tool expertise, long learning curves, and labor-intensive debug cycles,” Wang said. “As chips scale to designs with billions or even trillions of logic gates, these methods alone cannot keep up with the growing complexity — especially as architectures become more heterogeneous and design timelines compress. We have created a system of AI agents, purpose-built for chip design and verification. Rather than forcing users to conform to a fixed abstraction or methodology, the technology integrates directly into the flow — understanding design intent, parsing complex specs, generating and validating RTL, suggesting micro-architectures, synthesizing assertions, and even explaining waveform anomalies.”
This creates an opening for new tools and approaches. For example, AI agents can be layered on top of existing EDA tools. “Rather than replacing the existing toolchain, it can be augmented with intelligent agents that generate RTL and testbenches from spec, interpret waveform outputs, debug tracebacks, and adapt prompts to internal codebases and naming conventions,” Wang said. “That can dramatically reduce iteration time and manual overhead for both design and DV engineers. So just like utilizing the latest processors for parallelizing simulations, we also use modern hardware to accelerate AI agents.”
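To make “generate testbenches from spec” concrete, here is a minimal Python sketch that renders a SystemVerilog testbench stub from a small spec dictionary. The spec fields and the `fifo_ctrl` DUT are invented for illustration; an agent would extract this structure from the actual specification and follow the team's naming conventions.

```python
# Toy "spec" an agent might extract from a design document (hypothetical).
SPEC = {
    "dut": "fifo_ctrl",
    "clock": "clk",
    "reset": "rst_n",
    "ports": [("push", 1), ("pop", 1), ("data_in", 32), ("data_out", 32)],
}

def render_tb(spec: dict) -> str:
    """Render a bare SystemVerilog testbench stub from the spec."""
    decls = "\n".join(
        f"  logic [{w - 1}:0] {name};" if w > 1 else f"  logic {name};"
        for name, w in spec["ports"]
    )
    conns = ",\n".join(f"    .{name}({name})" for name, w in spec["ports"])
    return f"""module tb_{spec['dut']};
  logic {spec['clock']}, {spec['reset']};
{decls}

  {spec['dut']} dut (
    .{spec['clock']}({spec['clock']}),
    .{spec['reset']}({spec['reset']}),
{conns}
  );

  initial begin
    {spec['clock']} = 0;
    forever #5 {spec['clock']} = ~{spec['clock']};
  end
endmodule
"""

print(render_tb(SPEC))
```

The template itself is trivial; what the agent adds is filling it from an unstructured spec and iterating on it as the design changes.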
Such agents don’t replace traditional EDA algorithms. But they can help streamline the workflow, particularly if multiple agents are coordinated and context-aware.
“We’ve seen this reduce manual iterations in UVM test environments by identifying constraint and coverage bottlenecks early,” Wang said. “Instead of the traditional waterfall flow, teams are adopting agentic AI workflows to reduce iterations. For example, they might begin with a micro-architecture plan and evolve both the design and verification assets in tandem, using this technology to maintain the design intent in natural language alongside the implementation. It also helps new team members quickly get up to speed by querying design history conversationally. Across our early deployments, we’ve observed a 10X productivity boost in verification and debug workflows, along with measurable improvements in onboarding efficiency and developer satisfaction.”
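A toy version of “querying design history conversationally” might look like the sketch below, which records design intent in natural language alongside each revision and retrieves it by keyword. The history entries are invented, and a real agent would use an LLM with semantic retrieval rather than substring matching.

```python
# Toy "design history": intent captured in natural language alongside each
# RTL revision. Entries are invented for illustration.
HISTORY = [
    {"file": "fifo_ctrl.sv", "rev": "r12",
     "intent": "Widened almost_full threshold to hide DRAM refresh stalls."},
    {"file": "fifo_ctrl.sv", "rev": "r15",
     "intent": "Split push/pop clock domains; added gray-coded pointers."},
]

def ask(history, question: str):
    """Naive keyword retrieval; a real agent would pair an LLM with
    embedding-based search over the same records."""
    terms = question.lower().replace("?", "").split()
    return [entry for entry in history
            if any(term in entry["intent"].lower() for term in terms)]

for hit in ask(HISTORY, "why gray-coded pointers?"):
    print(hit["rev"], "-", hit["intent"])
```

Even this crude index hints at the onboarding benefit: a new engineer can ask why something was done instead of reverse-engineering it from the RTL.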
Non-AI improvements
AI isn’t the only source of improvement, however. The entire toolchain is changing to keep pace with rising complexity and an ongoing talent shortage.
“We have built an EDA vendor-neutral app to verify end-to-end architectural correctness of RISC-V processors,” said Axiomise’s Darbari. “The whole solution does not require any simulation vectors or tests. Instead, it uses formal proofs to establish correctness of all instructions, regardless of when they are issued, how many times they are issued, or how they interleave with other instructions. This very powerful method has been used to identify loads of bugs in previously verified open-source processors.”
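The flavor of proof Darbari describes, correctness for all inputs with no simulation vectors, can be shown in miniature with an SMT solver. The sketch below uses the z3 Python bindings to prove that a bitwise carry-propagation adder, an invented stand-in for an RTL datapath under test, matches the arithmetic specification for every possible 8-bit operand pair. It is not Axiomise's app, just the underlying idea.

```python
# pip install z3-solver
from z3 import BitVec, prove

WIDTH = 8
a, b = BitVec("a", WIDTH), BitVec("b", WIDTH)

def ripple_add(x, y):
    """Adder built only from XOR/AND/shift, standing in for the RTL.

    Each step computes a partial sum and the carried bits; carries can
    propagate at most WIDTH positions, so WIDTH iterations suffice.
    """
    for _ in range(WIDTH):
        x, y = x ^ y, (x & y) << 1
    return x

# One query, zero simulation vectors: z3 proves the equality holds for
# all 2^16 operand combinations. Prints "proved".
prove(ripple_add(a, b) == a + b)
```

A simulation-based flow would need exhaustive vectors to get the same guarantee at this width, and gives up exhaustiveness entirely at realistic widths.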
Analyzing component utilization in silicon designs for power savings is another area where the boundaries of productivity are being pushed. “An app called Footprint was recently deployed to over 80 designs in the open-source domain, including several RISC-V processors, GPUs, and NoCs, to compute component utilization on entire silicon without needing any testbench,” Darbari said. “The results were staggering in some cases. Loads of cases were identified where design components like registers, arrays, FIFOs, and counters were not fully utilized (i.e., partially redundant or fully redundant), but were burning power. These issues are not getting picked up any other way.”
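A drastically simplified version of that utilization check can be phrased as a solver query: can certain register bits ever leave their reset value under the design's update logic? The masked register below is invented for illustration; Footprint's actual analysis runs on full designs. If the query is unsatisfiable, the bits are dead weight.

```python
# pip install z3-solver
from z3 import BitVec, Extract, Solver, sat

d = BitVec("d", 8)    # write data arriving from the fabric
q_next = d & 0x0F     # update logic: the upper nibble is always masked off

s = Solver()
s.add(Extract(7, 4, q_next) != 0)  # can q[7:4] ever become nonzero?

if s.check() == sat:
    print("q[7:4] is reachable -- fully utilized")
else:
    print("q[7:4] can never change: partially redundant, but still burning power")
```

Here the check comes back unsatisfiable, flagging the upper four flops as candidates for removal, with no testbench involved.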
Accelerating everything
One of the challenges here is that a linear flow no longer works for complex designs because it takes too long. This is the whole idea behind shift left, and the industry has been pushing hard to develop more parts of the design concurrently. The problem is that designs have become so multi-faceted and interconnected, with so many dependencies and interactions among their components, that sorting through all the pieces while keeping development progressing smoothly is increasingly difficult. Tools, IP, methodologies, and processes all are racing ahead, and keeping track of them all is having an impact on first-time silicon success.
“Our business was primarily dictated by Moore’s Law, meaning every 18 months a new process geometry would come in and we would upgrade our IPs to that next process geometry,” said Manmeet Walia, executive director for product management at Synopsys. “Now it is being dictated by AI workloads, such that the end application is driving the standards and the process node. In fact, some developers don’t even care about the process node anymore. They need what they need in terms of the compute power and I/O bandwidth, and we have to deliver on that.”
Chips at the leading edge still use chiplets built on the newest process nodes, but those chiplets increasingly are being packaged together with other chiplets and memories built on older process technologies.
“2nm is now going into angstrom nodes, while compute bandwidth continues to scale with the process geometry, but not the I/O bandwidth,” Walia said. “This means we need to massively innovate in the SerDes technologies, the UCIe technology, the memory interfaces, the DDRs, the HBM, to deliver upon that I/O bandwidth to keep pace with the compute power. For I/O bandwidth, we need to innovate at an unprecedented pace. Even with standards, where spec cycles are much longer than silicon cycles, those specs are moving to the next revs quicker and quicker, and the market makers don’t even care about specs many times. Many hyperscalers want to go above and beyond the specs. Also, we are seeing massive shifts in technology, not only with the 2.5D and 3D-IC, but with technology like backside power now being introduced in angstrom nodes. All of this has implications on how we make signal IPs, as these are I/O technologies. We are engaged with four distinct foundries, and what our customers are now demanding has gotten very complex — more comprehensive solutions. It’s not a PHY and a controller. It’s not even a complete solution. It’s a very comprehensive solution, which is more likely than not packaged as a subsystem with thorough packaging guidelines, with the exact recipe of how you’re going to integrate this into your SoC.”
And all of this has ratcheted up pressure on EDA and IP providers to innovate much faster.
“We cannot throw in more bodies to do more work,” Walia said. “We have to look for the modern infrastructures that are coming up to improve our productivity, and that’s another big paradigm shift. We have to look for ways to innovate, and when we innovate, we have to go down the path of using AI within our tools. Hyperscalers want to be one generation ahead (OGA), so getting it first-time-right is absolutely critical because the spec cycle is so short. If we do not get it first-time-right, then we lose the market window.”
Where EDA productivity is headed
Spreading the design flow out linearly allows time to assess the possible interactions and behaviors and fix any problems. But nobody has time for that, so more needs to be done concurrently, and keeping everything in sync without overlooking potential problems is incredibly difficult.
“The future of EDA productivity isn’t just higher-level languages or new verification frameworks,” said ChipAgents’ Wang. “It’s AI agents that work side-by-side with engineers, guiding, augmenting, and accelerating their work with domain-specific intelligence. This doesn’t just automate the trivial. It helps engineers reason through problems, surface relevant context, and make architectural tradeoffs faster and more confidently. To unlock true scalability at the trillion-gate level, the EDA industry needs to move beyond scripts and templates. It needs intelligent systems that autonomously integrate contexts from code bases, past designs, and evolving specs — and which can contribute meaningfully to architecture, design, and debug in real time.”
The goal is faster time to more accurate results, but in a way that is more accessible to engineers. “It’s not just about speed-up. Speed-ups are good for consumer AI. But when you’re doing EDA AI, it’s got to be verifiable,” Siemens’ Gupta said. “It’s about being able to verify that the algorithms are producing the correct results. It’s not just black-box usability. It has to work out of the box, and we don’t want designers to have to become AI experts. The AI technology should also work across the board and be robust. It must work under the heterogeneous environments of on-premise and cloud infrastructures that users are deploying, and it must have brute force accuracy. You don’t want hallucinations in the AI results.”
Conclusion
Keeping up with more permutations and dependencies, in tight market windows, and with fixed or shrinking design teams, has created intense challenges for design engineering teams. Help is on the way, but it will take a variety of approaches, not just one improvement, to keep pace with all of these changes. It also will require questioning how things have been done in the past, and how they should be done in the future. And all of that needs to be baked into training so that young engineers can be far more productive from the get-go.
“What if we could use ChatGPT-like capabilities to really up-level junior designers to be able to do expertise-related tasks more efficiently? So, instead of reading manuals, be able to ask questions in natural language and be able to get the responses. That’s one use case for improving design productivity,” Gupta said. “Let’s say I want to set up my testbenches, I want to set up my measurements, I want to configure my runs in a more efficient, ‘just right’ way, instead of having to go talk to a senior engineer. What if I could have a generative AI capability be able to give me that information?”
Big changes are coming to every aspect of the chip design process, and the whole industry will be watching how effective they will be.
Related Reading
Slow Progress On Generative EDA
The dream may be to have generative AI write RTL, but text is only one of the necessary things AI must understand to help with many design and implementation problems.
How AI Is Transforming System Design
LLMs and machine learning are automating expertise in an aging workforce.
Using AI To Glue Disparate IC Ecosystem Data
Why the chip industry is so focused on large language models for designing and manufacturing chips, and what problems need to be solved to realize those plans.