How ML can enable self-optimizing tools that look for DRC hotspots, EM/IR distribution, and more.
AI is transforming the world around us, creating avenues for innovation across all sectors of the global economy. Today, AI can interact with humans through natural language; identify bank fraud and protect computer networks; drive cars around city streets; and play complex games like chess and Go. Machine learning is offering solutions to many complex problems where analytical solutions may be too expensive or practically impossible. How about chip design? Can ML offer solutions to key problems in semiconductor engineering?
A deluge of design challenges
Over the years, the EDA industry has offered many solutions for modeling and creating complex designs. Most design problems in EDA are NP-hard; no known polynomial-time algorithms exist for them, so an optimal solution cannot be identified analytically. Today's EDA systems are finding it difficult to keep up with advanced process node requirements due to a deluge of new design challenges (figure 1).
To make things worse, these requirements are interdependent and need to be considered concurrently across multiple planes of design optimization. The applicable techniques also depend heavily on each specific problem space. How does one prepare a general solution for a specific problem when there is limited access to the design environment?
AI-enhanced design tools that learn and improve
Machine learning (ML) offers opportunities to enable self-optimizing design tools. Much like self-driving cars that observe real-world interactions to improve their responses in different (local) driving conditions, AI-enhanced tools can learn and improve in (local) design environments after deployment. These new capabilities can be embedded in different design engines, giving EDA developers a new arsenal of solutions for today's demanding semiconductor design environment.
Example: Fast delay prediction during optimization
At advanced process nodes, complex physical effects and foundry rules can impact design convergence. Modeling capabilities for signal integrity, waveform propagation, noise, and other effects can calculate delay accurately, but they are computationally expensive and must be used with care during pre-route design steps. An ML Delay Predictor is a statistical model that can be trained to capture timing at multiple stages of design evolution, offering upstream engines faster visibility into complex downstream effects and enabling better decisions. The delay predictor improves design convergence and accelerates design evolution toward better PPA (figure 2).
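The article does not disclose the model used, but the idea can be sketched with a simple supervised regressor: train on cheap pre-route features paired with "golden" delays from the expensive calculator, then let upstream engines query the surrogate. The feature names, data, and model choice below are assumptions for illustration only.

```python
# Minimal sketch of an ML delay predictor (illustrative only).
# Feature names, training data, and model choice are assumptions,
# not the actual implementation described in the article.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

# Hypothetical training data: cheap pre-route features per timing arc
# (e.g., fanout, estimated wire length, driver strength, input slew)
# paired with the expensive "golden" delay in picoseconds.
rng = np.random.default_rng(0)
n_arcs = 10_000
features = rng.random((n_arcs, 4))
golden_delay = 120 * features @ [0.5, 1.2, 0.8, 0.3] + rng.normal(0, 2, n_arcs)

X_train, X_test, y_train, y_test = train_test_split(
    features, golden_delay, test_size=0.2, random_state=0)

# Train a fast statistical surrogate for the expensive delay calculation.
predictor = GradientBoostingRegressor(n_estimators=200, max_depth=3)
predictor.fit(X_train, y_train)

# Upstream engines (placement, pre-route optimization) can now query the
# predictor instead of invoking full signal-integrity-aware delay analysis.
print("MAE (ps):", mean_absolute_error(y_test, predictor.predict(X_test)))
```

In practice the accuracy target is set by how much error downstream optimization can tolerate; the surrogate only needs to be good enough to steer pre-route decisions toward what signoff-quality analysis will later confirm.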
Extending the ML Predictor paradigm
An entire class of ML Predictors can look for DRC hotspots, EM/IR distribution, and much more. Additional classes of ML models offer a variety of benefits to self-optimizing design tools (figure 3).
These engines can be pre-trained up-front or allowed to self-train during design. They continuously learn and improve in the design environment to enable faster time-to-results with better QoR.
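As a hedged illustration of both ideas, the sketch below shows a DRC-hotspot classifier that is pre-trained on past designs and then updated incrementally as new regions are routed and checked. The per-tile features, labels, and model are hypothetical stand-ins, not the tool's actual engine.

```python
# Minimal sketch of a DRC-hotspot predictor that can also self-train
# during design (illustrative; features and data are hypothetical).
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(1)

def tile_features(n):
    # Hypothetical per-tile features: routing congestion, pin density,
    # cell utilization, local via-count estimate.
    return rng.random((n, 4))

def drc_label(X):
    # Stand-in for the expensive signoff DRC result: 1 = hotspot, 0 = clean.
    return (X @ [2.0, 1.5, 1.0, 0.5] > 2.5).astype(int)

# Pre-train up-front on tiles from previously completed blocks.
X0 = tile_features(5_000)
clf = SGDClassifier(loss="log_loss")
clf.partial_fit(X0, drc_label(X0), classes=[0, 1])

# Self-train during design: as new regions are routed and checked,
# fold the fresh labels back into the model incrementally.
for _ in range(10):
    X_new = tile_features(500)
    clf.partial_fit(X_new, drc_label(X_new))

# Flag likely hotspots early so the router can avoid them proactively.
candidate_tiles = tile_features(8)
print("predicted hotspots:", clf.predict(candidate_tiles))
```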
Next: An AI-enhanced design platform
Synopsys introduced the industry's first AI-enhanced tool (PrimeTime ECO) in 2018. Since then, we have steadily introduced new ML models into our design platforms that enable better QoR and faster turnaround time on tough design problems. Today, there are many ML models in Synopsys AI-enhanced tools across digital implementation, circuit simulation, test, physical verification, and signoff, with many more planned for release in 2020. We are executing toward a clear vision for the AI-enhanced Fusion Design Platform: an orchestrated, self-optimizing design environment that goes well beyond single-model solutions to deliver even better end-to-end QoR across the design environment, with ML-Everywhere!