
EDA Vendors Widen Use Of AI

Leveraging data across IC design flow improves time to market with more exploration of options and better optimization.


EDA vendors are widening the use of AI and machine learning to incorporate multiple tools, providing continuity and access to consistent data at multiple points in the semiconductor design flow. While gaps remain, early results from a number of EDA tools providers point to significant improvements in performance, power, and time to market.

AI/ML has been deployed for some time in EDA. Still, it has been mostly hidden from user view, and in some cases it isn’t entirely clear whether the tools are really using some form of AI or whether they should be classified as more simplistic expert systems. But as chip design becomes more complex at each new process node and in advanced packaging, unleashing a flood of data, EDA vendors are putting more effort into using AI/ML to sort through that data and weigh a variety of options.

“In the last few years, every one of our tools has absorbed AI and machine learning capabilities,” said Aart de Geus, chairman and co-CEO of Synopsys. “But it’s really been as a computational complement or multiplier on our existing algorithms. Now what has happened is we have applied AI/machine learning techniques on part of the design flow.”

The goal is to shorten the time it takes to fully design a complex chip that has been optimized for performance and power, which can include everything from customized accelerators to various types of memory in unique packaging configurations. “It’s committing to a 1,000X productivity improvement this decade,” de Geus said. “But two things have changed in the past year. One is that the amount of data massively increased since 2018. Machine-created data dwarfs what humans are creating. At the same time, machine learning has just arrived at the point where computation is good enough.”

Fig. 1: Benefits of AI/ML in EDA for data center chip design. Source: Synopsys

One significant change is a recognition that AI/ML is an engineering tool, not a push-button solution. It allows users to comb through large amounts of data quickly, but domain-specific expertise is still required.

“This works like assisted driving for the RTL through sign-off flow,” said Kam Kittrell, senior group director for digital and signoff marketing at Cadence. “It’s not completely automated driving, but it does provide productivity improvements by expanding what engineers can do. This will make sure that different collateral and different steps are followed in a certain order so things are built right and done right, and that there’s a communication mechanism. They’ll get a basic working flow going, and then they’ll look at the power, performance, and area compared to their sign-off objectives, and start tweaking different things. Maybe they’ll try to hold the performance, if they’ve met that goal, and reduce the power as much as possible. Anything toward zero power in a certain amount of time is usually the goal, with some limit that you can’t go above. And they’ll spend a lot of time on this for a block.”

While much of this used to be a fairly straightforward engineering challenge, the number of options and potential interactions has exploded. There are many ways to achieve the same goal, but some are better than others in the context of a larger system or a particular application. Still, keeping track of all the possible tradeoffs and options in complex chips is well beyond the capacity of the human brain. AI/ML can help sort through the options and save time in terms of identifying the best possible choices, as well as eliminating what doesn’t work.

“A lot of it is controlled in the tool with different knobs, such as how you deal with congestion and different macros,” Kittrell said. “For anything you do, there’s usually three or four selections of different controls you can put on this. Once you have a basic working flow, you can tell it the PPA objectives and which knobs it is allowed to turn, which can be high, low, or medium effort optimization.”
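The knob-driven optimization Kittrell describes can be sketched as a search over tool settings under a runtime budget. The knob names, effort scores, and cost model below are invented for illustration; real tools expose different controls and evaluate PPA with actual implementation runs rather than a toy formula.

```python
import itertools

# Hypothetical knob settings -- real tool knobs differ.
KNOBS = {
    "congestion_effort": ["low", "medium", "high"],
    "macro_placement": ["auto", "guided"],
    "opt_effort": ["low", "medium", "high"],
}

# Toy scores standing in for the cost of each setting.
EFFORT_SCORE = {"low": 1, "medium": 2, "high": 3, "auto": 1, "guided": 2}

def evaluate(config):
    """Return a (power, runtime) estimate for a knob configuration.
    Purely illustrative: more effort -> lower power, longer runs."""
    effort = sum(EFFORT_SCORE[v] for v in config.values())
    power = 10.0 / effort
    runtime = effort * 1.5
    return power, runtime

def best_config(max_runtime):
    """Exhaustively search knob combinations under a runtime budget,
    keeping the lowest-power configuration that fits."""
    best, best_power = None, float("inf")
    for values in itertools.product(*KNOBS.values()):
        config = dict(zip(KNOBS, values))
        power, runtime = evaluate(config)
        if runtime <= max_runtime and power < best_power:
            best, best_power = config, power
    return best, best_power
```

In practice the search is far smarter than exhaustive enumeration, which is exactly where ML-guided exploration earns its keep, but the loop structure — propose a knob setting, evaluate PPA, keep the best under a constraint — is the same.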

Fig. 2: Automated floor planning options using AI/ML. Source: Cadence

For design teams, this also opens the door to using probabilities as part of the solution. The goal is still to maximize performance while minimizing power, yet it also allows design teams to begin analyzing the impact of different architectures based on such factors as energy efficiency. For example, different layouts and interconnects can have a significant impact on how far data needs to travel, the amount of resistance and capacitance for moving that data, and the overall performance of a system, sub-system, or even an individual block. It also can have a big impact on localized and system margin, power delivery network design, and the ability to test and inspect a device to improve reliability.

“Google has been doing some really great work on chip layout where they’re applying AI algorithms to really optimize the layout of different blocks on a chip,” said Steven Woo, fellow and distinguished inventor at Rambus. “They’re not the only company looking at this work, but they’ve been quite vocal about it. Layout, in particular, is a very labor-intensive task and it’s one that you have to iterate because when you first develop a chip, you think about what the performance is going to be, but it really does depend on where things sit relative to each other. The distances matter, the capacitances matter. You do an initial layout, and then you go back and back-annotate your simulations with the real capacitances, etc. That may not all be perfect, so you have to go back and iterate again and again. Being able to both make that cycle faster, and being able to almost reduce the number of iterations by using AI can really help you quite a lot.”
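The iterate-and-back-annotate cycle Woo describes can be sketched as a fixed-point loop: estimate timing, lay out, extract real parasitics, compare, and repeat until the two agree. The `extract` callable here is a stand-in for a real parasitic-extraction step; the tolerance and iteration cap are arbitrary.

```python
def iterate_layout(initial_estimate, extract, tolerance=0.01, max_iters=20):
    """Repeat place -> extract parasitics -> back-annotate until the
    pre-layout estimate and the post-layout extraction agree.

    `extract` stands in for a real extraction run: given the current
    estimate (which drives the layout), it returns the measured value.
    Returns (converged value, iterations used)."""
    estimate = initial_estimate
    for i in range(1, max_iters + 1):
        measured = extract(estimate)      # extraction after layout
        if abs(measured - estimate) <= tolerance:
            return measured, i            # estimates agree: converged
        estimate = measured               # back-annotate and retry
    return estimate, max_iters
```

The payoff of applying ML here, as Woo notes, is twofold: each trip around the loop gets faster, and a model trained on past layouts can start the loop closer to the fixed point, so fewer iterations are needed at all.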

What’s different now?
To some extent, these developments are evolutionary. None of this would have been possible, for example, if EDA vendors had not been parallelizing the processing in their tools over the past decade. Initially, most of these tools were single-threaded, but in order to keep up with increasing density, EDA vendors have been spreading computations across multiple processing elements.

That was step one. The next step was to overlay AI/ML capabilities on the EDA tool algorithms, and those have been used piecemeal in EDA tools for the past several years as vendors rather quietly expanded capabilities and assessed the improvements. Now, with data to back up those improvements, vendors are trumpeting the results and the new effort to tie together data so that it can be used at multiple points in the flow. Data increasingly will be able to shift both left and right, allowing engineering teams to tap it as needed at any point, such as a last-minute ECO.

“Instead of an algorithm-based data solution, now you have a data-based solution,” said Anoop Saha, market development manager at Siemens EDA. “Almost every tool in EDA is using that. The challenge is that we are using combinatorial algorithms that have been optimized over a few decades. So how do you figure out whether machine learning will do it better than what you can do using just combinations? It doesn’t work everywhere, but there are chances for improving almost everything just by looking at the data. So at a minimum, it should improve the performance and/or resource utilization. And to be useful, that should be a minimum of an order of magnitude improvement. You also can optimize flows like debug, where a user spends a lot of time in coverage and closure. There’s also design exploration and power analysis, where you’re trying to isolate the optimal solution. That requires a combined effort between the EDA developers and the customers.”

Fig. 3: ML uses in optical proximity correction. Source: Siemens EDA


AI/ML technology also has changed significantly over the past decade. Instead of doing all of the computation in the cloud, the inferencing piece can be done on a much smaller system. EDA vendors have been researching and experimenting with this technology over the past few years, and they are convinced the effort has been worthwhile.

“In 2012, when AlexNet came out, a lot of researchers jumped on this,” said Nick Ni, director of product marketing for AI and software at Xilinx. “And now, in automotive, the data center, medical, and industrial, there is a lot of commercialization going on. EDA is the next big thing because the algorithms are so complex. They’re full of heuristics. And companies like Xilinx and the EDA companies have been accumulating huge amounts of learning from data, which we can re-use to train the network and complement the heuristics we have to further improve the results, and to provide faster convergence on the results.”

Ni said that in the latest papers, the average QoR gain is 1% or 2% without ML. “With machine learning, we’re seeing 10% or higher results improvement, which is quite a breakthrough,” he said. “Another benefit is faster design iteration. Today, if you’re trying to do a very full or high-frequency design, you have to go through many, many iterations, and maybe try many different settings or strategies to get to what you need in terms of high frequency. But with machine learning, based on past experience and data, we can pick the right path much quicker. You don’t have to run as many iterations to get to the goal you need.”
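Using past runs to pick a promising path, as Ni describes, can be sketched as a weighted vote over historical results: strategies that met similar frequency targets before are ranked higher. This toy ranker stands in for a trained model; the strategy names, history format, and weighting are illustrative only.

```python
def pick_strategy(history, target_freq):
    """Pick the tool strategy whose past runs most often met targets
    near `target_freq` -- a stand-in for a trained model ranking
    candidate settings before any iterations are run.

    `history` holds (strategy, past_target, met_goal) tuples, where
    met_goal is 1 if that run closed timing and 0 otherwise."""
    scores = {}
    for strategy, past_target, met in history:
        # Weight each past run by how close its target was to ours.
        weight = 1.0 / (1.0 + abs(past_target - target_freq))
        hits, total = scores.get(strategy, (0.0, 0.0))
        scores[strategy] = (hits + weight * met, total + weight)
    # Rank strategies by weighted success rate.
    return max(scores, key=lambda s: scores[s][0] / scores[s][1])
```

A real flow would feed far richer features (design statistics, congestion maps, prior QoR) into a learned model, but the effect is the one Ni describes: start from the setting the data says is likely to work, and skip most of the blind iterations.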

This is essential for keeping pace with increasing complexity at each new process node or advanced package iteration, as well as the increasing customization demanded by end users for different applications.

“We’ve got two customers with more than 100 RISC-V processors with vector engines on their chips, processing small frames and the machine learning convolution algorithms in parallel,” said Simon Davidmann, CEO of Imperas. “There are 100 processors, all running in parallel, with simulation algorithms, which are mostly running in parallel. Now you have hardware guys building things that the software tools traditionally find it hard to help with, such as architectural exploration, functional software development, verification, and performance analysis. And the general-purpose tools are finding it hard to keep up with next-generation processors. Traditionally, the current generation of tools makes use of the current generation of processors. The next generation is 100 times more complex, and tools have a challenge keeping up.”

AI/ML can help considerably here. “Imagine that you have millions of slight incremental optimizations, and now you want to look at different clocking schemes to figure out which one is better,” said de Geus. “By changing your clocking scheme, it typically meant that you would have to change so many things that you just couldn’t do it. But now we can, and what we saw was that suddenly everything jumped into a different space and it was actually better. Let me give you another example that surprised me. So apps A, B, C, and D were all running fine, but with app E, there was a huge spike in power. The dynamic power is impossible to estimate just by looking at the chip. You need to know what’s happening in the software. What we’re now able to do is to connect the notion of understanding the activity of an app to optimizing the chip for it.”

Not all the pieces are in place, and the completeness of the integration of AI/ML tools varies from one vendor to the next. But the trend is clear for the EDA industry.

“This is a good first step,” said Cadence’s Kittrell. “We’re seeing a big change in the way things can be done that will positively impact our customers and the delivery of chips in the future. There are lots of people getting their minds wrapped around this right now. We’ve been out training customers on the potential, and talking to different product groups within our company on the digital flow just to make sure everybody’s thinking outside the box on how they can make their tool solve some of the longstanding problems more efficiently.”

Shifting priorities
To some extent, the push for a broader use of AI/ML reflects the growing reliance on commercial EDA tools, and demand from those companies to get to market more quickly with highly customized designs. Even vertically integrated companies, such as Intel and IBM, now rely on commercial tools for designing chips, and that need will only grow as designs become increasingly complex and heterogeneous, with a variety of advanced packaging options and system constraints.

One of the less-obvious reasons for this shift is that what used to be handled by the foundries to improve yield — typically through margin and a lengthy rule deck — has shifted much farther to the left of the design-through-manufacturing flow. That margin is no longer acceptable to users in advanced-node and advanced-packaging designs, because it negatively impacts power, performance and area. So the pressure has shifted to design teams to improve yield.

“Yield is statistical variability,” said de Geus. “So there are rules that two lines can only be this close. These are not real rules. They’re just what somebody decided. But if you put the lines a little closer, you change the probability a little bit, and further away, you’ve changed it again. So one of the things we do, for example, is that when there is a space, we purposely move things further apart because it improves your statistics. You have to optimize first for variables like speed and power, because that’s your spec. After that, reducing area is good, but reducing congestion is really good because you cannot put margin in those places.”
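De Geus’ point that spacing shifts defect statistics can be illustrated with a toy yield model in which the probability of a random particle shorting two adjacent lines falls off with their spacing. The exponential form and every number below are purely illustrative assumptions, not foundry data or real design rules.

```python
import math

def short_probability(spacing_nm, defect_scale=20.0):
    """Toy model: probability that a particle defect shorts two lines,
    falling off exponentially with spacing. Numbers are illustrative."""
    return math.exp(-spacing_nm / defect_scale)

def yield_estimate(spacings):
    """Die yield as the product of per-gap survival probabilities,
    assuming independent defects (a simplification)."""
    y = 1.0
    for s in spacings:
        y *= 1.0 - short_probability(s)
    return y
```

The model captures the behavior de Geus describes: where there is slack, nudging lines apart (say from 40nm to 60nm of spacing) raises the survival probability of that gap, and therefore the estimated yield, without costing any area.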

In addition, it takes fewer resources to fix these problems than with previous methods, which reduces the barrier to entry for companies designing chips or chiplets. “Now it’s not as brute force because it’s data-driven,” said Siemens’ Saha. “We have the knowledge of which simulations we should try — which ones are the most effective to get to the target. That’s a key difference. You run far fewer simulations, 10X or 20X, and still get high sigma on your library.”

Conclusion
The use of AI/ML is spreading in the EDA market. There are more tools being integrated, and EDA companies are figuring out how to best utilize AI/ML, and equally important, where it doesn’t really add much value.

The ability to optimize designs at multiple points in the design flow is a huge win for chipmakers. What remains to be seen is just how far to the right this technology is implemented, and how this ultimately will tie multiple pieces together. EDA tools providers have just scratched the surface of what can be done with AI/ML, but they are moving aggressively toward integrating more pieces. Time will tell how all of this will work out, but at least at the early stages it looks very promising.


