Where Are We On The Road To Artificial Intelligence In Chip Design?

Choosing the correct approach is key to using machine learning.


It’s hard to find an article today that doesn’t talk about how Artificial Intelligence is going to solve every possible problem in the world. From self-driving cars, to robots running an entire hotel (in Japan), to voice assistants answering your every question, it appears that every problem can be solved with AI. As so often in life, the true answer is: it depends. It depends on the nature of the problem, on the data available, and of course on further technological advances.

I often get the question: Machine Learning and the concepts of AI have been around since at least the 1960s, so why now? To which I typically answer: Did you know that the first electric car was invented in 1884? And here we are, more than 100 years later, and it is only now becoming practical. Many of the prerequisites, like high-capacity battery technology, a power grid that can supply enough current for charging, and solar cells that let you charge your car at home, simply weren’t available in the past. Similarly, machine learning has taken off thanks to advances in compute power and reduced compute cost, storage for vast amounts of data, and algorithmic breakthroughs such as the deep learning revolution and the advances in reinforcement learning that beat humans even in complex games like Go.

So what about chip design? On the surface, many challenges in the implementation, signoff and verification of an integrated circuit appear to be very good applications for machine learning. After all, there is “a lot of data” to mine, there are many highly complex problems that can’t be solved efficiently with traditional analytical methods, and on top of that, the complexity is ever increasing. Every new technology node adds more and more physical rules, which are simply a way to describe a desired (or undesired) layout structure. Over time, these rules become a burden to traditional implementation algorithms, as they slow down the system and push us ever further from the optimal result.

When it comes to the human aspect, the chip design process is still highly manual and labor intensive, despite all the advances in design automation tools over the past decades. That’s primarily due to the high complexity of the problem being solved, coupled with an almost infinitely large input space that is presented to a design implementation system. The choices are nearly endless and go well beyond the design inputs (such as RTL, netlist, floorplan, hierarchy choices) to technology and process choices (what library cells are best for power or timing), to design collateral (PVT selection, layer stack) and tool and flow settings. Designers spend months tuning all these inputs, until at some point they either reach their target (usually trading off some metrics) or simply run out of time and need to tape out. Consider the complexity of even a simple macro placement problem versus the solution space of games like chess or Go. That’s the main reason why EDA tools are full of heuristics, as even one small step inside a design implementation tool is an NP-complete problem in itself.
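To get a feel for that comparison, here is a back-of-the-envelope sketch in Python. The grid size and macro count are illustrative assumptions, and the count ignores every legality constraint a real placer has to honor:

```python
from math import lgamma, log

def log10_placement_states(grid_cells: int, num_macros: int) -> float:
    # log10 of grid_cells! / (grid_cells - num_macros)!: the number of ordered
    # ways to drop num_macros distinct macros onto grid_cells slots, ignoring
    # overlap, orientation and legality constraints.
    return (lgamma(grid_cells + 1) - lgamma(grid_cells - num_macros + 1)) / log(10)

# Illustrative numbers: 100 macros on a 100 x 100 coarse grid.
print(f"~10^{log10_placement_states(100 * 100, 100):.0f} candidate placements")
# For comparison, the game-tree complexity of chess is often quoted around
# 10^120 and that of Go around 10^360.
```

Even this toy version of the problem dwarfs the search spaces of board games, and it says nothing yet about routing, timing or power.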

What can machine learning do to automate these complex problems? Quite a lot, but the actual application and techniques being used depend highly on the problem space. As an example, while there technically is “a lot of data to mine,” in reality the amount of useful data to train a predictive model is quite limited. Unlike social media, where data can be harvested seemingly without limit, chip design data is available only in a fractured environment, and technology is constantly changing. Imagine a self-driving car that had to recognize new rules or road signs every few months and was only allowed to train on portions of the road at a time. In chip design, the training of machine learning models is likely to happen in each customer environment independently at the design level, and for each foundry ecosystem at the technology node level.

Despite these challenges, there are very good applications of machine learning, if you choose your approach wisely. Clearly, unlike image recognition algorithms, we can’t train on millions of static images to predict an object. As such, I don’t expect purely predictive models to be practical at the design level. Instead, predictive models make the most sense as replacements for heuristics inside a tool’s algorithms; think of a more accurate prediction of route DRCs at an early design stage, or the choice of a delay model in optimization and even signoff. At Synopsys, we have achieved some very impressive results by implementing such techniques inside our existing EDA tools, for example 5x faster power recovery in PrimeTime, or 100x faster high-sigma simulation in HSPICE.
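To make that pattern concrete, here is a minimal sketch of a learned predictor standing in for a hand-tuned heuristic. The features, data and model choice are illustrative assumptions, not a description of how the Synopsys tools work internally:

```python
# Sketch of a learned predictor replacing an early-stage congestion heuristic.
# Feature names, data and model choice are illustrative assumptions only.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# One row per layout region at the placement stage:
# [cell utilization, pin density, congestion estimate, net count]
X_train = np.array([
    [0.65, 0.12, 0.40, 180],
    [0.82, 0.30, 0.75, 310],
    [0.71, 0.18, 0.55, 240],
])  # in practice: thousands of regions from previously routed blocks
# Target: DRC violations observed in each region after detailed routing.
y_train = np.array([2, 14, 6])

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# At placement time, the tool can query the model instead of a hand-tuned
# heuristic and spread cells in regions where violations are predicted.
new_region = np.array([[0.78, 0.25, 0.70, 290]])
print(f"Predicted post-route DRC count: {model.predict(new_region)[0]:.1f}")
```

The value is not in the model itself but in where it sits: it replaces one specific guess deep inside the flow, where limited but highly relevant training data is enough.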

At the human designer level, there are many possible machine learning applications to help automate the design process, with the goal of moving towards a “no human in the loop” design flow. These range from more predictive flows, for example the automatic selection of input choices for a design flow, to faster debug and, eventually, more autonomous processes such as automatic floorplan creation and complex design decisions. These applications will use a variety of different machine learning techniques.
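As a toy illustration of the first of these, automatic selection of flow inputs can be framed as a search over tool settings. The parameter names and the run_flow() stub below are hypothetical; a real system would launch the implementation flow, read back QoR metrics, and typically use a far more sample-efficient strategy such as Bayesian optimization or reinforcement learning:

```python
# Toy sketch of automated flow-input selection via random search.
# Parameter names and run_flow() are hypothetical placeholders.
import random

PARAM_SPACE = {
    "target_utilization": [0.60, 0.65, 0.70, 0.75],
    "clock_uncertainty_ps": [20, 30, 40],
    "max_routing_layer": ["M7", "M8", "M9"],
}

def run_flow(params: dict) -> float:
    # Placeholder: a real system would run synthesis and place-and-route with
    # these settings and return a combined quality-of-results score
    # (e.g. weighted timing slack, power and congestion).
    return random.random()

best_score, best_params = float("-inf"), None
for _ in range(20):  # a fixed budget of 20 trial runs
    candidate = {name: random.choice(values) for name, values in PARAM_SPACE.items()}
    score = run_flow(candidate)
    if score > best_score:
        best_score, best_params = score, candidate

print("Best settings found:", best_params, "score:", round(best_score, 3))
```

The point of the sketch is the loop itself: once the flow can be driven and scored programmatically, the choice of inputs becomes something a machine can learn rather than something a designer tunes by hand for months.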

The availability of cloud compute services, the falling cost of compute and the emergence of specialized AI accelerators will play a key role in enabling this automation in the future.

The road to artificial intelligence in chip design is not a simple one, and things won’t happen overnight. But there are many promising technologies that have the power to completely transform the way chips are designed. And although AI brings groundbreaking potential, it will empower designers rather than replace them.


