Autonomous Design Automation: How Far Are We?

How machine learning interacts with chip design in different ways.


The year is 2009, at a press dinner during the Design Automation Conference (DAC) in a posh little restaurant in San Francisco’s Civic Center. About two glasses of red wine in, one of the journalists challenges the table: “So, how far away are we from the black box that we feed with our design requirements and it produces the design that we send to the foundry?” We discussed the industry’s improvements over the past couple of decades, but autonomy at that scale? We didn’t see it happening. Today, 13 years later, given the growth in computing capacity, the emergence of the cloud, and advances in artificial intelligence and machine learning (AI/ML), what would the answer be?

The answer has two dimensions: the top-to-bottom design flow itself, from IP selection through implementation, and the level of autonomy that seems achievable at each step. As a framework, let’s use the levels of autonomy in transportation as an analogy.

Levels 1 and 2 are about assisting the driver with relatively simple tasks. Adaptive cruise control (ACC) handles braking at level 1, with steering assistance added at level 2. From personal experience, the braking ACC in my Kia Niro works quite nicely for following the car in front of me on the freeway, and I find it easy to trust. While ACC for steering took more time for me to get comfortable with, I now appreciate the jerk on the steering wheel, combined with the warning signal, when I am getting too close to the lane boundary. (The Gen Z driver in my family considers that sort of car behavior to be judgy-adjacent commentary on her driving, but so be it.)

The higher levels of autonomy are distinguished by the driving conditions they cover (up to 50 mph for level 3, up to 100 mph for level 4) and by how much control remains with the driver. Level 3 includes features such as driver-initiated lane changes, automated valet parking, and a traffic-jam chauffeur. Level 4 enables automatic lane changes and a cruising-chauffeur function for free driving. In transportation, level 5 autonomy gets us to robo-taxis and autonomous shuttles under all driving conditions.

How about electronic design automation (EDA) tools in comparison?

In general, EDA utilizes complex algorithms at its core: it is computational software. Tool users manipulate settings, options, and commands to find the best parameters for operating the tools on the various design representations. Like ADAS and autonomy in automotive, we can look at AI/ML productivity improvements from two perspectives: “AI/ML inside” and “AI/ML outside.” ML inside applies supervised ML models that are not visible to the user, augmenting the core EDA algorithms, for example by using a deep neural network as a proxy cost function during optimization. Users experience ML inside mainly as better results and improved productivity. As “assistance,” if you will. Examples include better timing prediction for layout in digital implementation and predicting yield-limiting hotspots in design for manufacturing. ML-enabled formal verification learns which solvers work best for which styles of design and how to optimize resources in proof orchestration, picking the most appropriate engine for each verification problem.
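To make the proxy-cost-function idea a bit more concrete, here is a minimal sketch in Python with PyTorch. Everything in it is a hypothetical illustration: the feature set, the stand-in training data, and the candidate ranking are placeholders for whatever an actual tool would use internally.

```python
# Minimal sketch of "ML inside": a small neural network trained as a
# proxy for an expensive cost function (e.g., post-route timing), so the
# optimizer can query predictions instead of full analysis runs.
# The features and data below are hypothetical stand-ins.
import torch
import torch.nn as nn

# Hypothetical features per placement candidate (net lengths, fanout,
# congestion estimates, ...); labels would come from full analysis runs.
features = torch.rand(1024, 8)
labels = features.sum(dim=1, keepdim=True) + 0.1 * torch.randn(1024, 1)

proxy = nn.Sequential(
    nn.Linear(8, 32), nn.ReLU(),
    nn.Linear(32, 32), nn.ReLU(),
    nn.Linear(32, 1),              # predicted cost
)
opt = torch.optim.Adam(proxy.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for _ in range(200):               # fit the proxy to logged runs
    opt.zero_grad()
    loss = loss_fn(proxy(features), labels)
    loss.backward()
    opt.step()

# The inner optimization loop can now rank candidates cheaply:
candidates = torch.rand(100, 8)
best = candidates[proxy(candidates).argmin()]
```

The point of the design is speed: once trained, the proxy answers in microseconds, so the surrounding optimizer can explore far more candidates than it could by calling the full analysis each time.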

In contrast, “ML outside” models are directly visible to users and are used to set options, constraints, and control flows outside the individual EDA tools. This approach captures the designer’s experience and knowledge as training data, and then uses the trained models to guide tool behavior and flow optimization. EDA has become more autonomous in this area, and we have recently made significant steps in the two domains that take up most of the development effort: functional verification and digital implementation.

For instance, users can now set targets like coverage in functional verification, and ML works out how to reach them within the scope of defined resources. For ML-assisted regressions, the user selects the specific regression for training, chooses which random variables to use, and defines the coverage metrics and the subset of the coverage space. ML then creates a new set of runs based on the user’s goals. The user iterates to improve the models and remains in control, pushing the buttons. Still, it is easy to imagine further autonomy from here.
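A simplified sketch of such a regression loop might look like the following. The knob names, the simulator call, and the use of scikit-learn as the learner are all hypothetical; an actual tool’s models and APIs will differ.

```python
# Hedged sketch of ML-assisted regression: learn which randomization
# "knob" settings tend to add coverage, then propose the next batch of
# runs within a user-defined budget. All names are placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def run_simulation(knobs):
    """Placeholder for a constrained-random simulation run; returns
    whether the run hit any new coverage bin."""
    return knobs[0] * knobs[1] > 0.5   # toy stand-in

# Seed regression: random knob settings and their observed outcomes.
X = rng.random((200, 4))               # 4 hypothetical random variables
y = np.array([run_simulation(k) for k in X])

model = RandomForestClassifier(n_estimators=50).fit(X, y)

# Propose the next regression: sample many candidates and keep those the
# model predicts are most likely to add coverage, within the run budget.
candidates = rng.random((5000, 4))
scores = model.predict_proba(candidates)[:, 1]
budget = 100                           # user-defined resource limit
next_runs = candidates[np.argsort(scores)[-budget:]]
```

The user stays in the loop exactly as described above: they pick the training regression, the random variables, and the coverage subset; the model only decides which runs are worth the compute.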

Digital implementation has arguably made the most significant steps toward autonomy recently. ML-enhanced digital implementation achieves better power, performance, and area (PPA), improves “full flow” productivity, and automates floorplan optimization. Reinforcement learning interacts with synthesis, implementation, and signoff to automatically improve PPA by applying a user-defined amount of computing resources. We have seen cases where ML-enabled digital implementation converged on an improved flow for a CPU for mobile applications in 5nm technology within ten days using 30 parallel computing jobs. It improved performance versus a baseline manual result by 14%, leakage power by 7%, and density by 5%. This level of autonomy for EDA sits between “hands-off” and “eyes-off” compared to autonomy in automotive applications.
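As a rough illustration of the idea (not of any product’s actual algorithm), the epsilon-greedy bandit sketch below treats candidate flow settings as actions, the negated PPA cost of a completed run as the reward, and a user-defined number of runs as the compute budget. Production tools use considerably more sophisticated reinforcement learning; the setting names and the flow call are invented for illustration.

```python
# Simplified, hypothetical sketch of RL-driven flow optimization:
# spend a fixed budget of trial runs learning which flow settings
# yield the best PPA, balancing exploration and exploitation.
import random

FLOW_SETTINGS = [                      # hypothetical action space
    {"effort": "high", "target_util": 0.70},
    {"effort": "high", "target_util": 0.75},
    {"effort": "medium", "target_util": 0.80},
]

def run_flow(setting):
    """Placeholder for a synthesis -> implementation -> signoff run;
    returns a scalar PPA cost (lower is better)."""
    return random.gauss(1.0 - 0.1 * FLOW_SETTINGS.index(setting), 0.05)

budget = 30                            # user-defined number of runs
value = [0.0] * len(FLOW_SETTINGS)     # running mean reward per action
count = [0] * len(FLOW_SETTINGS)

for _ in range(budget):
    if random.random() < 0.2:          # explore a random setting
        a = random.randrange(len(FLOW_SETTINGS))
    else:                              # exploit the best estimate so far
        a = max(range(len(FLOW_SETTINGS)), key=lambda i: value[i])
    reward = -run_flow(FLOW_SETTINGS[a])   # lower cost = higher reward
    count[a] += 1
    value[a] += (reward - value[a]) / count[a]

best = FLOW_SETTINGS[max(range(len(FLOW_SETTINGS)), key=lambda i: value[i])]
```

In practice the trial runs would be the 30 parallel computing jobs mentioned above, and the reward would combine timing, leakage, and density rather than a single toy cost.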

Still, there is much more to come. Thinking back to that conversation in 2009, the question itself deserves scrutiny. Do the requirements fed into the black box define “just” the chip? A hardware/software system? Or the design’s intended function, entirely independent of any implementation choices?

Bottom line, for the individual parts of the flow from IP selection through verification, digital and custom implementation, and system design, both assistance and autonomy for design automation have come quite far. You can find some recent overviews at IntelliSys and the IEEE IoT Vertical and Topical Summit at RWW2022.

As an industry, we will refine the different levels of Autonomous Design Automation further over the years to come. Eventually, combining the different steps of the flow with AI/ML will unlock even further productivity improvements. How long will it be until designers define a function in a higher-level language like SysML, and tools autonomously implement it as a hardware/software system, based on the designer’s requirements, after AI/ML-controlled design-space exploration?

We are not quite there yet, but the path is becoming more apparent. Here’s to a bright AI/ML-enabled future!


