How AI Is Transforming System Design

LLMs and machine learning are automating expertise in an aging workforce.


Experts At The Table: ChatGPT and other LLMs have attracted most of the attention in recent years, but other forms of AI have long been incorporated into design workflows. The technology has become so common that many designers may not even realize it’s a part of the tools they use every day. But its adoption is spreading deeper into tools and methodologies. Semiconductor Engineering sat down to discuss AI’s expanding role in design with Syrus Ziai, vice-president of engineering at Eliyan; Alexander Petr, senior director at Keysight EDA; David Wiens, product marketing manager at Siemens EDA; and Geetha Rangarajan, director of product management for generative AI at Synopsys. What follows are excerpts of that discussion.


L-R: Eliyan’s Ziai, Keysight’s Petr, Siemens’ Wiens, Synopsys’ Rangarajan.

SE: Looking back over the past several years, where has AI made a significant impact on day-to-day chip design? 

Rangarajan: It started off more about predicting what would happen at the next step of a particular task. As an example, if I've run N versions of the design, can we predict for version N+1 what the delay is going to be without having to run through the steps? That was the early beginning of using supervised or unsupervised learning. We've seen a lot of that technology being incorporated now, with people not even knowing that ML is being used in the flows themselves. Over the last five or six years, with reinforcement learning becoming available or leverageable for chip design, we've seen a lot of these experiments. There are a lot of parameters to look at. You throw in everything, from what libraries to use all the way down to the switches we need to use in the tool flows. There are a lot of decisions being made, and that's where we've started seeing these agents take over flows, allowing exploration of the search space itself. We've seen this starting off in digital domains and expanding to other areas, as well. I've seen it now being explored in verification, to understand the right set of tests to run in order to hit the same coverage within a shorter period of time, rather than doing constrained-random where I'm trying to figure out what to run, or what the switches should be, or what cells should be included if I'm targeting a particular process node and I know I want to hit this PPA target. We have expanded it now to the point where we are even seeing it in analog design, which traditionally has involved very tedious human effort. As designers move through node migrations, the tools learn from prior nodes what needs to be changed. Any time there's a lot of exploration and tedious, heavy lifting of repetitive tasks, that's where I've seen AI play a big role in the last couple of years. With generative AI, there's a whole new world of democratization happening. Earlier it was about AI helping offload repetitive, tedious tasks. Now, with gen AI, we can see it truly acting as an assistant, helping flatten the learning curve and allowing people at different experience levels to ramp up faster.
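To make the "run N, predict N+1" idea concrete, here is a minimal sketch of the supervised-learning step Rangarajan describes: fit a regressor on the outcomes of completed runs and estimate timing for an untried configuration before launching the flow. The features, values, and model choice are illustrative assumptions, not any vendor's actual implementation.

```python
# Hypothetical sketch: predict post-route delay for a candidate run
# from features of prior runs, instead of executing the full flow.
from sklearn.ensemble import RandomForestRegressor

# Each row: [target_clock_ns, utilization_pct, effort_level, lib_variant]
# drawn from N completed runs of the same design (values are made up).
X_prior = [
    [1.0, 60, 1, 0],
    [0.9, 65, 2, 0],
    [0.8, 70, 2, 1],
    [0.8, 75, 3, 1],
]
y_delay_ps = [820, 790, 770, 755]  # measured critical-path delay per run

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X_prior, y_delay_ps)

# Run N+1: estimate delay for an untried switch combination first,
# and only launch the real flow if the prediction looks promising.
candidate = [[0.75, 72, 3, 1]]
print(f"predicted delay: {model.predict(candidate)[0]:.0f} ps")
```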

Wiens: We've got tools all the way from IC through package to board, and into mechanical and other domains. The easiest and most accessible AI for everybody is via the chatbots that are in every product. On-demand access to anything help-related is an easy thing for companies to add, and that's the most visible thing people will see. With some AI you don't even know it's in there, but with chatbots you know it's there, and from experience working with products like ChatGPT you know what it can do for you. You also know from ChatGPT that there's not much reliability, so you can't depend on it. Verifiability of whatever model you use is critical to the process. If it's just a chatbot-type thing, that's pretty easy. You base it on your documentation. It works, but from one release to the next you'd better be using the latest documentation, or something might change. The other area is predictive AI applied to simulation, which most recently we've seen applied to signal integrity, but to other domains as well. It gives you the ability to explore a design space where you wouldn't even think to simulate all possible permutations of a problem to find the optimal solution, because the compute power doesn't exist; it would literally take hundreds of years. By applying AI, you can give it a budget, you can give it parameters, and you can let it figure out what the optimal solution is. In the cases where we've deployed AI, it still lives in that predictive space. The next step, the generative piece, would be to apply that to a design. We're still seeing the desire to have the human in the loop, to be able to look at the result and verify it. That points to trust and verifiability. AI today is an awesome thing. People want to play with it, but in the end they still want to be able to verify its accuracy.
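A minimal sketch of the budgeted exploration Wiens describes: instead of simulating every permutation, spend a fixed evaluation budget and keep the best configuration found. The cost function below is a made-up stand-in for a real signal-integrity simulation, and the parameter ranges are hypothetical.

```python
# Hypothetical sketch of budgeted design-space exploration: rather than
# simulating every permutation, spend a fixed evaluation budget and keep
# the best configuration found so far.
import random

def si_cost(trace_width_um, spacing_um, via_count):
    # Toy stand-in for a signal-integrity metric (lower is better).
    return (abs(trace_width_um - 100)
            + abs(spacing_um - 3 * trace_width_um) / 10
            + via_count * 2)

BUDGET = 200  # number of "simulations" we can afford
best_cfg, best_cost = None, float("inf")
rng = random.Random(0)

for _ in range(BUDGET):
    cfg = (rng.uniform(50, 150), rng.uniform(100, 500), rng.randint(0, 8))
    cost = si_cost(*cfg)
    if cost < best_cost:
        best_cfg, best_cost = cfg, cost

print(f"best config within budget: {best_cfg}, cost={best_cost:.1f}")
```

Real deployments replace the random sampler with a learned surrogate model that proposes the next point to simulate, but the budget-then-pick structure is the same.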

Rangarajan: Tools as checkers in the loop become very key, particularly in the domain of chip design, where there are many billions of dollars on the line. When you talk about technologies, having the AI technology as part of the tools, and having the tools validate the data, has been a key focus of what we're doing.

Petr: An important point being made here is design-space exploration. You don't try to predict anything outside of the known space. When we talk about exploring or predicting, it is within a space that previously was just too big to explore, but that we now have the capability to cover. Some people, when they hear generative AI, or when they talk about prediction, assume that we go outside of a known design domain, or outside of the known design space. That's not what we're talking about. We're saying we now have the capability to do more with the same resources, and as such we can make better decisions. We can find the absolute minimum. That's very key to distinguish here, because sometimes you read claims that LLMs can write Shakespeare. No, they just parrot what they've read. We're not talking about that kind of generative AI when we talk about making logical decisions.

Ziai: On the design side, we're the consumers of a lot of these CAD tools and methodologies. We started using ChatGPT a few months after it came out, for very small things. Whether you're a digital, mixed-signal, or analog designer, there are multiple phases in how AI will bring benefits to us, and we're in the very early, formative stages. The CAD tool companies have been doing the groundwork for several years, and some of the new techniques will get incorporated in a more meaningful way into the CAD tools in the not-too-distant future. That, from a methodology point of view, will carry over to the end users. In the short term, I see a few different stratifications. One is sign-off versus design exploration and optimization. For sign-off, we're not going to have AI doing meaningful work, except to help, assist, and accelerate, for a long time. Let's say you're doing functional verification for an ASIC. There are two things you'd like to do. Number one, you want to make sure that you have zero bugs, ideally at your first tapeout. The second is that you want to compress the schedule, and the way you do that is to have higher-coverage tests earlier in time, as opposed to saying, 'Okay, I need to run 10,000 tests. I'm going to start with number one, and when I'm done, the RTL people are going to fix that.' Ideally, you'd like to find more of the bugs earlier in time. AI can accelerate that, but it won't be the signoff and cross-check. For some number of years that's not going to happen. The second stratification is design exploration, coming up with different ways to do the same circuit, whether it's digital, analog, mixed-signal, or what have you. I expect there are going to be methods to help engineers think about that.
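A minimal sketch of the reordering Ziai is after: schedule regression tests greedily so coverage accumulates as early as possible, rather than running them in their original order. The test names and coverage maps below are invented for illustration.

```python
# Hypothetical sketch: greedily order regression tests so that each
# step adds the most not-yet-covered coverage points, pulling coverage
# (and therefore bug discovery) earlier in the schedule.
tests = {
    "smoke":      {"fetch", "decode"},
    "rand_alu":   {"decode", "alu", "bypass"},
    "rand_mem":   {"lsu", "alu", "fetch"},
    "corner_irq": {"irq", "decode"},
}

covered, order = set(), []
while len(order) < len(tests):
    # Pick the unscheduled test that adds the most new coverage.
    name = max((t for t in tests if t not in order),
               key=lambda t: len(tests[t] - covered))
    order.append(name)
    covered |= tests[name]
    print(f"run {name}: total coverage {len(covered)} points")
```

In practice the coverage map for an unrun test isn't known exactly, which is where a learned model comes in: it predicts which tests are likely to hit which coverage points, and the scheduler works from those predictions.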

Rangarajan: I agree. A lot of the technologies we've been building lend themselves to this agentic workflow. You want the AI system to take a natural language-based question. We have all these chat assistants today that have been trained on all this information. Then, under the hood, the system decides, 'To do this, I'm going to call the exploratory engine, which may have a model of what needs to be looked at in a quick way.' Today, we have all the building blocks. We have the search-space exploration, though it's mainly used by experts who don't have the hours to look at all these things. They do know which parameters need to be explored, so they let the system explore and come to a conclusion, rather than running 100 experiments. Instead, the system decides, 'I'll bring it down to five tests, and here's the minimum that you need.' You bring that together with the chat assistant, which combines a couple of different machine learning technologies with more domain-specific applications, and we'll get there. That's essentially getting from where we are today to an eventual agentic workflow, where the chat assistant can be the interface to answering, or at least providing guidance.
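A minimal sketch of that dispatch step: a thin agent takes a natural-language request and routes it to a backend engine. The keyword routing below stands in for a real LLM-based planner, and every tool name is hypothetical.

```python
# Hypothetical sketch of an agentic front end: route a natural-language
# request to a backend engine. A production system would use an LLM to
# choose the tool; simple keyword matching is used here to show the shape.
def explore_search_space(request):
    return "exploration engine: pruned 100 candidate runs down to 5"

def answer_from_docs(request):
    return "doc assistant: see the timing-closure methodology guide"

TOOLS = {
    "explore": explore_search_space,
    "how do i": answer_from_docs,
}

def dispatch(request):
    text = request.lower()
    for keyword, tool in TOOLS.items():
        if keyword in text:
            return tool(request)
    return "no matching tool; falling back to the chat assistant"

print(dispatch("Explore the PPA space for this block"))
print(dispatch("How do I set up multi-corner analysis?"))
```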

Wiens: When you think about agents, you think about experts. You're applying experts to the problem and, to make it worse, it's a multi-discipline problem. For example, you're looking at your signal integrity, your power integrity, and potentially your thermal impact. Then pile on manufacturability, too, and try to make the tradeoffs among those. It's one thing to say you've found the optimal solution for signal integrity, but does it meet the others? Being able to bring in those experts, in essence, and consider all of them simultaneously is the secret sauce. When you can optimize across multiple disciplines, then you've found the Holy Grail.
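One simple way to frame that cross-discipline tradeoff is Pareto filtering: keep only the candidates that no other candidate beats on every metric at once. The sketch below assumes three lower-is-better metrics with made-up values.

```python
# Hypothetical sketch of a multi-discipline tradeoff: keep only
# Pareto-optimal candidates across signal integrity, power integrity,
# and thermal metrics (all lower-is-better, values made up).
candidates = {
    "layout_a": (1.2, 0.8, 71.0),   # (si_loss_db, pi_ripple_mv, temp_c)
    "layout_b": (0.9, 1.1, 68.0),
    "layout_c": (1.5, 1.4, 75.0),   # dominated by layout_a
    "layout_d": (1.0, 0.7, 73.0),
}

def dominates(a, b):
    # a dominates b if it is no worse on every metric and differs on one.
    return all(x <= y for x, y in zip(a, b)) and a != b

pareto = [name for name, m in candidates.items()
          if not any(dominates(other, m) for other in candidates.values())]
print("Pareto-optimal layouts:", pareto)  # layout_c drops out
```

The point of the Pareto frontier is exactly what Wiens describes: no single discipline's optimum is declared the winner; instead the human (or a higher-level agent) picks among candidates that are defensible across all of them.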

Petr: If you look at those multi-domain problems today, and you cannot link them or automate them, if you cannot run them together, there's no agent that will be able to explore the space and make a decision for you. One of the key challenges we have right now, especially if you look at the designers, is that they're very segregated. You have an SI designer, you have a PI designer, you have a thermal designer. You have three people each trying to optimize the problem. Those domains need to come together to build something that can drive all of them at the same time. So we're looking at multi-dimensional problems that we haven't explored before, because humans have a certain capacity, and they're the domain experts. You must now mix multiple domains together and get to the point where you can automate something that has not been automated before. This is why currently we see a lot of automation in the digital space. The digital space is miles ahead when it comes to automation. It has very rule-driven designs. Everything is very restricted. You can only change two things before something breaks. If you expand beyond that to the analog/mixed-signal or RF domain, it's still considered an art. You see people who draw layouts. Designers with 20 or 30 years of experience don't even look at some of the stuff. They just know if it looks this way, it's good, and only then do they run simulations. Now we have to turn the whole thing upside down. We need to start coming up with restrictions and rules and, at the same time, think about why we're doing this. One of the big motivators is our aging design workforce. A lot of those experts are going to disappear soon, so we'd better find a way to archive that knowledge and automate the heck out of it before we put AI on top of it.

Rangarajan: Absolutely. That's where the flattening of the learning curve comes in. We have a workforce gap. It's a big problem in the analog domain, but overall, how many graduates are coming in and taking on this domain at all? It's more complex. We are seeing customers saying, 'How am I supposed to bring in someone who probably didn't study this in school? I need to ramp them up quickly. I have a workforce that wants to retire, burdened with constantly ramping up new people.' That's where this democratization with gen AI is going to help.

Petr: If you follow this thought process, right now we're talking about automation. Automation is different from the old approach, where you had GUI-driven tools. Now you're talking partly about headless tools, which are driven from code. The skills you're looking for now are also largely different. They're not necessarily EE experts anymore, but they are EECS people. They may know more about coding than they know about RF design, so the tools will also massively change. If you're talking about gen AI, it's a prompt engineer you're looking for. They don't necessarily need to know how things will be designed, but they need to know how to instruct an agent to do what they want it to do, and then they will just play around. We're looking at a world where people are not experts in how to implement this stuff, but they can focus more on what they're trying to solve. A designer has a specification they need to meet at the end of the day. They don't care how they meet it. It needs to be producible in the fab. It needs to be reliable. Hopefully, they get it right on the first run. They just want the design to work, but they don't necessarily care what was done in the design to get it to work.
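A minimal sketch of what "headless, driven from code" can look like in practice: a parameter sweep expressed as data, with each point handed to a command-line solver. The "fieldsolver" binary and its flags are invented placeholders, not any real tool's interface.

```python
# Hypothetical sketch of a headless, code-driven flow: the sweep is
# plain data, and each point becomes a batch invocation of a solver.
# "fieldsolver" and its flags are made-up placeholders.
import subprocess

sweep = [{"width_um": w, "spacing_um": s}
         for w in (75, 100, 125) for s in (200, 300)]

for params in sweep:
    cmd = ["fieldsolver", "--batch",
           f"--width={params['width_um']}",
           f"--spacing={params['spacing_um']}",
           "--out=results.csv"]
    print("would run:", " ".join(cmd))
    # subprocess.run(cmd, check=True)  # enabled against a real tool
```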


