ChatGPT isn’t coming for your job, but that doesn’t mean there’s nothing to be concerned about.
Experts At The Table: LLMs and other generative AI programs are a long way away from being able to design entire chips on their own from scratch, but the emergence of the tech has still raised some genuine concerns. Semiconductor Engineering sat down with a panel of experts, which included Rod Metcalfe, product management group director at Cadence; Syrus Ziai, vice-president of engineering at Eliyan; Alexander Petr, a senior director at Keysight EDA; David Wiens, product marketing manager at Siemens; and Geetha Rangarajan, director of product management for generative AI at Synopsys to talk about what keeps them up at night when it comes to AI. What follows are excerpts of that discussion. Part one of this discussion can be found here. Part two can be found here.
L-R: Cadence’s Metcalfe, Eliyan’s Ziai, Keysight’s Petr, Siemens’ Wiens, Synopsys’ Rangarajan.
SE: As generative AI technology matures, how will it be incorporated into the workflow?
Metcalfe: Chip design has been through similar technology revolutions in the past, and each time engineering jobs have been predicted to decline. The exact opposite has happened. When chip design migrated from schematic capture to RTL synthesis, suddenly, a single engineer could compile tens of thousands of gates an hour, compared to hundreds of gates an hour using schematic capture. Rather than needing fewer chip designers, we actually needed more because chips got bigger, driven by RTL productivity improvements. It will be the same for AI. AI will enable engineers to do more. Maybe rather than one engineer implementing a few blocks, they will design a whole subsystem of blocks, using AI to implement the blocks in parallel, so larger chips can be created more quickly. The way engineers work will change. Rather than editing tool scripts, they will interact with AI systems. But engineers will still be required.
Wiens: It gets back to the fact that there aren’t enough hardware designers out there. AI may come for aspects of their job — generally, aspects they probably don’t find interesting to begin with. Those are the easier ones to solve. It’s been a long time coming, and now we can see it’s out there. AI’s been around for decades, and the last five years have seen a monumental shift in value. It’s hard to say with certainty that jobs won’t be shifted, but to what degree we really don’t know at this point.
Petr: I would go back to the fact that they will be able to do more. It’s that simple. If you also look at growth, growth requires more output at the same time, and that’s what we’re trying to do. We’re trying to increase productivity. I don’t think AI will replace designers. They will just be able to do hundreds of designs at the same time instead of just one.
Rangarajan: I will go back to what Jensen Huang said when he was asked this question recently. He said AI isn’t going to replace humans; a person who doesn’t know how to use AI will be replaced by somebody who does. There’s a lot more in terms of what the technology needs to do, particularly for chip design itself, but it’s going to assist folks who have been doing a lot of the tedious tasks. They’ll get to do more. We have already seen AI enabling people to do four or five additional blocks. We had a customer who came in and said, ‘I put in three experts to boost my PPA on this design when it was migrating, and it took them a month, but they couldn’t get there. We used an AI agent and we were able to do it in a week, and it actually found that spot we had missed.’ That’s where the productivity is now. The experts could have been working on something else, rather than churning these experiments. In the short term, leveraging this technology is going to help us focus on the more crucial, innovative tasks that humans need to do. LLMs and all of this technology can only do what they’ve been taught. The creative aspect of looking beyond that is still on us. It’s a long way before we can say that AI agents can completely design a chip given a spec.
Petr: Even if we approach that stage, someone still needs to think of the application for the specification. What is it you’re going to design here? Someone needs to specify it and drive the system and then basically check if the result is actually what they expected it to be.
Rangarajan: You’re right. We already do this today. A lot of it is going to come down to doing the right prompt engineering and capturing these specs correctly. I know we had customers who said, ‘I want to be able to make sure that as you go through the flow, what I’ve defined in my spec is being followed.’ How do we ensure that information is pulled, and that there is a check and balance that all of these agents are doing that? It’s going to be very interesting, because we will all have an opportunity to learn a lot more of the end-to-end flow, rather than being very focused on just our own jobs and tasks.
Wiens: Looking back in history, at least on the system side, the evolution from what people were doing 30 years ago is dramatic. If we were to sit there 30 years ago asking the question about whether this kind of automation would replace people, we’d be talking about the same thing. But clearly the jobs have evolved. They’ve gotten faster. They’re doing more. We caution our customers to think about automation by evaluating it over time. Does it help you? How do you verify it? This is the same. If you look at it on an evolutionary path, the mysticism of it is dissolved and it just becomes technology that can be leveraged.
Ziai: The jobs will be changing, and the people who either can’t or don’t want to change with the new offerings provided by this AI technology may be at risk of not having a job. Certain jobs will go away, new jobs and tasks will be in front of them, and people need to be comfortable using these new methods and techniques. The majority of engineers in our industry are comfortable with that. If anything, electrical engineers or general electronics engineers often try to put themselves out of their own jobs. In other words, if we do something once or twice, we’re like, ‘Oh, this is boring. How can I not do this again?’ Either we script it, or find a compiler for it, or what have you. The entirety of our industry is very comfortable with that, but there might be some side effects to that, as well.
Petr: It’s just history repeating, right? We like robotics. Automation is the biggest driver of humanity.
SE: Is there anything else about the emergence of AI that keeps you up at night?
Wiens: With generative chatbots, my concern is, what are the sources? How did it come up with the answer? In school, kids are not allowed to just produce the result. They have to show their work. What this ultimately gets to, just like in automation, is that it’s a black box. What does it produce as a result, and how can I check it? As long as I can check it, then I’m okay. The scary part comes when somebody says it generated the design, and now it’s going to send it to manufacturing. Who checked it? How did they check it?
Petr: I would argue that in our industry we’re best suited to address this problem, because part of our whole design for verification has always included calibration. Verification calibration is also done in the real world. Measurement people run tests to make sure that what they have simulated is actually true. None of this will go away. We will just make this more efficient. But the final sign-off is still a tool which basically enforces that. Even if hallucination happens somewhere in the process, which I believe we’re most scared about, we have checks and measures in place that are well established throughout the decades of semiconductors. That really puts us in a unique position to use this technology and drive more and more automation in certain areas.
Rangarajan: As an industry, we should have the right set of expectations in the short term for what this is going to do. My fear is there’s going to be a very high expectation of what generative AI technology is going to give me in the short term. If customers say they have a spec written in this language, can I now translate it to the scripts that every tool in the flow needs? Eventually, if my engineer who has only a couple of years of experience can push a button to get there, you need to ask yourself whether this is actually your spec. As long as we know that we need a more conscientious approach for deciding which small tasks we can reliably have the large language model assist us on, and we have enough checks in place, then we can avoid the situation where someone goes to ChatGPT, gets an answer, and doesn’t know where it’s coming from. If they go to the provider of the tool instead, they do know where it comes from. We need to start putting together the data that will help us put those building blocks in place, while understanding that it’s still an assistive technology.
Petr: The key idea here is that it’s still going to be a copilot. It’s not going to drive the decisions right now.
Ziai: What little exposure I’ve had to AI in the past few years has been on the data side. There certainly are a lot of cases where the more data you have, and the better we get with the training, the hardware, and the inference machinery, the better all of this will be incrementally. There are certainly concerns about wrong answers, fake news, hallucinations, whatever you want to call it. Of course, humans make those mistakes too, sometimes due to incompetence, sometimes due to having agendas, so the machines will make those mistakes. The thing I don’t know is, maybe 100 years from now, maybe 20 years from now, would AI have come up with Maxwell’s equations, Gauss’ law, Faraday’s law, the Einstein relationship? If you gave 200 years of chemistry to chat machines, would they have come up with what Mendeleev came up with in the periodic table of elements? There’s something humans still seem to do, which I’m not saying will never be in the grasp of AI, but in the next 10 or 20 years it’s not going to be. We need to separate what it is that brings meaningful and fundamental structure from what could maybe come in the very far future. In the short term, let’s say I’ve just downloaded millions of dollars of CAD tools from some company, and there are millions of pages of documentation. Yes, I need a chatbot or something like it. I don’t know which documents to go to in order to answer questions about how to do more modeling when the mesh has to be like this or like that. We need to think about that spectrum of what humans do versus looking up a bunch of documents. Maybe I get 90% of my answers that way, and at some point I have to call the expert and ask which document I really need to look up to get the answer to this very important and critical question.
Petr: That’s getting toward the question of how creativity works. If you look at what we do with neural networks, we basically connect neurons. The big question is, how does the human brain work? There are neuroscientists trying to explore this now by trying to mimic this capability. They’re taking smaller brains from flies or mice and trying to duplicate how their sensing and decision-making work. Our current neural networks don’t reason. They don’t have this creativity. There’s a very simple experiment you can do. Take two documents, feed them into ChatGPT, and ask it to come up with a solution that is a crossover between those two things. It can’t do it. It can read the documents, and it can kind of parrot what it read, but it cannot connect the details and the logic. Context is still one of the biggest questions. How does our brain work? You look at different human beings, their environment, whatever they learned, their thought process. Do they use the left brain more than the right brain? It’s getting really philosophical at this point. Maybe we will have even bigger neural networks that can mimic creativity at some point. Who knows?
Rangarajan: To get to the point where generative AI can incorporate two different concepts, we probably need more domain-specific models that experts would create, saying these are the most important parameters you need to be looking at. And maybe that, along with these chat agents, can help provide a check and balance of the output and deciding what are the options. But coming up with what we need to change in order to get to those particular metrics, it’s not there today. We need to build that at the end. That’s where the experts are going.