AI Won’t Replace Subject Matter Experts

But it could help with mundane tasks, freeing up designers to focus on more intricate problems.

Experts at The Table: The emergence of LLMs and other forms of AI has sent ripples through a number of industries, raising fears that many jobs could be on the chopping block, to be replaced by automation. Whether that’s the case in semiconductors, where machine learning has become an integral part of the design process, remains to be seen. Semiconductor Engineering sat down with a panel of experts, which included Rod Metcalfe, product management group director at Cadence; Syrus Ziai, vice president of engineering at Eliyan; Alexander Petr, a senior director at Keysight; David Wiens, product marketing manager at Siemens EDA; and Geetha Rangarajan, director of product management for generative AI at Synopsys, to talk about how AI will affect day-to-day workflows. What follows are excerpts of that discussion. Part one of this discussion can be found here.


L-R: Cadence’s Metcalfe, Eliyan’s Ziai, Keysight’s Petr, Siemens’ Wiens, Synopsys’ Rangarajan.

SE: Is AI enabling engineers to become more generalist, rather than focusing on individual niches? And will we still need subject matter experts?

Metcalfe: I have seen AI help junior engineers behave like more experienced designers. As we all know, it takes many years of designer experience to get the best out of design data and EDA tools. Over time, senior engineers learn what works, and what does not, and then impart that experience to more junior engineers through design reviews and other communication. Now, when a junior engineer starts using AI systems to help with chip implementation optimization, the AI system automatically adds some of that senior engineering knowledge to the flow. This enables junior engineers to achieve better results more quickly and gain experience so the senior engineers can focus on bigger tasks. AI is really helping get the best out of all engineering experience levels. We still need subject matter experts, but they should be working on the more complex design tasks.

Wiens: The domains have been shifting consistently. AI may push it a little bit harder, but at least on the systems side, domains have been shifting for a long time, and I would assume in the IC world, as well, where somebody who was responsible for layout now has to understand manufacturability much better. They have to understand signal integrity. What does that enable? There could be a shift. It could solve some of the problems through automation, by removing mundane tasks. The question is, ‘Does AI ultimately replace any of those people?’ We’re talking about shifting roles and augmenting roles, but does it ultimately replace people? There aren’t enough people today to solve the problem. Engineers who have seats should not feel terribly threatened. They should be getting paid better and better, because there are fewer and fewer of them. And so it is more about augmenting them, as opposed to replacing them.

Petr: We hear this from customers, and it’s a very simple problem. ‘How can I do X with the same amount of people in less time and with higher quality?’ It’s just a massive amount of productivity gain, which will be injected into this whole market.

Rangarajan: The big challenge is how to get the younger talent to work a little more independently while the experts focus on capturing these requirements. If the specs are defined right, then the person who’s talking to these virtual assistants can have some guidance on what they need to ask. Today, all of that knowledge about what they need to ask is in the heads of multiple engineers. In the world of system design, it’s blurring. It’s happening in digital design and analog design, as well. People cannot just focus on silos anymore. If you’re doing an implementation of a block, you need to know the sign-off requirement. I don’t think there are specs today that are capturing all of that. You have to be able to ask these questions. At some level, you need to start thinking, ‘Can I free my experts who are looking at end-to-end chip design to capture all the tradeoffs, and then provide that as guidance?’

Petr: The solutions we are designing are becoming more and more complex. With every layer of complexity, we are basically adding new requirements, new tools. We’re looking into massive amounts of integration. Nowadays, we’re not just talking about electrical engineering. It’s mechanical engineering, as well. There are questions such as, ‘Can you even put this in a chassis? What happens if you drop things?’ Performance becomes a significant question, depending on what the end product is. If you shoot a satellite into space, this thing is going to sit there for 30 years. It’s unlikely that you’re going to touch this ever again, and we’re trying to squeeze more and more performance out of the technologies we have. In the past, we built massive amounts of margin into it, which was why you could have that segregation between the domain knowledge experts. Now, you’re trying to optimize the last piece out of it, meaning you can’t just give a certain engineer a certain budget. You have to optimize the budget across the whole solution.

Ziai: In terms of multi-disciplines, clearly electronic design is becoming more complex, and I’m using that as a very generic term. I want to go to some first principles. I started working in 1988, and in the early ’90s we did on-chip inductance extraction. There were thermal density issues of around half a watt per millimeter squared. That was the limit of what was practical from a system point of view to dissipate heat. We had to dumb down our circuits, to actually slow them down to reduce the thermal density. We had multi-voltage domains. We started doing C for chip designs in the 1993 to 1994 timeframe. We had significant improvements in power integrity. But at the same time, we had challenges because we were running things faster. In my generation, we used to do everything from RTL to layout. There is a trend going forward where there are going to be more electronics engineers, but also more complexity. So there’s going to be a stratification where you get generalists and you get specialists, and this is where AI can help. It’s just like the advent of the internet. We all had data sheets on our shelves, and when the internet came, all those data sheets were put online. I didn’t have to carry those on my shelf anymore, and it increased the speed at which I could look things up. The same is going to happen going forward. AI will accelerate that. We’re going to have people who are very broad and quite deep, and we’re going to have people who are highly specialized in specific areas. AI will help both of those buckets of people.

Wiens: I actually see both of those combined. You can envision somebody who’s a specialist in one domain but a generalist in another. Say they’re a specialist in layout, but not in signal integrity. Yet they have to run signal integrity analysis, or they have to run manufacturability analysis. In those domains, they’re acting more as a generalist. They don’t really have as much information in front of them. Simplifying their experience becomes a huge deal. For the expert, I’ve been surprised before by people who say our tools are easy to use out of the box. Wow, you think that’s easy to use? Well, these guys have been using it for a decade. They do think it’s easy to use, because they know where to push all the buttons. But you get somebody who’s just accessing the tool maybe once a month. They’re a generalist in that area and domain. They need all the assistance they can get to accelerate their process.

Rangarajan: It’s not just that it’s going to help the people who are getting started in a domain. Take an expert who’s been wanting to shift left into another domain. This AI technology becomes a huge enabler for them. We actually heard this from customers who said, ‘Oh, I have always been doing sign-off and have been wanting to go into implementation, and I’ve been a little concerned that it’s too complex. Now, having AI assistance to help me navigate the world of this technology, and help me understand what I need to look for, is a huge boost.’ It also helps people in the hardware domain. People get fragmented into being specialists in that domain, because it’s very complex. They do the same thing over and over again. The advent of AI is going to allow people to still move in the direction of being a specialist, but they can now offload to someone who’s a specialist in the other domain and become their buddy. We’re seeing that happen already.

SE: LLMs and generative AI get most of the attention. But what about machine learning? It’s being incorporated into a lot of tools. How do you see that evolving?

Metcalfe: LLMs are starting to help now in terms of EDA support. I have seen examples of LLM-enabled EDA support systems helping engineers quickly understand tool behavior. Rather than having to read pages and pages of user guides to figure out how to get the tool to do what you want, the LLM will compile a simple-to-understand summary of all the EDA documentation related to the question. This saves engineering time right now and is already useful. I am sure LLMs will play an important part in future EDA flows, but they are already helping engineers now.
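
At its simplest, an assistant like the one Metcalfe describes retrieves the documentation passages most relevant to a question and asks an LLM to summarize them. Below is a minimal, hypothetical sketch of that retrieve-then-summarize pattern; the documentation snippets, the question, and the ask_llm() placeholder are all invented for illustration and do not reflect any particular vendor’s system.

```python
# Hypothetical retrieve-then-summarize sketch for an EDA documentation assistant.
# All snippets and names are illustrative, not any tool's real support flow.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Pretend these are chunks extracted from tool user guides.
doc_chunks = [
    "set_max_delay constrains the longest allowed delay between two points in the design.",
    "report_timing prints the most critical paths after optimization completes.",
    "Placement blockages keep standard cells out of reserved regions of the floorplan.",
]

question = "How do I limit the delay between two registers?"

# Rank documentation chunks by lexical similarity to the question.
vectorizer = TfidfVectorizer().fit(doc_chunks + [question])
scores = cosine_similarity(
    vectorizer.transform([question]), vectorizer.transform(doc_chunks)
)[0]
top_chunks = [doc_chunks[i] for i in scores.argsort()[::-1][:2]]

# The retrieved chunks become context for a summarization prompt.
prompt = (
    "Answer the question using only these documentation excerpts.\n"
    f"Question: {question}\n"
    "Excerpts:\n- " + "\n- ".join(top_chunks)
)
print(prompt)  # a real assistant would send this to its LLM backend, e.g. answer = ask_llm(prompt)
```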

Rangarajan: You still need those technologies. In the past they were talked about a lot; now they’re going to be baked in. We could even use some of the data extracted from the tools to help you leverage LLMs, but that technology still needs to continue to build the core competency of the tool. For example, if you want an AI assistant to help you understand whether something can be done in five minutes, I don’t think it’s the LLM that’s going to help you do that. You need some predictive capability in the tool that can quickly say, ‘Based on my prior runs and prior data for this process node and this design, if you do this, you should be able to get this delay down from x nanoseconds to 50% of that.’ You need those models. You need that information in the tools to be able to get faster.
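
That predictive capability is, at its core, a model fit on data from prior runs. Here is a minimal sketch, assuming a handful of invented features and delay numbers purely to show the shape of the idea; it is not any tool’s actual model.

```python
# Toy delay predictor trained on prior-run data. Every feature and number here is
# invented for illustration; a real flow would use far richer run telemetry.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Hypothetical prior runs: [utilization %, clock target (ns), optimization effort 0-2]
prior_runs = np.array([
    [60, 1.0, 0],
    [70, 0.9, 1],
    [75, 0.8, 1],
    [80, 0.8, 2],
    [85, 0.7, 2],
])
# Worst path delay observed in each of those runs (ns).
observed_delay = np.array([1.10, 1.02, 0.95, 0.90, 0.88])

model = GradientBoostingRegressor(random_state=0).fit(prior_runs, observed_delay)

# Before kicking off a multi-hour run, estimate what a proposed configuration would achieve.
proposed = np.array([[78, 0.8, 2]])
print(f"Predicted worst delay: {model.predict(proposed)[0]:.2f} ns")
```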

Wiens: With the LLMs, the ability to communicate via something other than a keyboard and mouse comes into play. The large language models delivered directly from Amazon or Microsoft, where they don’t necessarily need to be trained much, are providing a different communication mechanism that’s faster than typing or moving your mouse. That’s an infusion kind of thing. You can also look at machine learning as something that’s baked in everywhere. The more valuable it becomes, the more it needs to be trained to achieve that value. One of the things that we haven’t talked about is how you train those models. We almost see the opposite of democratization in the sense that, for a very complex problem, you want to train it on a whole bunch of prior design data. The only people who have that design data historically are large global enterprises, not the small startup. You’re out of luck unless you’re leveraging models that are pulled from open source, and good luck with the quality on that. So there is a stratification that may happen as these models get more and more complex.

Petr: That’s an interesting point. If you look at where the governments and the industry want to move, there’s a push toward open source. I don’t think there’s currently a way forward to create an open-source database, because there are companies that make billions of dollars with IP. I have heard requests from people asking, ‘Why can’t you train on all this IP?’ Once I have an LLM that is trained on that IP, it’s not IP anymore. You could literally ask the system, ‘How do I do this?’ and it’s going to do it for you. Our copyright, our proprietary information, goes out the window. What’s the incentive for a company to actually offer this IP to train an LLM that the competition gets access to? That doesn’t make sense. Startups and universities keep asking for this kind of solution, and frankly speaking, I don’t think the companies that really make money with it will be willing to entertain that idea.

Ziai: This is a topic for the core EDA technology companies to answer better, as you have all these optimization methods, from convex optimization to genetic algorithms. I’ve been around them a lot, and this is a very difficult question to answer. There is a sense of unease that I have, in general, for the state of our industry, which is that as we explore these various methods, the reliability, authoritativeness, and correctness of the answers provided by these various tools is unknown for us, as an industry, in the short term. Now, over time, we’ll develop a better sense, and we’ll say, ‘Okay, certain problems need absolutely closed-form solutions. With certain problems, you don’t need closed-form solutions, as long as whoever does it better gets a better product.’ The better the answer, the better the final PPA correctness of the product. This is something that the core CAD companies are better equipped to answer in the short term.
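
For readers unfamiliar with the methods Ziai names, a genetic algorithm offers no closed-form guarantee; it simply keeps the better candidates and perturbs them, and whoever does it better gets a better answer. Below is a toy sketch over an invented two-parameter cost function, unrelated to any real EDA optimizer.

```python
# Toy genetic algorithm over an invented two-parameter cost function.
# This only illustrates a heuristic with no correctness guarantee; it is not
# how any EDA tool's optimizer actually works.
import random

def cost(candidate):
    # Invented stand-in for a PPA-style metric: lower is better, minimum at (3.0, 1.5).
    drive, spacing = candidate
    return (drive - 3.0) ** 2 + (spacing - 1.5) ** 2

def crossover(a, b):
    # Child takes each gene from one of the two parents at random.
    return [random.choice(pair) for pair in zip(a, b)]

def mutate(candidate, scale=0.2):
    return [gene + random.gauss(0, scale) for gene in candidate]

random.seed(0)
population = [[random.uniform(0, 10), random.uniform(0, 5)] for _ in range(20)]

for _ in range(50):
    population.sort(key=cost)          # selection: rank by fitness
    parents = population[:10]          # keep the better half
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(10)]
    population = parents + children

best = min(population, key=cost)
print(f"Best candidate: {best}, cost = {cost(best):.4f}")
```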


