AI: Engineering Tool Or Threat To Jobs?

AI works better in some situations than in others; what happens when there isn’t enough good data?


Semiconductor Engineering sat down to talk about using AI for designing and testing complex chips with Michael Jackson, corporate vice president for R&D at Cadence; Joel Sumner, vice president of semiconductor and electronics engineering at National Instruments; Grace Yu, product and engineering manager at Meta; and David Pan, professor in the Department of Electrical and Computer Engineering at the University of Texas at Austin. What follows are excerpts of that conversation, which was held in front of a live audience at DesignCon.

SE: Will AI ever really replace engineers, or will it be a tool for them?

Jackson: AI is really more of a tool for engineers to use. In the design automation space we’ve been creating tools for many years. AI is part of the new breed. We are seeing a lot of research and a lot of development. There are new products coming out, and it’s a way for our users to get to better results faster. It won’t eliminate the workforce, but it will make them more productive.

Sumner: It’s inevitable that our jobs will change in some way. Nobody who’s been in this industry for any length of time is doing their job in exactly the same way they’ve done it in the past. AI is going to change the way we work. And it has to, because we’re ambitious. We all have some grand thing that we can’t accomplish today. AI is going to make our lives easier or faster by eliminating some things that are getting in the way. Everyone has either been asked for, or gone to their boss to request, more resources and help to get something done. But the reality of the world we live in is that we often cannot get them. That’s a limiter. AI will unlock that.

Pan: AI will be a positive. It’s very useful to optimize, predict, and maybe even generate. Some of the lower-level, mundane jobs that are not so creative could be replaced by AI. If you look at the four industry revolutions, from the first industrial revolution with coal and steam, the second with electricity, the third with computers, and now the fourth, in each of those some jobs or workers or engineers were replaced. But new kinds of engineers will be created, which will improve our productivity or make better designs.

Yu: We see this with PCBs. The amount of work you have to do in order to manually review millions of connections is huge. How do you make sure everything is connected properly? We see that with EDA tools, where they can improve the automation of the design and the validation checks. But AI can do better than just simple, repetitive tasks. You can train the models to capture the same pattern, and then they can do some of the intelligent work for you. But that depends upon the quality of your AI model. Then you can utilize AI more, and you can free up your engineers to do more creative work that AI cannot do.

SE: Can AI do debug or verification of a complex chip? When we hear AI, we tend to think of something out of a science fiction movie, but we’re nowhere near that.

Pan: It can do mundane tasks today. But with a lot of training data — and supervised, unsupervised, semi-supervised, active learning, transfer learning — it can start to do something pretty intelligent, as well. That’s not to say it will be as intelligent as a top engineer. But it can be pretty good for some applications, though not necessarily all.

Jackson: It absolutely can do more than mundane tasks. Products are being released, in the area of digital IC optimization, for example, that enable users to achieve a better result than they would without that technology. Maybe they save 10% more power, or they get to a result a month faster than they would without it. That’s not mundane. That’s real productivity.

Sumner: The promise we see is the ability to give an engineer a hint of where to look, especially on the debug side, because those things tend to be tricky. We’re already deploying technology like that. There’s a major semiconductor vendor we’re working with, where we were doing automation around a root cause to search for patterns of data coming in. That’s really well accepted, because there’s a human in the loop, and it’s accelerating what they can do. I look forward to the day we’re no longer sitting in a room staring at a plot and having to find a problem.

SE: There are a lot of custom designs being done today. How does that affect AI, given that there isn’t much data for those designs?

Yu: For some of the custom designs we’re doing now, we don’t have a pre-existing data set, and we are changing the model so it can utilize existing data. Maybe in a couple years, when we collect more data and the data is more accurate, we can present new technology and apply AI to that.

Jackson: There is a great set of training data that helps formulate the statistical models used as part of machine learning. The framework can be common, so that company A and company B will be able to tailor and customize a statistical model based on their local data.

Pan: I agree. We need application-specific AI for different applications and different customers, so you probably have to migrate one model to another application and slightly change it. Maybe there’s a common framework that can still be used, and to which we apply transfer learning or something like that.

SE: The processes in different leading-edge fabs are becoming very different. How do you develop AI tools that can work for each of them?

Sumner: We have two areas of research here. One is around separating the base from the variance — being able to train the model on the things that are common, and then allowing it to learn about the things that are different. One of the things we see in ML models is that you need a good amount of data. And it’s not just having a record of the data itself, but also having content that was tagged. Is it good or is it bad? And if it’s bad, where is it bad? That allows the algorithms to move forward. But when you have small data sets, and at least a little bit of commonality, you can tell it what’s the base and what’s common around it in order to allow it to do its job.
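The approach Sumner describes — train on what processes have in common, then adapt to what differs — is, in spirit, transfer learning. A minimal numerical sketch (all data, coefficients, and the yield-metric framing invented purely for illustration): instead of fitting a scarce fab-specific dataset from scratch, regularize the fit toward a base model learned from abundant shared data.

```python
import numpy as np

# Hypothetical toy problem: predict a yield-like metric from three
# process parameters. The base process has abundant data; a new fab
# variant has only two measurements.
w_common = np.array([1.0, -0.5, 0.3])             # shared behavior
w_fab = w_common + np.array([0.1, 0.0, -0.05])    # small fab-specific shift

# Abundant base-process data: enough to identify w_common exactly here.
X_base = np.eye(3)
y_base = X_base @ w_common
w_base = np.linalg.lstsq(X_base, y_base, rcond=None)[0]

# Only two measurements from the new fab: under-determined on its own.
X_fab = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0]])
y_fab = X_fab @ w_fab

# Fitting the small dataset alone: least squares returns the minimum-norm
# solution, which wrongly zeroes the third, unobserved coefficient.
w_alone = np.linalg.lstsq(X_fab, y_fab, rcond=None)[0]

# "Transfer": regularize toward the base model instead of toward zero,
# i.e. minimize ||X w - y||^2 + lam * ||w - w_base||^2 in closed form.
lam = 0.01
A = X_fab.T @ X_fab + lam * np.eye(3)
b = X_fab.T @ y_fab + lam * w_base
w_transfer = np.linalg.solve(A, b)

print("error fitting alone:   ", np.linalg.norm(w_alone - w_fab))
print("error with base prior: ", np.linalg.norm(w_transfer - w_fab))
```

The base model supplies the unobserved coefficient, so the transferred fit lands much closer to the true fab-specific behavior than fitting the two local points alone.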

SE: If some engineers are imperfect, and the algorithms they create are imperfect, does that mean AI will be imperfect? And if so, how do we get around that?

Jackson: AI is not necessarily perfect, but it does learn. A great example is the AlphaGo work that was done by DeepMind. It can play Go better than any human, so it’s much better than the developers who designed it. The trick was really to create something that learns, and to allow it to do that and achieve its full potential.

Pan: There is a related example with MAGICAL, which is sponsored by DARPA. It’s a fully automated analog IC layout system that considers all kinds of constraint generation, placement, and routing. The advantage of this is that without a human in the loop, we can generate tens of thousands of different kinds of layouts and then automatically do extraction and simulation. We can generate all kinds of weird layouts that still satisfy our design constraints. But you also can explore other solutions that human designers might be interested in, but which they’ve never been exposed to. So as in AlphaGo, where AI can do things better in fewer moves, you can do the same thing here. And while engineers are definitely imperfect, this process iteratively improves, allowing you to do things you never thought about.

Jackson: It was move 37 of Game Two, and the people who were watching said they didn’t understand it and thought it was a mistake. But it led to a crushing defeat.

SE: So part of this depends on how much data you have, both good and bad?

Sumner: Yes, and especially with people using AI for their own specific problem, the question is where does the data come from? Most people have a lot of data just on their own machines. But this is going to change the way we think about storing data. For you to do your job quickly, and for the algorithms to do their job quickly — and for you to experiment with what’s going to work specifically for you — that data has to be stored in a way that can be easily accessed, and it has to be tagged in the right way. A hard drive full of Excel files is going to make that challenging to the point that you may not think to do the experiment because it will take so long to get the data together. Having a platform where you can put the data in before you know you need it, and then have some tools that help you extract it quickly, is really the key to this experimentation and progress.

Yu: Some of the work we are doing now in the lab at Meta is pioneering work. There are a lot of challenges, not only from the design point of view, but from the industry as a whole in terms of how to utilize the Metaverse. It’s not only for gaming. We want to apply it to a lot of areas that are still unknown to us at this point, and there’s some uncertainty about the AI. There is a lot of research. We have tons of data to train the AI so it can recognize the underlying patterns amid the noise in the data. There is a lot more work to do and challenges ahead of us. We also can utilize existing AI models from other industries to determine which is the best approach, and expand that into new areas for developers.

Related Reading
AI Becoming More Prominent In Chip Design (part 2 of 3 of above discussion)
Experts at the Table: The good and bad of more data, and how AI can leverage that data to optimize designs and improve reliability.
How Chip Engineers Plan To Use AI (part 3 of 3 of above discussion)
Checks, balances, and unknowns for AI/ML in semiconductor design.


