Chip Design CEO Outlook

Challenges and opportunities involving heterogeneous integration, geopolitics, and AI.

Semiconductor Engineering sat down with Joseph Sawicki, executive vice president for IC EDA at Siemens Digital Industries Software; John Kibarian, president and CEO of PDF Solutions; John Lee, general manager and vice president of Ansys’ Semiconductor Business Unit; Niels Faché, vice president and general manager of PathWave Software Solutions at Keysight; Dean Drako, president and CEO of IC Manage; Simon Segars, former CEO of Arm and board director at Vodafone; and Prakash Narain, president and CEO of Real Intent. This is the first of three parts of that conversation, which was held in front of a live audience at the ESD Alliance annual meeting.

SE: What are the biggest problems facing the chip design industry, and where are the biggest opportunities?

Lee: Multi-die, heterogeneous 3D-IC systems are the greatest opportunity and the greatest challenge. There’s also a big challenge with China, especially for EDA. There are a lot of startups out there, and our ability to sell to China has been challenging.

Segars: The downturn that’s happening at the moment is a bit of a challenge, but there is an opportunity to keep investing through that because of some of the new technologies that are out there. That creates this playground for creating stuff. But to do that is going to require tools and flows and methodologies that are way more complex than what we have now, and it’s going to require a lot of R&D. So my request is, ‘Please don’t skimp on the R&D because we’re going to need it.’

Drako: The largest thing in front of us, by far, is the AI impact on the industry, and on the industries we serve. AI in the automotive industry, and AI in the video surveillance industry, will drive tremendous silicon consumption. And AI in the EDA industry is going to be huge. We're going to be developing all of those chips, we're going to be consumers of AI to make the chips, and we're going to use tools to make those AI chips. That thirst for AI is going to drive a move to the cloud faster than the EDA industry and our customers have been wanting to go, because they're going to be hungering for GPUs, and systems to deploy large numbers of them, to do all the compute to basically AI-route or AI-design chips. And all of that AI needs a massive amount of data. The chip designs we create today are terabytes of data. Once we kick in the AI component, it's going to be multiple terabytes of data. That's going to create this crazy hard problem, which we will tackle and solve. But when you start doing that, you end up with large amounts of data that's on-premises and in the cloud, and you need some of it here and some of it there. Maybe you can do it cheaper on Google, but your data is still on Amazon, and so forth. So there's going to be this huge issue of moving the data around. And there's an opportunity to create tools that can project your data and make it appear in places where it's not, but make it useful and fast in those places. AI is an opportunity and a challenge, and the data management to deliver that AI is going to be a huge challenge for us to overcome.

Sawicki: We don’t suffer from a lack of challenges right now. After decades of our government not knowing what a semiconductor was, all of a sudden they find it to be the most interesting topic they can imagine working on. That’s a challenge for us. But we’re in this just absolutely stunning place where we will be able to monetize what we’re doing, as well as to really transform the world. It’s an incredibly exciting time, and a scary one.

Faché: One of the challenges we see is that systems such as communication networks, cloud infrastructure, and electric vehicles are more and more complex to design, test, and build because of the performance requirements, new types of functionality, and new technologies. As a result, product development teams, and the entire supply chain behind any system, are putting much more emphasis on virtual prototyping. They are really making this shift left in the product development lifecycle so they can deal with the complexity of systems, sub-systems, and components up front in the product lifecycle. They also can accelerate time to market, improve productivity, and reduce costs and risks. To make that shift happen, we need to digitize engineering workflows, from requirements all the way to the point where you can manufacture a product and get it to meet specs. Getting these digital transformations right is a great opportunity for our industry. It's going to take an open ecosystem with connected design, simulation, and test tools, and intelligent workflow automation.

Narain: The workhorses for design are verification and simulation, which are the most widely deployed, and after that come formal and static sign-off, which is where we are. The biggest opportunity for static sign-off is shift left in verification, meaning the earliest possible verification of designs. If you define it that way, as a design step, then the designer has to get involved. That puts a lot of pressure on designers' time, so to be successful these applications have to give them the best possible user experience. There are tremendous opportunities in the design flow to enable shift left, and these applications need to be very timely and very efficient. The challenge is economic innovation. We are continuing to invest in technology and innovation to design at the speeds that are necessary, and we continue to expand coverage of static sign-off through products and through advancements in technology.

Kibarian: Manufacturing has been about benefiting from the next node (Moore's Law, Dennard scaling), along with the benefit of having all of the manufacturing concentrated at a small number of super-excellent manufacturers, all on the front end in the wafer fab, and all in a very controlled way. Now, because there's no more Dennard scaling, and hasn't been for a decade, and Moore's Law is slowing down, we need to have much more heterogeneous systems in advanced packages. That creates manufacturing challenges because the value isn't just in the wafer fab. The assembly is quite a challenging process now, and the test points are much more complex. So there are opportunities for improving yield in production flows. In addition, we're using a variety of silicon, and we will start using more on-chip silicon photonics and other technologies that get you higher yields and that continue to get you more performance per watt per dollar, irrespective of what we do with Dennard scaling or Moore's Law. That creates a tremendous challenge in the manufacturing space. On top of that, because of geopolitical reasons, we are now disaggregating the supply chain and moving it around. I cringe every time I watch Morris Chang talk about how Americans can't manufacture, and how the CHIPS Act is a waste of $52 billion. If you look at historical data, he's right. But when EDA folks look at a problem, it's like when Google looked at the advertising industry. They didn't just own a bunch of bots that went around selling ads. They took a different approach. And there's a much different approach we can bring to manufacturing. That will mean software and EDA will be an inflection point.

SE: Where do you see AI in design going forward? Who will be using it? Will we get better results out of this? And what are the issues you have to wrestle with going forward?

Segars: AI can help accelerate productivity a lot. Probably everyone has played with ChatGPT, typed in something, and been very surprised at the output. You can ask it to work in all sorts of different languages, and what it produces is pretty good. Interestingly, GitHub has introduced Copilot, which is Microsoft's integration, and the demos look very cool. There are productivity gains you get from putting a lot of your code together and not having errors. How many of us have beaten our heads against our screens looking for the missing close bracket that means the whole thing doesn't work? All of that sort of stuff may be just a thing of the past with a lot of auto-generated code. In the short term, there's a lot of automation that can just help accelerate getting to the point where you really have to apply your brain power. And in the longer term, it's just going to be more and more impressive. On the other side, it will be useful for verification, just producing test cases and understanding how flawed test cases are. This is something that we started doing a long time ago. Using machine learning to understand what's good and what's bad in terms of verification will save you a lot of time, cut down compute cycles, and get you to the finish line faster. That's going to be quite revolutionary.

Drako: We all used the term 'expert systems' 10, 20, or 30 years ago. It fell out of use. But we bring in large teams to do verification, and those teams consist of 5 or 10 or 100 engineers who have experience in doing this, and another 5, 10, or 100 engineers who are less experienced or less talented. And we train them in how to write test cases. You may not need that with a ChatGPT verification-equivalent tool provided by Cadence or Synopsys, or whoever creates that expert system. You may be able to generate large numbers of test cases, and maybe 10% of them are off a little bit and you have to fix those. But the productivity gains will make a huge difference. When people originally envisioned AI, they thought it would be a threat and take their jobs, and what immediately came to mind was blue-collar workers. That's completely wrong. It's the white-collar jobs that are going to be threatened by AI. I don't think I need a patent attorney anymore. I just describe my patent to ChatGPT and it generates a pretty decent patent. I don't need to go spend $20,000 at a law firm. It's an opportunity for white-collar jobs to gain productivity by leveraging all of that AI in so many ways.

Sawicki: If you think about the next eight years, Moore's Law is slowing down, but integration is still proceeding apace. If you look at the breadth of the solutions that we will have to put together, and then factor in a 2X improvement every two or three years, the stack is getting bigger. So you end up with these huge system-level contexts where it's not just an AI chip doing something locally. It's putting together a whole system, like factory automation or driving a car, where everything is operating together. The verification details are immense. Can you imagine us trying to get that done in the next eight years with the current state of education in the U.S., the number of engineers we're graduating, and the existing number of resources? It's never going to happen. Thank God we're in a place where we can take advantage of the opportunities in front of us. In terms of chip design, what if generative AI comes into the design space? How would it be useful? How would it be innovative? How would it discover what's been done? What is the interesting connectivity? What's being pulled together in really compelling ways? You still need to have that layer on top. What can we do to innovate on that baseline we need to get done with tools? It's going to be an amazing ride in terms of how we take advantage of these opportunities.

SE: We are dealing with a lot more data than we ever were before. Is that data good? And how do we know it’s good?

Kibarian: One of the big opportunities for AI and ML is to improve the data you're operating with. We see that happening more and more. Tools are helping to make sure the data coming off the manufacturing and test machines is consistent. That will be rolled out, and people will use it more over the next 10 years. It's not that far away.

Lee: There’s a really interesting body of work, where you take data in, but you also can augment that data. And so a digital twin of manufacturing equipment will have everything you submit, including some modeling errors. But combining that with data coming in from manufacturing is a great way to have practical self-correction.

Drako: The data for AI, though, is actually going to be very problematic, because each design house wants to control it and doesn't want to put it anywhere else. They're beyond careful, beyond paranoid, beyond controlling. And so there is going to be a huge challenge for the EDA industry to build the AI-caliber tools that the design industry wants without having access to the data, or being able to get that data, to train the tools and the neural networks.


