Increasing design complexity, AI, and geopolitics make it more difficult to share data; open APIs can help.
Semiconductor Engineering sat down to talk about more openness in EDA data, how increased complexity is affecting time to working silicon, and the impact of geopolitics, with Joseph Sawicki, executive vice president for IC EDA at Siemens Digital Industries Software; John Kibarian, president and CEO of PDF Solutions; John Lee, general manager and vice president of Ansys’ Semiconductor Business Unit; Niels Faché, vice president and general manager of PathWave Software Solutions at Keysight; Dean Drako, president and CEO of IC Manage; Simon Segars, former CEO of Arm and board director at Vodafone; and Prakash Narain, president and CEO of Real Intent. This is the final of three parts of that conversation, which was held in front of a live audience at the ESD Alliance annual meeting. Part one of this discussion is here. Part two is here.
SE: Are we making progress in verification, particularly as chips become more complex? It still accounts for 70% to 75% of the time it takes to get a design out the door.
Narain: Absolutely, it is getting better. My complaint to customers is, ‘We doubled whatever needed to be doubled, but we have to run faster just to stay in the same place.’ Everything is improving all the time. But we have all kinds of different components and economics running together at the same time.
Faché: I agree it’s getting better, but there’s still a lot of room for improvement. There’s a lot of activity around design, simulation, virtual prototyping, verification, and test, but these domains are still not well connected. There is an opportunity to have a verification workflow that spans design through test. So you collect simulation and test data, look at the correlation, and learn from that comparison. Then you improve the design and get it right. But with 70% of the time spent on manual steps using homegrown tools, we have opportunities to automate that process all the way from data collection and storage. If the data is collected automatically, tagged, and stored, you can get the right analytics and the right insights, update the design, and hopefully get it right the second time.
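A minimal sketch of that kind of automated collect-tag-store loop, assuming a simple JSON file store; the metadata fields and paths are illustrative, not any vendor’s actual schema:

```python
# Tag and store simulation/measurement runs so they can be correlated later.
# Store layout and metadata fields are illustrative assumptions.
import json
import time
from pathlib import Path
from statistics import correlation  # Python 3.10+

STORE = Path("verification_store")

def store_result(kind: str, design_rev: str, corner: str, samples: list[float]) -> Path:
    """Save one run with enough tags to find and compare it later."""
    STORE.mkdir(exist_ok=True)
    record = {
        "kind": kind,              # "simulation" or "measurement"
        "design_rev": design_rev,  # e.g. a hash of the design database
        "corner": corner,          # e.g. "ss_0p72v_125c"
        "timestamp": time.time(),
        "samples": samples,
    }
    path = STORE / f"{kind}_{design_rev}_{corner}.json"
    path.write_text(json.dumps(record))
    return path

def correlate(design_rev: str, corner: str) -> float:
    """Compare simulated and measured samples for the same revision and corner."""
    sim = json.loads((STORE / f"simulation_{design_rev}_{corner}.json").read_text())
    meas = json.loads((STORE / f"measurement_{design_rev}_{corner}.json").read_text())
    return correlation(sim["samples"], meas["samples"])

if __name__ == "__main__":
    store_result("simulation", "a1b2c3", "tt_0p8v_25c", [1.01, 1.05, 0.98, 1.10])
    store_result("measurement", "a1b2c3", "tt_0p8v_25c", [1.00, 1.07, 0.97, 1.12])
    print("sim/meas correlation:", correlate("a1b2c3", "tt_0p8v_25c"))
```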
SE: One of the ways forward has always been more and better models. Can we build those models at the same rate and level of completeness as in the past, given that we now have gaps in the flow?
Lee: If you look at what we’ve done with chiplets, you need a thermal model when you start stacking die. There are some obvious techniques out there, but there are rich opportunities to apply rigorous mathematical techniques to reduced-order models. The models are in good shape. The challenge is keeping up with doubling complexity while continuing to provide that richness from an R&D standpoint. But I feel like we have all the mathematical tools, and if you throw in some AI/ML, it helps us cover the gaps.
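A minimal sketch of one such reduced-order technique (proper orthogonal decomposition with Galerkin projection) applied to a toy 1-D thermal chain; the sizes and constants are arbitrary illustrations, not a production thermal solver:

```python
# Build a full-order 1-D conduction model, extract a few POD modes from
# transient snapshots, and project the system onto them.
import numpy as np

n = 200                      # full-order nodes along the stack
A = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)   # discrete Laplacian
A *= 50.0                    # thermal diffusivity scaling (arbitrary)
b = np.zeros(n); b[0] = 1.0  # heat injected at one end (e.g. a hot die)

def simulate(A, b, steps=500, dt=1e-3):
    """Explicit-Euler transient solve; returns the snapshot matrix."""
    x = np.zeros(A.shape[0])
    snaps = []
    for _ in range(steps):
        x = x + dt * (A @ x + b)
        snaps.append(x.copy())
    return np.array(snaps).T   # shape (states, steps)

snapshots = simulate(A, b)

# POD: keep the dominant left singular vectors as a reduced basis.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
k = 5
V = U[:, :k]

# Galerkin projection of the full model onto the k-dimensional basis.
A_r = V.T @ A @ V
b_r = V.T @ b

full = simulate(A, b)[:, -1]
reduced = V @ simulate(A_r, b_r)[:, -1]
print("relative error of 5-mode ROM:", np.linalg.norm(full - reduced) / np.linalg.norm(full))
```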
Faché: A good example of that is how Synopsys, Ansys, and Keysight are working together to bring the best tools into an integrated flow, and then validate that against measurements. We have proof points that if we bring the right tools together, we can integrate them into a flow and get very predictive simulations.
Lee: It’s important to note that all the problems we’re talking about today can’t be solved by one method. It takes a village to solve these problems, which go from design all the way through to manufacturing. This idea of open and extensible platforms is a form of modeling. If you have simulation as a service, you need to know ‘this’ temperature and you need to know something about magnetics. And it cannot be a closed system. You need a system that can talk to other systems. You can talk about reference flows, but systems talking to each other is the future. We have to embrace openness. Then we can solve more problems.
SE: There’s more stuff moving left in the flow than there ever has been. What’s the impact of that on the design side?
Sawicki: It’s harder, for sure, anytime you take on these new things. What generally happens is something blows up, which generates a new set of rules, a new set of capabilities, and a desire to make sure that doesn’t blow up in the fab — which is painful to deal with. A chip dying in the fab, where you’ve got to do a re-spin, pushes your market window out by six months. Those are the kinds of problems that occur, and they are why you have to bring more of that foundry awareness into the design cycle to optimize these things.
Segars: A chip blowing up in the fab is bad, but if it blows up in the field it’s even worse. As the process gets more complicated, or there are more physics issues, the EDA community is focusing on these problems, and that’s a great thing. The promise of AI-like techniques is that you can optimize across a broader set of steps in the design process. The reason there’s so much margin is because you don’t want these bad things to happen, but the margin stacks up and you end up losing performance. So it’s all about minimizing risk and lowering the cost of failure. If you can optimize it across a broader set, you can squeeze those margins. That, in itself, holds a lot of promise. But it does require being able to absorb bigger datasets and look at different views of the same thing, trading off one thing for another to come up with better efficiency.
Drako: Almost all of our customers are using the cloud for some aspect of the design, and a number are doing pure cloud implementations. All the EDA tools expect a file system, because that’s the way we wrote all the software, but the cloud doesn’t really provide a good file system. The cloud generally provides a good object store, which is a completely different beast. The translation between an object store and a file system is somewhat problematic. There are tools and file systems to do it, but they take significant performance hits, and so the tools run slower.

The bigger problem, though, is that the datasets are very large. Even in a local environment, where the customer has 5,000 or 10,000 servers in their own data center and they need to access the data and distribute it for simulation, it’s a significant issue. All the servers need data from one NFS server because they want one golden copy. But because the data sizes are so large, they need 500 servers to run the regression tests and see where they are. So distribution of data continues to be a significant problem.

And then when you say, ‘We’re going to be a cloud data center, and we’re going to run some stuff in Amazon to do this AI, and run some stuff over here for that AI, and we’ve got some design engineers doing verification,’ data is being shifted all over. Suddenly you’re in the midst of a massive nightmare, because nobody knows which data is being used by whom, and the job grinds to a crawl because they’re trying to transfer data between India and the U.S. in order to launch it, and they need 72 gigabytes of data now, but it takes three hours to transfer, so everything grinds to a halt. There’s this huge data problem, and it’s getting worse and worse. And it’s going to get even worse as we add AI components. I don’t think this industry understands the potential and the impact of AI on it.
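A rough sketch of the mismatch Drako describes: legacy tools read POSIX paths, so object-store data has to be staged into a local cache before a tool can open it. The fetch function is a hypothetical stand-in for a real object-store client, and the content-hash cache is one way to keep a single golden copy from being transferred repeatedly:

```python
# Stage object-store data onto a local path a file-based tool can read.
import hashlib
from pathlib import Path

CACHE = Path("/tmp/design_data_cache")

def fetch_object(bucket: str, key: str) -> bytes:
    """Hypothetical object-store read; wire this to your actual client."""
    raise NotImplementedError("replace with a real object-store GET")

def stage_for_tool(bucket: str, key: str, expected_sha256: str) -> Path:
    """Return a local POSIX path, downloading only on a cache miss."""
    CACHE.mkdir(parents=True, exist_ok=True)
    local = CACHE / expected_sha256
    if local.exists():
        return local                      # already staged; no cross-site transfer
    data = fetch_object(bucket, key)
    if hashlib.sha256(data).hexdigest() != expected_sha256:
        raise ValueError("object does not match the golden copy")
    local.write_bytes(data)
    return local

# A legacy tool then simply opens the staged path:
#   netlist_path = stage_for_tool("design-data", "rev42/top.v.gz", "<sha256>")
#   with open(netlist_path, "rb") as f:
#       ...
```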
SE: How do you see that playing out?
Drako: Very little of the discussion here has been about generative AI and what it’s going to do for us. There’s a whole new category of tools coming. Everyone’s working on them. But I’m not sure the industry really grasps the power of what’s going to be possible. You’re going to be able to write a spec for a chip and get pretty close to getting a chiplet out of it. I can go to GPT and write a spec for a patent, and it’ll spit out a 20-page patent for me. Is it perfect? No. But if you take 20 minutes to clean it up, it’s fine. So design is going to get easier and more prolific. The business model isn’t there for us to support large numbers of designs right now. Most of those designs may go to FPGAs or whatnot. But the impact on what we do and how we do it is going to be huge. And in order to make that possible, the amount of data is going to go up, up, up. Training data is really big. So we’re putting a lot of energy into how we manage and distribute that data.
Lee: About 90% of the EDA software that’s used today is based on 1980s computer science. That’s a problem. You rely on having a central netlist and storing the data. But if you talk to any computer science student today, they’re using systems and methods that are way beyond what 90% of the people in this industry are using. There’s a lot of opportunity on the outside. ChatGPT is a good example, and it’s certainly something all of us are actively looking at. But there’s an education problem, also. We have very few students coming into EDA in the U.S. And in China there are a ton of smart engineers, but because of geopolitics that country is spinning up its own industry, and that’s an existential threat long-term to the West. We may be stuck in 1980s computer science.
Drako: We need more investment in any and all types of education. In the 1900s, the U.S. led the world in education. Today we no longer lead the world in education, and that’s going to have a significant negative impact on us. We need to fix that.
Audience question: Is this an opportunity to democratize EDA, so it can be opened up to large numbers of talented people who currently aren’t interested or don’t have access?
Lee: Oftentimes we assume you need a Ph.D. out of Berkeley, but the modern way of learning may not require that. There’s great value in open source, but the core algorithms can be hard to create. Can you have high-production, high-quality EDA software that comes out of a university, like OpenROAD? I’m skeptical of that. What I’d rather advocate is for the APIs for our tools to be open, with a version that’s accessible to students everywhere.
Kibarian: The U.S. government is funding open-source EDA while our adversaries are trying to win. You don’t want a million startup companies, and you don’t want your source code out there to let them build stuff with it. That seems crazy. By the same token, walled gardens are not a valuable way for us to work, either. There are definitely ways to address access and cost. We have done a freemium personal product, and China has experimented with freemium. That does democratize tools. But there are lots of ways to do that without exposing source code. If you wall off a region of the world, that region will figure out how to do something on its own. And then, when you re-integrate, it’s going to be a competitive threat. So let’s not make it easy to catch up.
Faché: The open API is critical, and that’s what most of us are working on. You can protect your IP, but you also can make it much more accessible and easy for a very large community of people to add value to it.
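A minimal sketch of that open-API-over-protected-IP idea: the solver stays a closed binary, while a small documented layer exposes queries others can script against. The tool name, flags, and report format here are hypothetical:

```python
# Open Python wrapper around a hypothetical closed-source timing engine.
import json
import subprocess
from dataclasses import dataclass

@dataclass
class TimingPath:
    startpoint: str
    endpoint: str
    slack_ps: float

class TimingSession:
    """Thin, open API over a protected engine invoked as a subprocess."""

    def __init__(self, design_db: str, binary: str = "sta_engine"):
        self.design_db = design_db
        self.binary = binary

    def worst_paths(self, count: int = 10) -> list[TimingPath]:
        # Asking the engine for machine-readable JSON (rather than a log file)
        # is what makes the results easy for a large community to build on.
        out = subprocess.run(
            [self.binary, "--design", self.design_db, "--report-paths", str(count), "--json"],
            capture_output=True, text=True, check=True,
        ).stdout
        return [TimingPath(**p) for p in json.loads(out)]

# Usage, assuming such an engine exists on PATH:
#   for p in TimingSession("top.db").worst_paths(5):
#       print(p.endpoint, p.slack_ps)
```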
Narain: Our job is to make sure that chips eventually succeed, with no manufacturing problems and no functional problems. But the data is very private, so when we talk about data, everybody has their own view of things. The challenge is how to pull together all the data — whether it’s at the design stage, synthesis, place-and-route, manufacturing, or silicon — and make the problem more visible. We can solve this problem much earlier as a community than individually.