As chip complexity rises with disaggregation and chiplets, the design process will become increasingly workflow- and workload-specific.
Experts At The Table: EDA has undergone numerous workflow changes over time. Different skill sets have come into play over the years, and at times this changed the definition of what it means to design at the system level. Semiconductor Engineering sat down to discuss what this means for designers today, and what the impact will be in the future, with Michal Siwinski, chief marketing officer at Arteris; Chris Mueth, new opportunities business manager at Keysight; Neil Hand, director of marketing at Siemens EDA; Dirk Seynhaeve, vice president of business development at Sigasi; and Frank Schirrmeister, executive director, strategic programs, systems solutions at Synopsys. What follows are excerpts of that discussion, which was held at the Design Automation Conference.
L-R: Keysight’s Mueth, Siemens’ Hand, Sigasi’s Seynhaeve, Synopsys’ Schirrmeister, Arteris’ Siwinski. Source: Jesse Allen/Semiconductor Engineering
SE: System-level design has become broader over the years. From your perspective, how has it changed?
Schirrmeister: System-level design goes back to Gary Smith in 1997 with ESL. He had this vision that we would do electronic system-level design. Are we there yet? The framing for it comes from the fact that the complexity of the things we do — because our customers desire more, and more cool stuff — grows at a higher speed than we can figure out how to build them productively, reliably, and in time for CES. We are working on this. I remember Aart’s presentation from 2002 at DATE, where he said, ‘Hey, we figured out the genome. If someone asks you why you’re working so hard, tell them you’re working on the design productivity gap.’ Some of that is still true, but now it has expanded in scope dramatically. What used to be just a system-on-chip is now disaggregating. There are so many more things that can go wrong. You go from SoCs to systems of SoCs and chiplets, and you need to do a whole lot more things to make them work. So the complexity of the things we want to do is still growing faster than we can figure out how to do them. The most interesting angle is that the loops are getting bigger and bigger. When I had my first chip, we had Manhattan metrics for synthesis, and it was a very small loop coming from the implementation and feedback. Now we’re doing architectural decisions, figuring out UCIe, chiplets, going all the way down into implementation, optimizing power back up architecturally, so those loops of optimization and design flow coherency get bigger and bigger.
Siwinski: I look at it as two different dimensions. One dimension is abstraction, which always goes up. This has been the mantra all along, and as you move from GDSII to wherever you define system level, it’s always a debate where you draw the line on what’s pragmatic and what’s academic. But it always goes up. To the point on loops, being able to optimize that and use AI for optimizations on the steps in between and the loops throughout, that’s been happening. It’s something that will continue to happen. There will always be another layer, because complexity explodes and there will always be more. That will be one optimization — abstraction optimization with AI co-optimization. The second dimension of this is going to be across, because as you go to the higher level, the reality is that the care-abouts in your key performance indicators (KPIs), your key constraints, are no longer the low-level ones. It’s not just a PPA tradeoff. At this point you have to think about the workload, the application, how and for what it’s going to be utilized. So all of a sudden, the higher you go, the more of an explosion you have in this plethora of end applications and use cases, which opens a great opportunity for different kinds of training optimizations. In a way, it’s a dichotomy, because you went from something that was maybe one thing and you tried to make it very specific, then you made it more optimized and reusable and tried to have a similar process to build things. But the higher you go, the more degrees of freedom you open up yet again. It’s almost to the point where it’s too much. So you really have to be smart about how you do some level of the divide-and-conquer segmentation training for the right applications, right use cases, and the right set of constraints. Otherwise, it will just continue to be overwhelming. Eventually we’ll get this fixed, but it’s these two dimensions playing against each other.
Mueth: The starting point is the classic V model, except it’s here now and it gets much longer. It’s a multi-dimensional system of systems. One of the guys at ESI had a good perspective. He’s a mechanical engineer who’s been around for a long time, but now electronics is an integral part of that. Think about an automobile. When I was a kid driving one, you had to fix everything. You had to set the points, etc., and there was no electronic connection anywhere. Now, electronics are an integral part of every system. And then, looking at the technology, I agree with the complexity point. Everybody wants more of everything. When I was an engineer doing DOS and Unix simulation, it was very simple stuff. That’s crude by today’s standards. But I don’t think simulation technology by itself is the limiting factor. It’s how you integrate all these applications together and make them play together with all the disciplines. That’s the next horizon that I see.
Hand: There are three things that I see happening together. One is the definition of the system, which I think a few of us have talked about. One person’s system is another person’s system of systems. At every level, you’ve got system complexity going up. The question is, at what level do you mean when you draw a system? Is it an electronic system? Is it an electro-mechanical system? That’s one thing that comes into play with the greater complexity. The second thing is that the way systems are being architected is fundamentally changing. Take the car, for example. In the past, if you were architecting a system, the electronics were buried inside the system, so you had the car, an engine, and if you wanted the engine to run better you put in an ECU. Inside the ECU are the chip and the software. Now, the electronics are defining the product itself. If it’s an autonomous vehicle, a flight system, or even high-level cybertronic systems, those are defining what is going into mechanical, what is going into software, what is going into electronic. So fundamentally, the system design concepts are becoming multi-domain at the very beginning. You can’t design the system without understanding the electronics. You can’t define the electronics without understanding the system. They’ve become completely intertwined. So that is another place where the scope has started to expand. And in regard to complexity, you really have three exponential growths piled on top of each other. You’ve got the traditional semiconductor exponential growth in complexity, which we manage with an exponential growth in their power. You also have the applications themselves growing in their complexity, the systems, and the systems of systems, and then the actual multi-domain thing, so it’s exponential on exponential on exponential. That’s the second effect. The third effect is that the cost of failure is way more catastrophic now. If you’re putting these electronic systems in charge of very large kinetic masses, bad things can happen. It’s important to get it right. You can’t find the error during integration, either, because then it’s too late for CES, or whatever example you’re using. But even worse is if you don’t find the problem at all, and it is found when there’s a catastrophic failure. That’s really not good. And the electronics now are going to be the reason for those catastrophic failures.
Seynhaeve: I agree that we need to start with what the system level is. And when I think about systems, I do automatically think about digital twins, as well, and the level of complexity that’s introduced there and how that’s going to ripple back to traditional EDA. If you look at digital twins, it’s amazing. You have software talking to chemical stuff, talking to all kinds of different natures, and a lot of how we do that needs to be invented. The way AI comes into the picture is that it’s so complex that we need to introduce automation and new levels of understanding and analytics that only AI is capable of giving. My take is that we will learn and develop so much at this level, working with digital twins, that it will have an impact on the traditional EDA flow, as well. It will change. It will be automated. It will be different than what we have today. System level is going to have a tremendous effect. More than half of it still has to be developed, but the future’s bright.
SE: In the near term, if we are architecting systems differently, how does that impact system design methodologies?
Hand: It has to start with, ‘What is the overall challenge you’re trying to address?’ We make a distinction between model-based systems engineering and electronics, and model-based cybertronic systems engineering. With model-based cybertronic systems engineering, you’re looking at the role of the electronics versus the mechanical versus anything else. And you’re doing that functional allocation very early on, which means you have to be able to start using synthetic workloads. You need to be able to understand the interplay there, and then also have abstraction, whether that’s automatic or AI-enabled abstraction. That abstraction allows you to make tradeoffs, because it’s not a single-domain tradeoff anymore. It’s not just a question of how fast I can run the software. It’s, ‘How fast can I do this actuation? Can I do that with a sensor of this resolution?’ Everything starts to work together, and you need the tooling to be able to make those tradeoffs. You can’t think of it just in the nice comfortable way of, ‘Here’s a semiconductor, let’s go build it and see if it works in a product.’ It’s all intertwined. It’s the reason systems is the fastest-growing area of semiconductors.
Schirrmeister: A corollary of this complexity is that you have more people involved from different disciplines. Two examples: Jim Keller [of Tenstorrent] gave a keynote that was very insightful, pointing out that you have about 600 lines of PyTorch code up there, and that spins out to an amazing number of chips. He showed five layers of complexity. Then, a few weeks ago, I was at a conference where I moderated a panel. I had an OEM, a Tier One, and two semis on it, and I threw myself in as the EDA and IP perspective. The discussion became interesting because you have this complexity of a system. If a car is a system, then 100 cars is a system of systems. The car by itself is so complex that you have so many people interacting with each other who are interdisciplinary from a tools perspective, but also from a company perspective. That’s where things like MBSE come in to hold it all together. What’s the center of gravity? Will it be PLM? Will it be the design flow? It might be the data that brings it all together, so it leans more towards PLM.
SE: Is the center of gravity for one company going to be different from another?
Schirrmeister: Absolutely, depending on the company. To make this a bit more controversial: if all this happens, why have we never gotten above RTL as a signoff point? We are not even at an RTL signoff point. That’s the other thing I found insightful about Keller’s keynote, in which he asked, ‘What’s the stuff I did wrong?’ Gary Smith postulated all this in ’97, and he had this hockey stick curve about ESL. Here we are, almost 30 years later, and we haven’t figured it all out. We have figured out a lot, like IP reuse and some of the system integration aspects. We did things like IP-XACT to make sense of the complexity. But still, from RTL through GDSII, so many things can happen just in this contained space, and you don’t have that fully automated. Yes, we may improve productivity, but we haven’t gotten there yet.
Hand: It’s the loops you were talking about before, because people only do as much work as they need to do. There are companies that will do an RTL handoff, but they give up a lot in that process. It’s a handoff, so there’s a lot of buffer built into it. There’s not a lot of optimization done, because there’s no way for those loops to be closed. As an industry, we’ve done a pretty bad job of opening up and having well-defined ways of closing those loops. It works in a nice vertically aligned company where they control everything, because the loops don’t have to be exposed to anyone. Once you start trying to build a virtual vertically integrated company where there are many parties involved, whether you call it a data problem or an interfaces problem, you need to close those loops. If you can’t close those loops, people aren’t going to give up the control. There has to be a way, when someone takes that rough handoff — whether it be high-level synthesis, whether it be RTL — where you can go and optimize it: ‘I need to have this question answered.’ If it’s an open loop, it’s just not going to work.
Siwinski: The one advantage in all of this is that it’s not an either/or proposition. This is mixed by definition, because you can actually choose. Even today, and for the last 10 years, you can take an ESL flow and produce something for a particular piece of IP and do a pretty good job of it, all the way from spec to implementation. Could you generalize that to the whole chip? No way. But the question is, do you have to? Given the fact that you have a large amount of reuse, and now the rise of 3D-ICs and chiplets, you have a memory. Do you need to worry about it? Well, maybe you don’t. It becomes a question of disaggregation/aggregation, and where you are going to be adding the secret sauce. Then there is the end application and the multi-domain thing. How are the software guys, the analog guys, the power guys, the fluidics guys, the end product, the car definition, all those people who have no language in common, going to bring it down to various pieces across hardware, software, IP, and manufacturing? That will have to happen. So there’s that dimension. But you can still do some level of divide-and-conquer for pieces within.