EDA tool providers will serve as powerful allies for customers to develop and implement workflows, and to show them what’s possible.
Experts At The Table: One of the big challenges facing EDA companies is explaining to customers what’s possible, how to streamline their designs, and what can be accomplished at what level of risk. Semiconductor Engineering sat down to talk about how relationships are fundamentally changing between EDA companies and their customers with Michal Siwinski, chief marketing officer at Arteris; Chris Mueth, new opportunities business manager at Keysight; Neil Hand, director of marketing at Siemens EDA; Dirk Seynhaeve, vice president of business development at Sigasi; and Frank Schirrmeister, executive director, strategic programs, systems solutions at Synopsys. What follows are excerpts of that discussion, which was held at the Design Automation Conference. To view part one, click here. Part two is here.
L-R: Keysight’s Mueth, Siemens’ Hand, Sigasi’s Seynhaeve, Synopsys’ Schirrmeister, Arteris’ Siwinski. Source: Jesse Allen/Semiconductor Engineering
SE: As the user community develops workflows, how does this get implemented?
Siwinski: You’ll see collaboration in the ecosystem, because no single entity, company, or brainiac, no matter how smart they are, is going to be able to figure this out and operationalize it alone. And we’re already seeing this, whereas 10 or 15 years ago you couldn’t get multiple suppliers to sit in the same room and talk about collaboration. To do that, you’d need a customer screaming at both. It’s different now. It’s accelerating because reality is setting in.
Hand: If you look at EDA and semiconductors together, it’s revolutionary when you look backward, but it’s evolutionary looking forward. It’s always steps on the path. In 3D-IC, you’re already seeing it. 3D-IC is an example of the digital twin on training wheels, in some ways, because it brings together mechanical, thermal, electrical, electronic, and semiconductor. It brings it all together. Someone was talking before about a constrained environment. It’s something we can manage. It’s becoming more achievable, where you’ve got mainstream high-volume products using it. It’s the evolutionary thing, and so you’ll probably look back in a few years and say, ‘We did see that as a result of this,’ but at the time it was just like we were attacking the next big customer challenge.
Schirrmeister: There is no single sign that we are done. It’s like verification. It’s always continuing. But the next CES will happen with even cooler products than we have today, and the next car line will happen with new capabilities. But we figured it out for that domain, and then there’s always the next challenge ahead. There’s an analogy with the operational design domain for ADAS. It works within a particular city, and that’s how I think about these flows. It works for this version of the flow. It works for 5G now with 2,000 specs, or for Arteris. Once I looked up the old OMAP specs and counted how the number of registers went up. That’s why you need tools to manage register complexity. Perhaps we don’t give ourselves enough credit, because we always enable the next CES, and we are aware that new challenges are coming. But within the operational design domain of that flow, new challenges will need to be figured out for the next generation.
Seynhaeve: And there’s the human factor. I want to tell a story based on ADAS. It’s a very complex domain, of course, and there are very clever solutions out there. They had tuned ADAS to merge onto the freeway very fast in Germany, because that’s what you do there, but very carefully in the United States. They had to differentiate it based on the region, but for some reason ADAS did not work in Pakistan, I believe. They figured out there were crowds of people, and the people on bicycles and so on get through very fast, but cars with ADAS don’t get through. What’s wrong? It works everywhere else but here. Then they figured out they had to change the system. With pedestrian detection, you’re allowed to push them out of the way a little bit. As soon as they built that into the system, it started working.
Hand: That’s a good point. As an industry, there are always compromises, but we always manage to solve the problems. How many times in the industry has the sky been falling? Oh, the next node is going to bury us.
Schirrmeister: That’s how every EDA pitch works: ‘The sky is falling, buy my solution.’
Hand: But as an industry, we still enable the next node, the next transition, the next level of complexity, the next level of productivity gains. So it will just keep grinding away.
Schirrmeister: And we’ll figure it out as the challenges come.
Seynhaeve: There’s no end result to be expected. There are all these factors that give us surprises and we work through them and we’re never done.
SE: So we’re never done with digital twins, and that’s how it should be, right? Because that means virtually we know what’s happening with our products, even in the field.
Hand: Yes, and we’ve had a digital twin in EDA since we put down the X-Acto knife and stopped using Rubylith. That was the last physical embodiment of the circuit. Since then, it has just taken on more and more scope. And we’re not even close to the end, because once you’ve got the virtual product modeled and you’ve got the manufacturing modeled, then you can start to merge those together. You can start to have an adaptive digital twin, where the product is changing based on manufacturability. Something that changes in the production line can automatically change the design or the in-life aspect of it. You’re getting data back and it’s like, ‘Oh, it’s lasting a lot longer than we expected. Therefore, we can change the requirements on the incoming side,’ and the whole thing becomes adaptive. Once we get that down, someone else will say, ‘How about we now go do quantum physics in the digital twin?’
SE: Are the users going to define all of their workloads? Is it incumbent on them to define all of the workloads and tell EDA?
Hand: No, because we have conversations today with customers who come to us and say, ‘We know our system design is broken. Tell us how to fix it.’ And the answer then is, ‘Well, tell us what you’re trying to do.’ Different companies will do it differently, so it’s a collaboration. There’s no one answer. There’s no one company. There’s no one solution. There’s going to be a collaboration on, ‘What is the problem you’re trying to solve? What are your care-abouts?’ And then we will work through it with them. I don’t think the customers can figure it all out, because they don’t know what is possible. We can’t figure it all out, because we don’t know what is needed.
Siwinski: Since we’re the ones who are actually implementing it, that puts us in an interesting position, because we are the intelligence brokers. We can give them answers even for the questions they don’t know how to ask. These used to be purely transactional relationships. You buy a tool from a vendor. Now, it’s a different level of partnership, with trusted advisors. A lot of this stuff is intersecting through our technologies, so we actually do have that extra ability to ask what is important to the customer on this or that. And in many cases, the customer doesn’t actually know. They’re just focusing on their own thing, which is hard enough.
Schirrmeister: They also don’t know what’s possible. So that’s why you always have to start with open-ended questions to understand what they want.
Mueth: The application engineer is very valuable for the vendors.
Seynhaeve: There’s actually an answer here that was given to us by the venture capitalists. They look at three different markets. The first is the ‘hair on fire’ market, where the problem is clear. Many people come up with many solutions, people throw money at it, the competition is hard, and the answer is marketing, marketing, marketing to get a solution out there to be sold. The second is the hard problem. People know there’s a problem, but they really don’t care about the solution. For instance, paying with a credit card. It means replacing something that works, ‘but we’re not really interested.’ The real problem is money transfers. Then there’s the visionary problem, where you have the vision, but you need to sell people on your vision. Now, AI is definitely in the hair-on-fire category. Everybody says, ‘We want it, we want it, we want it.’ But then we ask, ‘What’s the problem you want to solve?’ Most customers are not going to have an answer for you. But they want AI. Now, you guys have done a wonderful thing with AI, mostly with machine learning, by doing design space exploration and things like that. They still want more, but they can’t define what they want.
Hand: It goes back to the fact that they don’t know what’s possible. They ask, ‘What are you doing? We know it’s interesting. We can see potential, but we don’t know what’s actually possible. Is it possible to give me a little bit of a speed-up? Is it possible that you can give me something if I’m willing to give up something?’ And that’s why it becomes a very different relationship. It changes from a transactional relationship to one where you’re looking at how you partner with the customer. How does the ecosystem partner together, because it is trying to solve a much larger problem? It’s no longer, ‘Let me build the world’s fastest chip and find an application.’ Even the customers are trying to build the solution.
Mueth: In their minds they probably have an expectation that AI is going to bring in the next generation of automation. But then they can’t define that. They just know, ‘I should be able to do my job better or faster.’
Siwinski: This is where the trusted advisor comes in, right? Because in many of those meetings, the question is, ‘Well, what can you guys do with that? What can we get? We need to get something. What can we actually get?’ And then it comes back to this evolutionary grind because, again, AI is not new. It’s just evolving to the point of becoming really interesting.
SE: So what’s the next question to the customer, to the industry?
Siwinski: If we bring it back to the AI conversation, it’s figuring out the right level of training models that are really application-specific, multi-domain, and at multiple levels of abstraction. That’s going to be the next challenge. Because, again, AI without the right training and the right constraints is just math. Without the right heuristics around it, it’s garbage in, garbage out. Being able to refine it is where the application engineers come in. This one did the discussions, this one did the partnerships. This is where all that know-how comes in and basically transforms the nature of how you’re actually solving the problem. You’re trying to move from brute-forcing it to guiding it. The guiding process is very hard work, too.
Hand: But you could also take it up to a higher level of abstraction. ‘What are the tradeoffs I’m making? What is the impact on risk?’ Ultimately, risk is what customers care about. They will give up a certain amount of control. They will take productivity gains, as long as it’s improving the overall risk equation. And that’s a hard thing to quantify. If you’re getting a big productivity improvement, is that increasing risk or lowering risk? If I’m moving to standardization, is it increasing risk or lowering risk? That may be one way of looking at the whole problem. If you look at everything as a risk tradeoff, how do you quantify that risk? And how do you look at all these tradeoffs you’re making and see which direction they’re taking that risk equation?
Mueth: You need to bound your problem. So the person who is doing the training really needs to understand the boundary conditions and how to train. That isn’t your typical engineer. So it’s probably back to the vendors to help with that. The other thing we find in building physics-based models with AI is that you’re giving up knowledge and detail of the model. You have a higher level of abstraction. You don’t know how it got there. It just got there. That can be a little bit dangerous.
Hand: On the flip side, you’re then able to do a set of verifications with that abstract model that may not be completely accurate, but it’s still things that you wouldn’t have been able to do before. So you’ve introduced some new risk, but you’ve taken away a big chunk of other risk. And that goes back to, ‘If you aren’t building the models, could you have done that verification before?’ You may not have, because building the model would have been too hard. Using the full fidelity would not have been practical.
Mueth: A hybrid approach is good, because then you have a blend of your basic knowledge in the areas that are hard to characterize in your model. If you can live with the abstraction and a derivative, then that just becomes part of your model. At least you have some level of confidence you can count on.
Schirrmeister: To the question of what’s the next question, we do have these conversations with customers where we ask them, ‘How big will your design be? How much bandwidth will you get in and out? How many specs do you need to support or meet for 6G versus 5G?’ Those are the known questions to ask, and we’re doing this. Where it becomes really interesting is when they are starting to predict, and their predictions may not be 100% right anymore. Or they don’t know what they want or where they want to go, partly because they don’t know what’s possible underneath. But as part of that process, the industry figured out that there comes a time when everything won’t fit on the chip anymore, just because of reticle limits. And we sat down and figured out solutions. That deserves quite a bit of credit.