Decisions about what to run in-house are complex, and may vary from one company to the next.
Discussions about cloud-based EDA tools are heating up for both hardware and software engineering projects, opening the door to vast compute resources that can be scaled up and down as needed.
Still, not everyone is on board with this shift, and even companies that use the cloud don’t necessarily want to use it for every aspect of chip design. But the number of cloud-based EDA tools is growing, and so is the number of proponents who argue the cloud can provide better flexibility in deployment, design scale, capacity, and remote collaboration capabilities. And despite early concerns about security and licensing models, they insist those are solved issues.
“Why are engineering teams looking at the cloud? First of all, they’re simply running out of capacity,” said Sandeep Mehndiratta, vice president for enterprise go-to-market and cloud at Synopsys. “Everybody’s capacity needs are different. And because of the complexity of the designs, and overlapping projects, it’s a flex mechanism. The proliferation of Amazon, Google, Microsoft, gives them the option now.”
Whenever there is more work than an on-premises data center can handle, or pressure to complete a project faster, small and midsize companies face a big problem. Even if they want to scale up their on-prem capacity, they cannot do so fast enough.
For smaller companies, the biggest concern is total cost of ownership, because their core competency is not IT expertise. “Add this to the fact that for other applications, there is a massive movement to cloud anyway,” Mehndiratta said. “If you look at the traditional method of what customers do today, they manage the deployment. They manage the flows. They manage the data center or whichever cloud provider they want to use. EDA vendors provide the tools and services, but they do everything else around it.”
Still, IC design and EDA are quite a bit more complex than a lot of the other vertical industries that have embraced the cloud.
“On the generic IT side, there are standalone domains like HR software, financial software, IT service software, customer relationship management software,” said Craig Johnson, vice president, EDA cloud solutions at Siemens EDA. “All of those are independent silos because the users of those services tend to be aligned with those categories. In silicon design, there are different types of engineers, from front-end logic to layout to timing to analog. There are a dozen or more specializations, but the designs have to be managed consistently across that whole flow. So the difference is there must be an environment where the individual applications work well in a cloud environment. But the connection of those applications, and the handoff of the data as it moves from its nascent form all the way to its final tape-out, have to be preserved. It’s that complexity of the flow that is also a contributing factor as to why there are still huge investments in data centers today, and why those have not been abandoned for the cloud.”
Fig. 1: Total cost of ownership tradeoffs. Source: Siemens EDA
Many of the applications that have successfully moved to the cloud have predictable results and compute costs. “Salesforce.com is really an application where there are people accessing data and viewing data in different formats,” said Johnson. “It doesn’t require large compute on the back end, but there needs to be sufficient compute to distribute the information in a timely way so there’s not latency that becomes onerous. But that’s really about lots of little machines at the edge of the cloud. It’s completely different than, say, doing physical verification of a large design. Some companies could easily use a dozen large-scale servers that have multi-terabytes of memory in them.”
For many of these applications, high-performance computing isn’t as important as volume and cost. “In a SaaS application like Salesforce, what each user needs is quite predictable,” he said. “They’re taking away all of the complexity of dealing with hardware. They can do that because they know, for each user, how much storage that user will require, how much incremental compute that will require, and it’s going to be steady for the duration of that subscription to the environment. You can’t do that with EDA. You can have one verification engineer who, on Tuesday, maybe she’s only launching 10 simulations, but by Thursday she wants to launch 1,000. That’s a highly variable amount of infrastructure and resource, and it’s another reason the semiconductor design world isn’t immediately able to flip from an on-prem data center to a cloud environment.”
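The variability described above can be made concrete with a toy cost model. All prices and workload numbers below are invented for illustration: sizing fixed on-prem capacity for the Thursday peak means paying for it every day of the week, while elastic capacity is billed only for the jobs actually run.

```python
# Toy cost model: fixed on-prem capacity sized for peak demand vs.
# elastic cloud capacity billed per job. All prices and workload
# figures are hypothetical, for illustration only.

# Simulations launched each day (Tuesday ~10, Thursday ~1,000).
daily_jobs = [10, 10, 10, 1000, 10, 0, 0]

CORE_HOURS_PER_JOB = 4             # assumed compute per simulation
ONPREM_COST_PER_CORE_HOUR = 0.02   # amortized hardware + power (assumed)
CLOUD_COST_PER_CORE_HOUR = 0.05    # on-demand price (assumed)
HOURS_PER_DAY = 24

# On-prem: provision enough cores to finish the peak day's jobs,
# and pay for that capacity every day whether it is used or not.
peak_core_hours = max(daily_jobs) * CORE_HOURS_PER_JOB
cores_provisioned = peak_core_hours / HOURS_PER_DAY
onprem_weekly = cores_provisioned * HOURS_PER_DAY * 7 * ONPREM_COST_PER_CORE_HOUR

# Cloud: pay only for core-hours actually consumed.
used_core_hours = sum(daily_jobs) * CORE_HOURS_PER_JOB
cloud_weekly = used_core_hours * CLOUD_COST_PER_CORE_HOUR

print(f"on-prem (sized for peak): ${onprem_weekly:.2f}/week")
print(f"elastic cloud:            ${cloud_weekly:.2f}/week")
```

Even with a higher per-hour price, the elastic option wins here because the peak is so far above the average; with a flat workload the comparison flips, which is one reason on-prem data centers persist.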
EDA has a lot of tools, and they require many types of processes. “Within the tools, there is not just one type of compute, and managing those tools is complex,” Mehndiratta said. “There are careers made around this. There are job reqs posted around EDA tool management.”
Other dynamics in the EDA-on-cloud calculus include the hardware technology refresh cycle, along with access to the right type of hardware while existing on-premises machines are still being depreciated.
Imperas said it is flexible as to where its tools are used. Its simulation technology has been in active use by customers with off-premises cloud-based systems, both public and private, for a number of years, said Simon Davidmann, CEO of Imperas. One example involves Imperas’ work with OpenHW on RISC-V verification, which includes a regression test framework set up on the Metrics Google cloud-based environment.
“This is a solution that works well with the open-source projects with many member-contributors,” Davidmann said. “Not all accelerators are equal, and given the wide range of application and targeted datasets, the debate on the merits to keep a data center private is not over yet. In fact, the data center accelerator market is fast becoming a target for new designs and innovation, as we see in the growing number of customers focused in this space.”
For most engineers, the software they need runs quite well on AWS or Azure, according to Rupert Baines, chief marketing officer at Codasip. “Even for the AI people, you can spin up instances, you can run GPT, you can run YOLO, you can run all of those things to your heart’s content, and spin them up and down in Docker containers. Even a Netflix or an Airbnb sees no point in running its own data center. In the EDA world, unfortunately, not all of the software can do that. And legally, architecturally, performance-wise, there are still too many niggles about it, which is a consequence of the business models of the EDA companies. And it’s a bit of a market failure, because really what you want to do is spin up a simulation run where you have a terabyte of RAM, and AWS will happily sell you a server with a terabyte of RAM. It costs a fortune, but it’s cheaper than buying it yourself. Then, when you’ve finished simulation, you wind it down. That’s the way it should be, and that’s the way it will be once we get through the niggles about pricing models.”
Ketan Joshi, business development group director for cloud at Cadence, agreed that each situation is very different in terms of design needs, the type of process node being used, and the type of functionality being used for IP. “Most users do know the equation: calculate the server costs, storage cost, network and security or data center cost, and the cost of keeping some spare hardware. What’s missing from the equation are time to market and engineering productivity, and these are big ones. Many times IT organizations don’t take that into the equation, and it’s a very important factor. When it comes to time to market, you may have a certain class of machines that may not be most optimal for the latest chip or the system that you’re designing, whereas those latest machines are available in the cloud. If you were to go to cloud and the scale allowed you to speed up your verification or your sign-off and your implementation by even 10%, that is millions of dollars in terms of the cost savings and the time-to-market window.”
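The time-to-market side of that equation reduces to simple arithmetic. A minimal sketch, with every figure (schedule length, revenue at stake, speedup) assumed purely for illustration:

```python
# Toy time-to-market calculation: a modest schedule speedup can
# dwarf the raw compute bill. All figures are hypothetical.

def schedule_value(revenue_per_week, weeks_schedule, speedup_fraction):
    """Value of shipping earlier, assuming each week of schedule
    saved captures revenue_per_week of the market window."""
    weeks_saved = weeks_schedule * speedup_fraction
    return weeks_saved * revenue_per_week

# Assumed: 52-week schedule, $2M/week of revenue at stake, and a
# 10% speedup of verification/sign-off through cloud scale-out.
value = schedule_value(revenue_per_week=2_000_000,
                       weeks_schedule=52,
                       speedup_fraction=0.10)
print(f"value of 10% schedule compression: ${value:,.0f}")
```

Under these assumptions the 10% speedup is worth roughly $10M, which is why a cloud compute bill that looks expensive in isolation can still come out ahead once the schedule term is included.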
Depending on the end application, this could run into billions of dollars in lost opportunity. “If instead of the 100 machines you have on-prem, with cloud you now have thousands, could you speed up your design, and what would that be worth? That’s one aspect to discuss,” Joshi said. “And what about engineering productivity? When you have a lot more machines, your engineers are going to be able to explore more design space, as in, ‘If I were to implement this architecture in this different way, what would the implications be?’ If you have more compute available, in the same amount of time you can look at multiple alternatives. And because design innovation tends to be a core tenet for any company’s success, if you can explore more innovation, that’s a huge win to you.”
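The productivity argument is likewise throughput arithmetic: with a fixed deadline and independent implementation runs, the number of architecture alternatives a team can evaluate scales with the machines available. A sketch, with all run times and pool sizes assumed for illustration:

```python
# How many design alternatives can be evaluated before a deadline,
# given a pool of machines? All numbers are hypothetical.

def alternatives_explored(machines, deadline_hours, hours_per_run):
    """Each alternative needs one full implementation/analysis run.
    Runs are independent, so they parallelize across machines, and
    each machine fits deadline_hours // hours_per_run runs in time."""
    runs_per_machine = deadline_hours // hours_per_run
    return machines * runs_per_machine

# Same 48-hour deadline, 12-hour runs: 100 on-prem machines vs.
# 1,000 cloud machines covers 10x the design space.
onprem = alternatives_explored(machines=100, deadline_hours=48, hours_per_run=12)
cloud = alternatives_explored(machines=1000, deadline_hours=48, hours_per_run=12)
print(onprem, cloud)
```

The model ignores license limits and job-scheduling overhead, which in practice cap how far this linear scaling holds.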
Additionally, there is an aspect of design confidence, he said, because there are so many scenarios to validate in terms of functionality. “Verification is a problem that’s never done. If you had more resources available through the cloud, you could get your design confidence to a higher level and avoid potential re-spins.”
Letting go
There’s a psychological aspect of putting EDA completely in the cloud, as well.
“Do you trust that the infrastructure is secure when you’re using it? There’s a tendency within the semiconductor world to want to control all aspects of the solution,” Siemens’ Johnson noted. “Semiconductor engineers are so technical, among the most brilliant people in science. We tend as an industry to want to put everything together in a way that we designed it, even though it may not always be the most elegant and complete way to solve it. So there’s an element of getting comfortable with methods that others can provide, but that we want to do ourselves. That applies to data, and it applies to the flows and processes for development.”
EDA use cases are extremely complex, and they are implemented differently at each customer. There’s not just one way to design a chip, or one set of tools to use. As a result, there isn’t a single path for every customer. This may be one reason for the lack of cloud adoption until now.
But things are changing. “Cloud vendors have hired resources and teams that are more semi-focused,” Johnson said. “They’ve hired people out of semi companies and out of EDA companies. They’re savvier about that now. Add to that, the big cloud companies are designing their own chips now, and as they do they’re starting to use their own cloud infrastructure, and then bump into all of the things that EDA customers bump into. Longer term, clouds are going to be the new compute model. But it’s a journey to get there, and it will happen in fits and starts as it makes sense for EDA users.”
As to when EDA will be completely in the cloud, that’s not entirely clear. “First of all, the big EDA companies have offered cloud-based tools for a long time, and it hasn’t been that popular,” said Walden C. Rhines, president and CEO of Cornami (and chairman emeritus of Siemens EDA). “One reason early on was that people had an expectation that if you bought it in the cloud, you could buy it by the hour, by the day, or by the week, and the established EDA companies want to make a transition to the cloud without compromising their revenue. Their pricing models were such that just by going to the cloud, you weren’t going to save a lot of money compared to running it on your own servers. You would save money not buying servers, but the cost of the software didn’t change a lot. And I’m still speculating that the major EDA companies will continue that path. And I hear debates when Joe Costello gets on a panel and tries to convince everybody that they need to go to his cloud-based tools, and he says you’re paying too much and you need to be able to buy by the day and by the hour, and that gets very little response from the major EDA companies. I’ve heard him make the argument that it’s cheaper and more flexible, but I’ve never heard the argument that says the EDA industry is not providing me something I need that could get me to market more quickly.”
Cadence’s Joshi added that if someone had a crystal ball and told him in five years everybody will be doing their chip design in cloud, then EDA companies could just focus on that. “But that’s not going to happen,” he said. “There is going to be a spectrum for a while, and that’s why we will continue to focus on using more cloud technologies to have as much parallelization in the algorithms to make it successful.”
Connected with this, Joshi pointed to the intersection of cloud and the use of AI and ML. “New EDA tools are coming to market that are highly ML-driven, and they scale really well with cloud because when you’re looking at AI or ML, you are looking at exploring the design space in a meaningful fashion. As more EDA users look at that and see they need to try 10 different scenarios, they see that’s a lot of on-prem compute needed. That’s where cloud comes in. Matching and enabling AI/ML capabilities with the scalability of cloud is an innovation that’s going to change how the industry designs.”