The Race For Better Computational Software

Cadence’s president, Anirudh Devgan, looks at big shifts toward hardware-driven verification and simulation and the impact of AI on the whole design process.


Anirudh Devgan, president of Cadence, sat down with Semiconductor Engineering to talk about computational software, why it’s so critical at the edge and in AI systems, and where the big changes are across the semiconductor industry. What follows are excerpts of that conversation.

SE: There is no consistent approach to how data will be processed at the edge, in part because there is no consistent vision of what the edge will look like. What’s your view on this?

Devgan: That’s a very good area for new product development. There are all these AI accelerators, but these are basically matrix multiply/accumulate types of things. There are a lot of them, too. There are something like 50 companies developing them. But the key thing is the software part of this. Right now, a lot of companies are doing this themselves. It’s not clear if that will work, or whether you need some framework for all of them. TensorFlow does some of that, but it’s not enough because you need data management. So there’s a need for an edge/ML data framework. Some of the big guys are trying to do that, and so are some of the startups, but right now there is no good solution.
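
The multiply/accumulate (MAC) primitive Devgan refers to is the core operation these accelerators implement in hardware. A minimal illustrative sketch in Python (a toy model, not any vendor's design):

```python
def matmul_mac(a, b):
    """Naive matrix multiply expressed as repeated multiply/accumulate
    steps -- the primitive AI accelerators build in silicon."""
    rows, inner, cols = len(a), len(b), len(b[0])
    out = [[0.0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            acc = 0.0
            for p in range(inner):
                acc += a[i][p] * b[p][j]  # one MAC operation
            out[i][j] = acc
    return out
```

For example, `matmul_mac([[1, 2]], [[3], [4]])` performs two MACs and returns `[[11.0]]`; accelerators differ mainly in how many of these MACs they run in parallel and at what precision.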

SE: We’re certainly seeing some of that with Google Home and Amazon’s Alexa, but a lot of people want to keep their data local.

Devgan: We need some type of enterprise software to do that, and right now there is a market for that. But it’s not clear yet what will be on-premise and off-premise. A lot of our customers are pushing into the system level and the AI space. The main strength we have at Cadence is computational software—doing all of this numerical analysis. That’s what EDA has done for years. We want to extend that into the system and AI space. This is enterprise-class numerical software. The interesting thing in this space is AI is inherently computation.

SE: How does this fit into Cadence’s overall focus?

Devgan: We have three main areas. The core business is EDA and IP. The second area is system innovation. The third area is pervasive intelligence, which is another name for the edge.

SE: Is this concentric circles, or three areas with overlap?

Devgan: There is overlap between them. EDA is part of system, which is part of intelligence. EDA has to merge with the system and the AI world. If you look at the system market, technical software is about $50 billion or so. That includes PLM and other areas like embedded software. The computational part is system analysis, which is about $5 billion to $6 billion. We have a big investment in system analysis and Green Hills for embedded software. For the third part, AI and intelligence, the computational part is software. There has to be what is basically commercialized Android software. All these companies cannot write this software. We are interested in a hardware/software platform for the edge. I haven’t found any good ones yet.

SE: Is that even possible until you have some consistency? Everything is in flux, and it seems like everyone is coming up with a new architecture.

Devgan: The number of architectures will have to consolidate. Otherwise this cannot be deployed. Right now the market is changing too rapidly for us to get involved, so we are just observing today. But over time, this has to be solved.

SE: Do people start developing to a hardware platform, because that’s the least expensive option, or does everyone continue with custom development? In the past, you had an Intel chip and a Microsoft operating system, and everything had to work with that.

Devgan: It’s not just the hardware part. The software that needs to be developed is extremely expensive. This whole move to sparse calculations is well known in mathematics. Neural networks cannot be dense, but the initial implementations actually are dense. Making them sparse can provide a huge improvement just on the software side. But there has to be a good software stack, and that has to be shared.
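
The saving from sparsity that Devgan describes comes from skipping zero weights entirely. A minimal sketch in plain Python (illustrative names and format, not a production sparse library) of a dense layer versus the same layer in a sparse representation:

```python
def dense_matvec(weights, x):
    """Dense layer: every weight participates, zero or not."""
    return [sum(w * xi for w, xi in zip(row, x)) for row in weights]

def to_sparse(weights):
    """Keep only (column, value) pairs for nonzero weights in each row."""
    return [[(j, w) for j, w in enumerate(row) if w != 0.0] for row in weights]

def sparse_matvec(sparse_rows, x):
    """Sparse layer: arithmetic only on the surviving weights."""
    return [sum(w * x[j] for j, w in row) for row in sparse_rows]

W = [[0.0, 2.0, 0.0],
     [1.0, 0.0, 0.0]]
x = [3.0, 4.0, 5.0]
assert dense_matvec(W, x) == sparse_matvec(to_sparse(W), x)  # same answer, far fewer MACs
```

Here the dense version does six multiplies while the sparse version does two; at the sparsity levels typical of pruned networks, that gap is the "huge improvement just on the software side."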

SE: Another side of this is how much precision do you need? That can vary, depending upon who’s using it and when.

Devgan: All of that is in the software. In the long run, the flexibility of hardware will win out. That’s Nvidia’s point. The GPU has a lot of flexibility. So you go CPU, GPU, and then more and more specific hardware. But if you go too specific you cannot do the software innovation. So there is a balance there.
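
The precision trade-off can be made concrete with a toy symmetric int8 quantization sketch (illustrative only; real deployments use calibrated, per-channel schemes):

```python
def quantize_int8(values):
    """Map floats into [-127, 127] integers with a single scale factor."""
    scale = max(abs(v) for v in values) / 127.0
    return [round(v / scale) for v in values], scale

def dequantize(quantized, scale):
    """Recover approximate floats from the int8 codes."""
    return [q * scale for q in quantized]

vals = [0.5, -1.0, 0.25]
q, s = quantize_int8(vals)
approx = dequantize(q, s)
# approx is close to vals, but each weight now fits in 8 bits instead of 32
```

Whether that approximation error is acceptable depends on the workload, which is why precision choices live in software rather than being fixed in the hardware.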

SE: There is also a move toward different ways of achieving the same benefits as hardware density through things like pattern recognition rather than individual bits, and reading memories in multiple directions. How does this affect tools?

Devgan: There’s a resurgence of packaging, whether that’s 2.5D or 3D. You see that with memory. Going back five or six years ago, everyone was talking about 3D-IC. Then it settled down, and now it is exploding. There are almost too many variants. We are trying to support them, and we are participating in the ‘More than Moore’ movement with our tools. We have a big partnership with DARPA. The Defense Department does a lot of chips, but they do a lot of packaging, too, under the Electronics Resurgence Initiative. There is room for improvement in the whole PCB/packaging area because there are too many variants. These are big boards, and there are a few hundred components. That’s a complicated PCB, and a lot of people still do those by hand. But you can apply a lot more algorithms to that. We can place up to 100 million objects. Based on that, we have a new initiative using ML to accelerate packaging and 3D.

SE: So you’re trying to figure out what needs to get accelerated and what doesn’t?

Devgan: Yes, and it’s like an AlphaGo kind of algorithm. The brute force would be a genetic algorithm or simulated annealing. To me, 2.5D and 3D will grow, but we need to provide more automation. Otherwise it’s too complicated. And then, you need to analyze all of this. We have Allegro to simplify the layout and design, and Clarity to simulate all of this. The simulation takes forever. With 3D memory, it takes a month to simulate.
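
The "brute force" simulated annealing he mentions can be sketched in a few lines: a toy placer that swaps two components along a line to shorten total wirelength, accepting some bad moves early on to escape local minima. This is a generic textbook formulation, not Cadence's algorithm, and all names here are hypothetical.

```python
import math
import random

def wirelength(order, nets):
    """Total span of each net, given component positions along a line."""
    pos = {c: i for i, c in enumerate(order)}
    return sum(max(pos[c] for c in net) - min(pos[c] for c in net) for net in nets)

def anneal_placement(components, nets, steps=5000, t0=5.0, seed=0):
    """Simulated-annealing placement: random swaps under a cooling schedule."""
    rng = random.Random(seed)
    order = list(components)
    cost = wirelength(order, nets)
    best, best_cost = list(order), cost
    for step in range(steps):
        t = t0 * (1 - step / steps) + 1e-9  # linear cooling toward zero
        i, j = rng.sample(range(len(order)), 2)
        order[i], order[j] = order[j], order[i]
        new_cost = wirelength(order, nets)
        # Accept improvements always; accept worsenings with Boltzmann probability.
        if new_cost <= cost or rng.random() < math.exp((cost - new_cost) / t):
            cost = new_cost
            if cost < best_cost:
                best, best_cost = list(order), cost
        else:
            order[i], order[j] = order[j], order[i]  # undo the swap
    return best, best_cost
```

Real placers handle two dimensions, timing, and congestion, which is why brute force alone does not scale and why ML-guided search is attractive.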

SE: Are you seeing any push toward silicon photonics?

Devgan: Yes. In Palladium (emulation) and Protium (FPGA-based prototyping), we have optical interconnects. These are data-center-class machines, and optical is necessary. The question now is how you design all of this. We do have partnerships to design optical components with some of the startups in this space. And then, the big foundries also want to do optical design.

SE: Is this in-package, or chip-to-chip?

Devgan: It’s from the chip, through the package, with a connection back to the package. The drivers are on-chip.

SE: Swapping topics, what’s happening with open source EDA? Is that ever going to take off?

Devgan: I don’t think so. This is so complicated and there is so much invested in the current technology. DARPA has an effort in that, but it’s difficult. Wall Street asks us why we have such a big R&D budget. It’s about 35% to 40% [of revenue]. But if you look at advanced packaging, that requires 50% to 60% of your software. Everyone thinks what they do is complicated, and we’ve heard that with software, but what we do in EDA is even more complicated. And it changes all the time. Especially on the software side, it’s very difficult.

SE: How is China impacting your business?

Devgan: We’re watching this closely. We have a very broad customer base, so that helps.

SE: What else is new for you?

Devgan: System simulation and analysis. We estimate that market is about $4.5 billion to $5 billion of extra TAM, versus the current EDA and IP, which is about $10 billion. Embedded software is about $3 billion to $4 billion. Simulation and analysis can provide a lot of growth for Cadence. The exciting thing for me is it’s simulation. And it’s not limited to just chips. In the end, you get to simulate the human body. The demand for simulation is going to increase, no matter what market you’re in. That’s good for several reasons. First, it’s synergistic because it’s computational. Second, it’s a growing area. Third, it’s a very profitable business. So if you look at simulation in EDA, it’s very profitable. But simulation outside of EDA is very profitable, too. Those are good characteristics. It’s a big market that’s close to EDA and it’s a profitable market.

SE: We’ve been hearing about a lot more respins, despite better simulations, and an increasing emphasis on reliability in automotive and eventually medical.

Devgan: That’s a matter of how good the simulation is. Today you have to break this up into pieces to simulate. If you can simulate all of them together, that will help. Today it can take up to 30 days to simulate a system, which limits how much you can optimize that. If you can reduce that to overnight or one day, that has a big impact. And on the functional side, there’s emulation. So there’s electrical verification, and we launched new tools to simulate those. And for numerical simulation, you want to accelerate them in hardware. On the logical and software side, there is emulation and prototyping. A lot more people are using those tools than in the past.

SE: Is this a function of new nodes or different architectures, or both?

Devgan: It’s both. Recently I went to Japan, and as you know there are a lot of automotive companies there. But they used very little hardware in the past. It doesn’t have to be 5nm. It could be 65nm or 28nm. But they do use hardware for software bring-up, which is critical for all of these things. And now they’re using more and more emulation, and that’s very good for RTL bring-up. But then they have to give their models to the software guys to write the software, and they want to do the software development earlier. They don’t need all the debug in software. They just need a faster system. In the old days, you would do a design for 18 months to 2 years, depending on the size, and then you would spend a year doing the software bring-up. Now they want to do it earlier. If you don’t do hardware emulation and software prototyping, it’s impossible to get it right the first time. There are a lot of scenarios in automotive chips.

SE: So what you’re looking at is more design in context?

Devgan: Exactly. You want to see how the software will behave in a device like a phone or a car. And the same is true for AI chips. Let’s say the chip tapes out and the model comes later, but the software model could be available before that. That also can be given to their customers as a reference, so the customers can see what it’s going to look like.

SE: How about in-chip analytics?

Devgan: That’s a big area, especially for verification. And there’s the whole concept of lifecycle management with that. If you do a design and you get some PPA, that’s great, but you can run tools for another year and still not catch all the bugs. Adding analytics and ML into verification is a big opportunity for us. Some customers are doing it themselves, because it’s such a big problem. The design itself is a big data problem, and verification is exponential complexity. And then there’s the whole question of what happens in the field.

SE: So how do you describe Cadence these days? Is it an EDA company or something else?

Devgan: We are a computational software company.
