New Uses For Emulation

Experts at the table, part 2: Is emulation cannibalizing other products? Plus, who’s buying emulation and for what purposes, and why companies are investing more heavily in this technology.

Semiconductor Engineering sat down to discuss the changing emulation landscape with Jim Kenney, director of marketing for emulation at Mentor Graphics; Tom Borgstrom, director of the verification group at Synopsys; Frank Schirrmeister, group director of product marketing for the System Development Suite at Cadence; Gary Smith, chief analyst at Gary Smith EDA; and Lauro Rizzatti, a verification expert. What follows are excerpts of that conversation.

SE: Emulation is a lot more expensive than other tools. How is that affecting how and which tools are bought?

Borgstrom: Economics play an important part of this discussion. Verification engineers and SoC developers want as much performance as they can get. That’s really a function of the budget they have available for buying tools. Our customers would be happy if they could have as many emulation seats as they would like, but they never seem to be able to do that. It’s not only the acquisition cost but also the operating cost of emulation that plays an important role in when to use emulation versus other platforms. As new emulators come on line with lower operating costs and lower acquisition costs, it’s possible now for project teams to invest in their own emulation capacity. They don’t have to build up a special IT center. They can keep it in the existing compute center they have and scale that throughput as their chips get bigger.

Kenney: As I look back at one of our larger customers, they started out having to justify their purchase with, ‘It runs 100 times faster than a simulator.’ That was a big financial decision. Now that they’ve decided to use emulation on all their projects, it’s become, ‘We have more projects and we need more emulators.’ They don’t always get what they want. But the ones that use emulators want more and more.

Schirrmeister: There’s an addictiveness to it. Once you’ve been on it, you don’t want to get off it. We call it a verification computing platform because it’s part of a verification continuum. Users make decisions about how many simulators they need and how many emulators they need—and how many they will need later for FPGA-based prototyping. That continuum is very important.

Smith: There are still companies that go through the capital expense decision. It’s not as if that’s changed.

Borgstrom: It’s a capital expense, but it can be driven by project teams as well as central organizations.

Rizzatti: If you had done an ROI calculation a decade ago, emulation would have been a significant percentage of the verification cost. Today, with the escalating cost of designing chips, one respin is $30 million down the drain. Emulation is not as big an expense anymore. In absolute dollars, it’s still $1 million to $3 million, but in relative terms that isn’t much.

Schirrmeister: The user wants it right now, as fast as possible, and as accurate as possible. Emulation is at a sweet spot. It’s a very good compromise: it gives you the model early, even though the RTL is still evolving, at full accuracy, with reasonable speed, and it’s something a software developer won’t run away from. But you also don’t want to be the one who has to tell the project team that a problem could have been handled on a virtual platform, using a model that is more affordable to run as a simulated environment. There are certain things you can do on different engines with different speed and accuracy, but you need to be very conscious of which questions you’re asking. You may need an RTL-accurate model, and emulation is great for that. If you’re just getting the functionality of some software right, which is not yet timing dependent, the RTL representation would be a waste. It’s really a question of not making a mistake. If a problem comes back and it’s traced to a higher-level model, that’s the last time you use one of those.

Rizzatti: It’s true for mainstream designs. But when your design exceeds 100 million or 200 million gates, I don’t believe FPGA prototyping is even possible. As you go up in complexity and size, emulation is the tool. Nothing else works.

Schirrmeister: You would not have the same design that you map into everything because you make adjustments as you go along. But emulation is still at the center of things.

Smith: Rapid prototyping falls apart at 49 million gates. It’s the design mapping that kills you.

Schirrmeister: Emulation is also not scaling that easily. Fifty million gates is 10 chassis, and how many cables connect those? The scalability comes at a price. Speed goes down. That’s why being conscious of what you’re doing in the various engines is so important.

SE: Is emulation starting to cannibalize other tools?

Rizzatti: Simulation is about $500 million. Emulation will become $500 million very soon, but I don’t think RTL simulation will continue to grow.

Kenney: The market for simulation is reasonably flat, but it’s not shrinking.

Smith: It’s growing 1.3% and it’s at $640 million.

Kenney: But it’s not a matter of customers forgoing simulators so they can buy emulators.

Borgstrom: Simulation continues to find the vast majority of hardware bugs in SoC designs, and simulation will continue to be part of the verification process. But the verification problem is so huge that it’s all additive. While simulation continues to grow, emulation is growing faster. FPGA prototyping and virtual prototyping are also in the mix.

Smith: Emulation is growing 25%, and it has been for quite a while. At that rate, in 2017, it will be $968 million.

SE: What’s interesting, though, is the growth isn’t just in verification. It’s coming from other areas.

Schirrmeister: That’s right. But verification, by itself, has always been additive. You always buy more, and you do things you were not able to do before—specifically, post-silicon verification that you are now able to do earlier. Customers are doing things they used to do post-silicon with an emulation hybrid approach much earlier. And they’re doing things that couldn’t be done before. Verification is never complete. You have done a lot of things to get your comfort level to a certain point, but you could do more. People continue to simulate, do FPGA prototyping, and emulate throughout the production phase because you always can do more.

SE: But are they doing more with an emulator that could be done with other tools?

Kenney: I have an example of that. One customer who needed a 100X improvement in speed over simulation had two of our boxes. They weren’t really buying a lot more until the FPGA prototype team came along and said they were going to be six months late on the next project, and there was a risk they would not be ready at all. They now own 10 times as much emulation as they owned before, because they needed to make sure they ran the whole chip and they weren’t sure they could do it with FPGA prototyping.

Smith: My first experience with emulation was in 1989. I was at LSI Logic. We had two tough customers. SGI called me over to show me something. It was a pie box. He started telling me what it could do. The entire time I was at LSI Logic, Sun only did one respin. Their tapeouts were right on. SGI typically spun five to six times. They did a lot of graphics work and they couldn’t simulate it, so they did a lot of silicon to get it to work. Now they replaced their respins with a pie box. Lauro was talking about the $10 million to $30 million in respins due to mask costs. That’s now going into emulation boxes. They’re replacing respins.

Schirrmeister: Yes, so they’re going for first-time silicon.

Smith: Yes.

Borgstrom: I was talking with a customer that recently adopted one of our systems. Getting 2 to 4 MHz performance on an emulator opened up different types of testing that otherwise wouldn’t be possible. What would have taken a week on another system they can do in less than a day. That enables them to do things they haven’t done before. That’s where this whole verification aspect is additive. They keep doing more and more.

Schirrmeister: What are you verifying? Is it a piece of IP, the subsystem, or the full chip at its integration? We see a lot of people using simulation for IP and subsystems, and it’s fine. There are methodologies like UVM. But they don’t scale to the full-chip level. So now you have 100 IP blocks, 5 different subsystems, lots of processors, and you need to figure out how to sequence all of that and make sure they talk to each other in the right order with the right dependencies. You’d have a hard time doing that in pure simulation. That’s why we’re always competing on nominal top capacity. But at the end of the day it’s the full chip that runs in emulation with the full system environment. The whole post-silicon aspect is always around the chip and its environment. Now you virtualize the environment. It’s a full-chip thing versus a subsystem and IP thing.

To view part one of this roundtable, click here.


