Calculating Emulation’s Complex Cost Of Ownership

As simulation runs out of steam for large SoCs and their software, the high cost of emulation, long set-up time and power and IT requirements are coming under scrutiny.


By Ann Steffora Mutschler
Hardware emulation, or hardware-assisted verification, whichever term you prefer, has been around for decades. But until recently it saw only modest adoption due to high cost, long set-up time, and power and IT requirements, among other things.

But with simulation running out of steam somewhere between 50 million and 100 million gates, this specialized hardware makes a good case, at least technically. From a business standpoint, the cost-of-ownership tradeoffs are much harder to sort out.

Interestingly, some large semiconductor companies have been saying for years that they believe simulation will slowly disappear and be replaced by emulation, according to Lauro Rizzatti, senior marketing director of emulation at Synopsys, who recently joined the company through the EVE acquisition.

“I think this is an extreme view,” he said. “I don’t entirely buy into it, because simulation still gives you immense flexibility. There is no issue with set-up time, which is a big issue with emulation. Once you have your RTL code, you push the button and minutes later you simulate. The same in emulation will take one week, two weeks, three weeks, so it’s a big difference. And then if you port it back in, you have the maximum what-if analysis capabilities. You can do what you want; it’s a software program, so the flexibility is all there. In hardware [emulation] today, all three vendors have reached a point where debugging is excellent, but it is still not at the same level as simulation. For these two reasons simulation will continue to exist for a long time, but it certainly will no longer take center stage when you are verifying the entire design, the entire SoC, and it’s not even applicable at any level of embedded software. It would be extremely slow for that. So software alone is killing simulation, and big designs are slowing down performance to the point where you have to limit what you are doing. In the early stages, block-level verification and short runs at full-chip level, simulation will continue.”

Mentor Graphics put together examples comparing the cost of running a simulation farm, including buying the workstations, the simulation licenses, power and cooling, against running an emulator that executes the same number of verification cycles, according to Jim Kenney, director of marketing for the emulation division at Mentor Graphics. “Pretty quickly the simulation farm becomes more expensive. It doesn’t mean that people shouldn’t use them, because you get away from cost of ownership and start getting toward what they are really trying to do.”
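The comparison Kenney describes can be sketched as back-of-envelope arithmetic: total spend divided by total verification cycles delivered over the hardware’s lifetime. A minimal sketch follows; every figure in it is a hypothetical placeholder for illustration, not a vendor or Mentor Graphics number.

```python
# Back-of-envelope cost-per-cycle comparison of a simulation farm against an
# emulator delivering verification cycles. All figures are hypothetical
# placeholders, not vendor data.

def cost_per_cycle(capital, annual_opex, years, cycles_per_sec, utilization):
    """Total cost divided by total verification cycles delivered."""
    total_cost = capital + annual_opex * years
    active_seconds = years * 365 * 24 * 3600 * utilization
    return total_cost / (cycles_per_sec * active_seconds)

# Hypothetical simulation farm: 100 seats, each around 1 kHz on a large design.
sim_cost = cost_per_cycle(capital=2_000_000,     # workstations + licenses
                          annual_opex=400_000,   # power, cooling, IT
                          years=3,
                          cycles_per_sec=100 * 1_000,
                          utilization=0.5)

# Hypothetical emulator: one box running on the order of 1 MHz.
emu_cost = cost_per_cycle(capital=5_000_000,
                          annual_opex=300_000,
                          years=3,
                          cycles_per_sec=1_000_000,
                          utilization=0.5)

print(f"simulation farm: ${sim_cost:.2e}/cycle, emulator: ${emu_cost:.2e}/cycle")
```

With these placeholder numbers the emulator’s higher price is swamped by its throughput advantage, which is the shape of the argument, even if real figures shift the crossover point.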

Engineering teams are performing block-level verification in simulation, but when it comes to running the full chip, including software, the simulator limits how much can be run. And, he pointed out, almost anything that qualifies as an SoC carries a significant amount of software. “In fact, there’s hardly any single-processor SoCs out there. They’re all multicore, and ARM is pushing the modeling in multicore.” Simulation can’t handle this kind of complexity.

“What we’re dealing with [on our emulator] is people trying to run code on their chips,” Kenney said. “They’ve got some new hardware in there, like a GPU, which seems to be pretty popular nowadays. Then they have the device driver, which for GPUs is very complex. They want to run these together. But before you can run the device driver, you have to boot a real-time operating system (RTOS), then the driver runs in conjunction with that. To get to the minimum level of verification where you want to run this device driver against the new hardware, you have to run a lot of software. You’re very quickly out of the realm of simulation. If you want to write a SystemVerilog testbench and check all the registers in your GPU, or build up a single frame for it to render and pass that in via the testbench, that’s all great for simulation. If you want to do a lot of frames, it’s going to take a long time. If you want to run your device driver against the GPU, you’ve got to boot the RTOS, and now you’re not using a simulator anymore.”
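Why booting an RTOS pushes you out of simulation comes down to simple arithmetic on cycle counts. The sketch below assumes a boot of roughly one billion cycles and illustrative engine speeds (full-SoC RTL simulation in the hundreds of hertz, emulation on the order of 1 MHz); both figures are assumptions for illustration, not measurements.

```python
# Rough wall-clock time to boot an RTOS at different engine speeds.
# boot_cycles and the speeds are hypothetical round numbers.

boot_cycles = 1_000_000_000

speeds_hz = {
    "RTL simulation": 100,     # full-SoC simulation: tens to hundreds of Hz
    "emulation": 1_000_000,    # on the order of 1 MHz
}

# Convert cycles at each speed into hours of wall-clock time.
hours = {engine: boot_cycles / hz / 3600 for engine, hz in speeds_hz.items()}
for engine, h in hours.items():
    print(f"{engine}: {h:,.1f} hours")
```

At these assumed speeds the boot takes months in simulation versus minutes in emulation, which is the gap Kenney is pointing at.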

To compute the cost of ownership for emulation, raw verification cycles can be compared, but intent matters too, he noted. “What are you really trying to do? There are things that are practical on the emulator that are just impractical in simulation,” he said.

Frank Schirrmeister, group director for product marketing for system development at Cadence, said: “There is no universal answer; it depends very specifically on each customer. Partly it depends on the application. But if you think about the total cost of ownership, you first need to define what cost of ownership means for that customer.”

First there’s the cost of acquisition. Then there’s the operating cost, including floor space, electricity for the box itself, cooling and backup power. On top of that come bring-up time; testing to get the machine up and running; downtime; outage and failure expenses; upgrade costs; the IT team; networking; backup; and maintenance. Also included is some management overhead for running the machine as a shared resource; otherwise its use is limited to a single project.

From a usage perspective, cost of ownership mainly comes down to power consumption, compile time, design granularity and the number of users working in parallel, he noted.

“It is a non-trivial equation to go through,” Schirrmeister said, “and very dependent on the customer’s design and the customer’s needs. If it’s just for one project, then granularity doesn’t play a big role. But if it’s lots of users in parallel, then granularity, access and the turnaround time to map designs quickly become very important.”
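Schirrmeister’s breakdown can be collected into a small model: acquisition plus recurring operating costs, amortized over lifetime and parallel users. The sketch below is one way to organize those terms; the field names and all the sample figures are hypothetical, not Cadence data.

```python
from dataclasses import dataclass

# A sketch of the cost-of-ownership components described above: acquisition
# plus recurring operating costs, amortized over years and parallel users.
# Field names and all figures are hypothetical.

@dataclass
class EmulatorTCO:
    acquisition: float          # purchase price
    bring_up: float             # one-time setup and testing
    floor_space: float          # per year
    electricity: float          # per year: box, cooling, backup power
    it_and_maintenance: float   # per year: IT team, network, backup, upgrades
    shared_mgmt: float          # per year: managing it as a shared resource
    years: int = 3
    parallel_users: int = 1

    def total(self) -> float:
        annual = (self.floor_space + self.electricity +
                  self.it_and_maintenance + self.shared_mgmt)
        return self.acquisition + self.bring_up + annual * self.years

    def per_user_year(self) -> float:
        return self.total() / (self.years * self.parallel_users)

# Illustrative numbers only: sharing the box across 8 users cuts the
# per-user cost eightfold, which is why granularity and access matter.
tco = EmulatorTCO(acquisition=5_000_000, bring_up=100_000,
                  floor_space=50_000, electricity=200_000,
                  it_and_maintenance=150_000, shared_mgmt=50_000,
                  years=3, parallel_users=8)
print(f"total: ${tco.total():,.0f}, per user-year: ${tco.per_user_year():,.0f}")
```

The `parallel_users` divisor is the simplest way to capture his point that granularity hardly matters for a single project but dominates once the machine is a shared resource.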

The bottom line is that if you have a lot of verification to do, the emulator is going to be a lot less expensive, Kenney asserted. “The thing the simulation farm does for you is support many, many users. If you have a whole bunch of people working on blocks of a reasonable size for simulation, where you get decent performance and each of them can run 10 jobs at a time, you get a much bigger number of users on the simulation farm. But if we’re talking about the heavier lifting of full-SoC verification with software, then the emulator is much more cost-effective.”

He concluded that both have their place. “That’s why people buy them both and use them both. But if you’re looking at doing a lot of verification on something that can’t be broken up into lots of pieces, a single, complex device including software, the emulator is much more practical and also much more cost-effective.”

Schirrmeister agreed, but took it one step further: “We see a continuum of engines being required, because each of the four engines—simulation, acceleration, emulation and FPGA-based prototyping—has its reasons for existence. We don’t think there will be one piece of hardware doing it all. There will be a continuum of engines throughout the design flow.”

Additional resources:
An article from Freescale on this topic

A Sneak Peek Inside Nvidia’s Emulation Lab

Webcast: “Accelerate Time-to-Tapeout with IC Compiler Custom Co-Design”

Article on mixed-signal SoC verification

Article on using wreal model for MS verification.
