Experts at the table, part 1: What’s driving the changes, who’s using it now and for what, and can emulation become a $1 billion market?
Semiconductor Engineering sat down to discuss the changing emulation landscape with Jim Kenney, director of marketing for emulation at Mentor Graphics; Tom Borgstrom, director of the verification group at Synopsys; Frank Schirrmeister, group director of product marketing for the System Development Suite at Cadence; Gary Smith, chief analyst at Gary Smith EDA; and Lauro Rizzatti, a verification expert. What follows are excerpts of that conversation.
SE: Emulation is being used by many more parts of a chip company than in the past. Why?
Rizzatti: The first time I saw an emulator was 1995, and I was shocked by what I saw—a huge number of cables in the room. You never knew if the problem was the emulator or the design because the cables kept coming unplugged. Fast forward 20 years and the technology is impressive. There are very few cables, the boxes are smaller, and there is lots of computing power. Today, emulation is used by virtually everybody. It's no longer just the traditional CPU and graphics segments. It's in multimedia, networking and storage. It's mainstream and growing. This is a big change. For 10 years the market didn't move from $130 million or $140 million. Now it's going through the roof. Whether it will become $1 billion in three years I don't know, but it certainly will continue to grow.
Borgstrom: Traditionally, emulation has been used by the SoC verification team. What we've seen in the last few years is the performance of emulators increasing dramatically, making them more and more attractive for software development and for early hardware-software bring-up. The latest emulators have high enough performance to serve the use mode of hybrid emulation very well. That spans everything from architecture validation, where you can use the emulator as a cycle-accurate model before you really get into core RTL development, all the way through hybrid emulation of virtual prototypes for early software development, even when you don't have RTL models for some blocks in your design. When you talk about who's using this, it's not just a question of which chip companies. It's which groups within a chip company, and that's what we see expanding.
Schirrmeister: I used an emulator in 1994 that was an M250. There were cables, but they were very stable, and I was impressed by how my team could do things we could never do before. We were able to find bugs in our MPEG decoder that we had no chance of getting to otherwise, particularly for things like audio-video sync. We have improved since then. Jim Hogan talked about 10 use models for emulation. We've added two more, embedded testbenches and hybrid use for software development. There are two aspects to emulation. First, it is an investment and you want to make full use of it, so the versatility of the use models you support matters, from verification to software to low-power analysis. People are doing gate-level simulation to validate the gate-level netlists. They're doing things like post-silicon validation. It is a valuable resource, and that will drive usage up further. But this whole notion of whether it will be a $1 billion market is uncertain. Even with all those use models, it's hard to see where that money will come from if we're looking only at the existing semiconductor market. So the question is how we can do more systemic things, such as connecting different emulators. But at the end of the day you map RTL into an emulator and you execute it. There will be growth, but I'm not sure whether it will be $1 billion in a few years.
Kenney: I had a customer tell me that if they find a hardware bug in their design with the emulator, that's gravy. That's not their primary goal. They're doing a couple of things. More and more they're doing performance characterization. They want to know how long it takes a router to move a packet from this port to that port. They're also doing post-silicon verification, which means they're preparing for when the silicon comes back. They're getting all their software and their whole test environment validated. So when the chip comes back, they punch a button, and if it doesn't pass, they know it's the chip and not their validation environment, because they've already verified that on the emulator. So the things we're more focused on are performance characterization and system-level performance analysis.
Smith: With more emphasis on multi-platform-based design, more companies are buying a (Qualcomm) Snapdragon platform and hooking up their own applications to that. Once you start doing that, you're into ESL design. [Mentor CEO Wally Rhines] was saying that somebody bought an emulator and he had never heard of the company. That's going to happen more and more as multi-platform-based design becomes the main design flow. Once we move to that, the companies using multi-platform-based design will have lowered their cost of design by 44%. It's a major breakthrough as far as cost of design. What is adding another couple million dollars to your tool costs going to do? That's one of my projects to figure out. If you look at the flow we put together last year, the silicon virtual prototype needs the emulator to talk to the software virtual prototype, and then after that, when they pass it down to do the drivers and the middleware, they need to use the emulators to talk to the chip assembly team. There are two primary uses of this. The first is communication. It doesn't fill out the entire loop because the architect isn't included. Then, once the communication is done, it reverts to verification. But the growth comes out of new customers—new seats.
Schirrmeister: Emulation doesn't stand by itself. It's only one part of the puzzle. There is a link into software prototypes and acceleration of verification with simulation. It's time to redefine the market. What we used to call ESL has been redefined and melded into some new combination of things. It's really everything from TLM through RTL simulation through emulation, plus FPGA-based prototyping. Prototyping is about $350 million to $400 million. EDAC talks about $400 million, with $100 million in FPGA prototyping. So it's very important in the continuum. Post-silicon validation prep is a key use model for emulation, too. So it's really five platforms: virtual prototyping, RTL simulation, emulation, FPGA prototyping, and silicon.
SE: How is this having an impact on chipmakers? Are they using an emulator versus something else, and are they doing it in a different order?
Borgstrom: The lines between these solutions are really blurring. Teams that need to do RTL verification are also bringing up hardware-software stacks, and teams that are doing pure software development need access to early versions of an executable model. The boundaries between one team and the next are really blurring. One thing that’s important as you move up in levels of integration, and down in levels of abstraction, is that you need a path going back for debug. Generally you have to move back to the previous level to do detailed debug. Having a very smooth flow up and down this chain of tools becomes more important.
Smith: What happened was the design team changed with this multi-platform-based design. That called for different tools. Once they got into a different structure, it became obvious they needed emulation. The chip assembly guys needed a whole new set of tools, or a set of tools targeted for their use. Oasys came up as the synthesizer of choice for the chip assembly team, and some other tools really jumped in and have done very well because that's a requirement now. They've reorganized, and the reorganization is pretty obvious. You have the chip assembly guys, and the rest of the guys aren't changing that much from what they were. But they were taping out before, and now they aren't.
Rizzatti: But emulation has been around for 20 years. Before it was really a niche tool used by very few market segments—CPU and graphics. Today everyone is using it, and not just for platforms. Why? Because of complexity in hardware and embedded software. Emulation gives you the power that other tools don’t.
Schirrmeister: There are technical and business reasons for how companies align. The technical reason is what accuracy the user needs to make a certain decision. There is a realization that the approach we tried 15 years ago, abstracting upward to a transactional model, doesn't really work anymore for things like AMBA, ACE, and cache-coherent interconnects in a complex ARM design. There's no way you'll do that at the SystemC level. You need a tool to generate it, and then you need to execute it as fast as possible with as many configurations as possible. Different levels of accuracy are required in the design flow, and the design team chooses based upon what it needs to answer a question. The second reason is a business one. If a design team wants to answer certain questions, what tools can you afford to give them? TLM, RTL simulation, emulation and FPGA prototyping all have different cost factors. Emulation becomes a valuable resource. There are certain things you need emulation for, but you don't need it for everything. That's why the versatility of use models is so important. You need everything from acceleration to pure verification, software bring-up, and post-silicon validation, because you need to re-use it appropriately.
Kenney: They really want to use the appropriate engine at the right time. I don't think it's a battle between emulation and FPGA prototyping and simulation and virtual prototyping. They all have benefits, and design teams would like to have all of them at their disposal, depending on where they are in the design flow. They're asking us not just to make it run, but also to make it look like the other three.
To view part two of this roundtable, click here.