Experts At The Table: SoC Prototyping

Second of three parts: Hybrid approaches; the software developer; error injection; coverage-driven validation.

By Ann Steffora Mutschler
System-Level Design sat down to discuss SoC prototyping with Hillel Miller, pre-silicon verification/emulation manager at Freescale Semiconductor; Frank Schirrmeister, group director, product marketing, system development suite at Cadence; and Mick Posner, director of product marketing at Synopsys. What follows are excerpts of that conversation.

SLD: Is it possible to have out-of-the-box hybrid systems that contain emulation and FPGA/virtual prototyping?
Posner: Hybrid technology, which we launched at DAC last year, is our integration of Virtualizer [virtual prototyping tool] with HAPS [FPGA prototyping system], and that is a hot topic. Quad core was one thing. Single core is where it started, then it went to dual, then to quad, but now it’s gone to 16 quad cores. It’s blown out of proportion, and that’s stressing simulation and emulation, and for FPGA-based prototyping you’re always capacity-limited. Even with our new systems with up to 144 million ASIC gates, physically you’re not going to put 16 quad cores onto that. Can you put two sets of quad cores so at least you can test your multiprocessor software across the multi-quads? Yes, but it just doesn’t scale.
Miller: The problem with that is when the software guys develop the software, they also want to know that it meets performance requirements. So if they can’t put it on the full SoC, they’re not going to know.
Posner: This is where hybrid [technology] is fitting in. We have customers who are modeling these multi-quad-core systems, but the heavy multiprocessor part is modeled in the virtual world. You keep your real-world interfaces, your image processing, your video on FPGA, and the reason is you’ve got physical requirements that you want to test for the video processors. Yes, you can create a high-level model, but you have to abstract it so high you lose all of that. So the hybrid is a great technology that is maturing all the time.

SLD: That’s great, but is it enough?
Miller: No. First of all, in my mind it’s still a concept. In our networking chips, where we have large amounts of general-purpose processing, it’s still a concept. I have to see it working before I say it’s going to work. So the software guys are not too happy with it. Software guys are FPGA-driven; you can’t tell them anything else but FPGAs.
Schirrmeister: To Hillel’s point, obviously I do agree it’s still very, very early, but we do have customers who’ve started talking about seeing results. AMD talked about that. They got between 2X and 20X in terms of speedup. We had NVIDIA talking about it. They got real results. And it depends on the application. If you have a heavy graphics-type application, as in the case of AMD, and you can virtualize some things away, that gives you better balance. It saves you some capacity and it balances the speed out. Hillel’s point was interesting, particularly the performance aspect. If you take the fast models, you lose the performance aspects on the software side, which includes things such as deep pipelining and so forth. So some of the guys that really need that on the software side won’t be satisfied. But then they look at emulation and say the speed is not fast enough. Then they try the FPGA prototype and say they can’t look into the hardware. There are two things this is leading to. One is this notion of the hybrid, which is the connection between the different engines, but also the path through, meaning how I go from one engine to the next and, sometimes more importantly, if I find something in an FPGA prototype, how can I actually reproduce it in an earlier environment? That’s why, even in the emulation world, we introduced technologies last year that sound like a play on words: ‘in circuit’ but ‘acceleration.’ You split the clocks on Palladium between the pieces that have to run at full speed (full speed from an emulator perspective goes to a megahertz) and the simulation/acceleration pieces, which are the slower 10kHz, 100kHz kind of thing. If you separate them and have back-channel interfaces to get data back and forth, you can actually do interesting debug things with that.
Posner: The issue there always is that you’re abstracting away from what the design is. And it’s a tradeoff.
Miller: I have a big issue with what he’s talking about. You guys talk about the spectrum of testbenches and we just don’t have the resources to manage them. It’s a big problem. I can go to the software guys and talk about the abstraction requirements. I can collect that and build a list and come up with a block diagram that will satisfy them for their FPGA, but that’s a lot of work.
Schirrmeister: We realize you cannot build it for every environment, but you should be able to re-use it and go back and forth between them—to have the same environment that maps to both hardware engines, for example. But I would agree with you, it’s still in the very early stage.
Posner: Hybrid does actually have a kind of sweet spot in a few areas, one of which is IP validation. Synopsys has all this interface IP, but it’s being deployed into a Freescale system or something else, and one of the things customers have always pushed for is hardware validation. We always do HAPS FPGA-based prototyping on all of our IP, but now the expectation has shifted up the stack, as well. They want example firmware: the software that goes with that IP. That’s going to be very system-specific. You need to create something for ARM, for MIPS, for ARC, so hybrid actually really comes alive for IP development. Your IP still sits in your hardware, so you’re still doing your hardware validation, but you can create a virtual system of your ARM-, your ARC-, or your MIPS-based system and develop your software on that. Then you have software that is basically ready to deploy with the IP.
Schirrmeister: I agree with Mick on that. Perhaps we’re not controversial enough for this discussion, but just to hit back at him a little bit, we actually will present this at DAC. James Pangburn has a paper on exactly that topic: how to use virtual platforms for IP validation.

Schirrmeister: There’s one thing I wanted to ask you, Mick. I liked your separation of verification and validation. One of the things we are seeing, and I’d be curious about your answer on the FPGA side, is that validation is becoming increasingly difficult even though it’s a necessity. I need to be able to show my chip comes up with USB, it comes up with Bluetooth. That’s the validation piece. So then I have a set of test scenarios, and from a testing perspective it’s actually becoming quite difficult to cover all the scenarios you really want to test and validate, and to execute all of those. In my worst-case scenario, if I have a USB and I want to figure out whether it works with all my USB sticks, in our case we have these speed bridges for both Palladium and for RPP. You plug a board into the hardware accelerator, then you plug in your USB stick and you wait. The software rattles and you hope you don’t hang…and it works, so you verify. What we see increasingly, though, is the need to inject errors on the validation side. So what we’re doing there (and so does Synopsys) is working with companies like LeCroy and Rohde & Schwarz. That’s kind of the hardware test, but increasingly we see people looking at whether they can take their system environment and model it in a way that lets them inject errors. On the RTL side it was easy. It was a testbench, which was just really slow. So now you try to take that piece and move it back into the hardware.
Miller: There’s a new concept called accelerated VIP, which is taking what you traditionally would have done in simulation with a BFM (bus functional model) and putting some of that functionality into a synthesizable testbench. We’ve been doing that for many years for certain protocols. It’s been very difficult, so maybe the industry is now moving toward providing that. Writing synthesizable error injection is a very easy task. You just toggle a bit every once in a while and you’ve got that. But people are looking for more sophisticated abilities to inject traffic at different levels and then validate at those different levels. For example, there can be built-in error injection for PCI Express, not just loopback. That’s just on the testbench side. You say people are interested in more and more coverage. Essentially you’re talking about coverage in the validation process. On the test side, we’re seeing a need for things like UVM. UVM is a testing methodology for the simulation world. There’s no real testing methodology for the validation world, and we have in-house things. But the industry has not addressed this problem yet. There is no standard way of doing coverage-driven validation.
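[Editor’s note: To make the two injection levels Miller describes concrete, here is a minimal, abstract sketch in Python. It is an illustration only, not anyone’s product or a synthesizable testbench: the Packet class, the injector functions, and the error rates are hypothetical, and a real accelerated-VIP flow would implement this as synthesizable logic driving the design under test rather than host-side code.]

```python
import random
from collections import Counter

# Sketch of two error-injection levels plus a simple coverage count of which
# error scenarios were exercised. All names here are hypothetical illustrations.

class Packet:
    """Stand-in for one protocol transaction (e.g., a PCIe TLP-like unit)."""
    def __init__(self, seq, payload: bytes):
        self.seq = seq
        self.payload = bytearray(payload)
        self.crc_ok = True      # pretend the link layer computed a good CRC
        self.dropped = False

def inject_bit_error(pkt: Packet, rng: random.Random) -> str:
    """Low-level injection: 'toggle a bit every once in a while'."""
    if pkt.payload:
        i = rng.randrange(len(pkt.payload))
        pkt.payload[i] ^= 1 << rng.randrange(8)
        pkt.crc_ok = False      # a real checker would now flag a CRC error
    return "bit_flip"

def inject_transaction_error(pkt: Packet, rng: random.Random) -> str:
    """Higher-level injection: corrupt the transaction itself."""
    kind = rng.choice(["drop", "stale_seq"])
    if kind == "drop":
        pkt.dropped = True
    else:
        pkt.seq = max(0, pkt.seq - 1)   # stale sequence number
    return kind

def run_traffic(num_packets=1000, error_rate=0.05, seed=1):
    rng = random.Random(seed)
    coverage = Counter()        # which error scenarios did we actually hit?
    for seq in range(num_packets):
        pkt = Packet(seq, bytes(rng.randrange(256) for _ in range(16)))
        if rng.random() < error_rate:
            injector = rng.choice([inject_bit_error, inject_transaction_error])
            coverage[injector(pkt, rng)] += 1
        else:
            coverage["clean"] += 1
        # A real validation flow would now push pkt into the DUT and check its response.
    return coverage

if __name__ == "__main__":
    cov = run_traffic()
    print("Error-scenario coverage:", dict(cov))
    uncovered = {"bit_flip", "drop", "stale_seq"} - set(cov)
    if uncovered:
        print("Scenarios never exercised:", uncovered)
```

The final coverage check is the point Miller raises about coverage-driven validation: without some record of which error scenarios actually ran, there is no way to say when validation is done.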


