Experts At The Table: SoC Prototyping

First of three parts: Technology drivers; use cases; blurry lines; hybrid solutions.


By Ann Steffora Mutschler

System-Level Design sat down to discuss SoC prototyping with Hillel Miller, pre-silicon verification/emulation manager at Freescale Semiconductor; Frank Schirrmeister, group director, product marketing, for the system development suite at Cadence; and Mick Posner, director of product marketing at Synopsys. What follows are excerpts of that conversation.

SLD: How is SoC prototyping used today and what is the biggest growth potential for the technology?

Miller: We started with serious virtual prototyping about five years ago with our first-generation product, the P4080. We engaged a company, Virtutech, and they built a model of that SoC. It was a big effort and it cost a lot of money. So we did that, and then we stopped doing those types of things because the cost was high, and then Virtutech got bought out by Wind River, which got bought out by Intel, and that didn't make it any better. What we've been doing is a lot of Palladium. It starts off with a lot of performance validation, so we have these benchmarks. We have a whole list of benchmarks that we need to get running, and it's not an easy task because it involves many teams that you have to coordinate and drive. You have to drive them to get answers, to help you debug scenarios, to understand why you're not meeting the performance goals. It's a very interesting activity, but it takes a long time, and it's at the point today where it actually gates tapeout. We don't tape out without a clear understanding of the performance. It's used again when silicon comes back. If silicon can't meet performance, you go back to the Palladium and use RTL validation, and in that validation flow we use bits and pieces of software, so the software guys are involved. I would say the software guys are in the beginning stages of using the emulator. For the next generation, there is a strong drive to move toward FPGA [prototyping] because the software guys really need a 10X improvement in performance. The concern about FPGAs at this point, compared to the Palladium, is this: the Palladium is an emulator where you work top-down. You can take an SoC, put it in the box and get it to work. With FPGAs, especially where there's a requirement to be 10X faster, you really have to have a disciplined design process, which is called 'Design for FPGA,' and you have to work bottom-up. The concern is the additional effort you're going to put on the different design teams, and design teams do stuff only when they get something out of it. If they're not going to get any advantage from using an FPGA and it's going to be a software thing only, it's going to be an issue. We are trying to figure out how to think up a bottom-up approach with FPGAs, how to get the design teams involved, how to get them excited by [the ability to run much faster]: the chance of finding corner cases is much higher, there's better turnaround time when we're developing our software test cases, those types of things.

Schirrmeister: There’s no ‘one size fits all’ engine, so the key drivers for prototyping are really verification of the hardware, because you need to get it right before you tape out, and then, increasingly, it’s the software that really drives the whole context of it. The way we see our customers using it is really this continuum of different needs, where you switch when your need becomes big enough. For example, virtual prototyping: there was a need, and companies still do that a lot, to get the software developers off their back, to give them something in parallel. The big examples out there are things like TI’s OMAP, which used some of the tools Synopsys acquired from Virtio. That was all to satisfy the software developer early, but it wasn’t hardware-accurate. If you need hardware verification you go to RTL simulation first. That’s all very accurate and nice, with fast turnaround for debug, but for software developers it’s not even cup-of-coffee interactive. They wouldn’t touch it. You can force them, perhaps, for very low-level, bare-metal drivers, to do something like hook up an ISS. If you need more and more software, then you need to really see where your RTL stands. Customers decide to use an emulator in cases where you really need multiple RTL drops per day, so that’s the phase where the RTL isn’t stable yet. Arguably, the software developer won’t be able to deal with that fast-changing RTL, so you need the register interface to stay constant. But if the RTL isn’t fully stable yet, the emulation flow makes sense. As Hillel pointed out, though, at one point you need a 10X to really do software. That’s when FPGA-based prototyping comes in and saves the day. After optimization you’re getting up to 20, 30 or 40 megahertz. That’s where you get the classic software development. Where you pay is that you don’t have full insight into the hardware anymore, so the software developer really sees it like a board, just like earlier, and it probably still takes longer to set up than emulation. But those are really the tradeoffs you make, and we see that across the board, and it extends into the real chip.

Posner: We’ve always looked at the use cases from a high level: verification vs. validation. With verification you are checking that something meets the spec, which is perfect for a simulator, perfect for emulation. You are checking a spec to a protocol based on test stimulus. That’s versus validation, where you’re checking that it meets the user requirements, which could be performance or the graphical user interface. But there isn’t a set point where you say verification has ended and validation is starting. It’s a blurry line, although there are things that will push you in that direction. As you move through your verification tasks, you’re going to end up doing IP integration. At that point you’re going to start integrating some software. If you’ve integrated PCI Express, Ethernet or USB, there’s going to be low-level software. It’s not all a hardware protocol. And you may not be developing those drivers yourself. You’re somewhat doing a validation task where you’re just testing that they run on the IP. That’s where the blurry line comes in. But then when you move further into things like system validation, now you’re actually plugging that USB connection, that Ethernet connection into something else and checking interoperability. Then you’re in the world where FPGA-based prototyping comes alive, because you’re in an area that requires performance that you would love to get in an emulation environment but you can’t. All of these protocols typically have minimum clock frequencies, so you have to meet 125MHz for PCI Express. You have to meet 62.415MHz for USB3. That’s what pushes you more into the FPGA space. At the same time you’re then splitting out: you’re doing hardware/software integration, system validation and software development. And still, the FPGA-based platforms predominantly are used for software development. But again, it’s a bit of a mix. You’re still doing hardware/software verification because you’re testing the bits together. Sometimes that’s the first time multicore may really come alive, with physical hardware interrupts to the software.

SLD: Is there a sweet spot in terms of use models?

Schirrmeister: Yes, there are side use models for each of the technologies, but if I look at the pure sweet spot for each technology, for FPGA prototyping, and for virtual prototyping very early on, it’s really software development. For verification of the hardware, that’s where traditional RTL simulation comes in, along with acceleration and emulation. And then the hardware today is so complex that at least the bare-metal pieces of the software need to be brought up in emulation, and even in RTL simulation as well, because it simply doesn’t do what it’s supposed to do without the bare-metal software. So it’s a question of software development. We actually see the early virtual portion go to the higher-level aspects.

SLD: How does the engineering team decide when to adopt emulation versus FPGA prototyping versus simulation?

Posner: The kick-over point is usually performance. If you can’t get the job done in the technology you have, you move to the next technology. And if you ask a software engineer how fast they want it, they want it real time. Ultimately it should run at the same speed as the end chip. You’re never going to get there.

Miller: One of the things we try to look at is, like hybrid cars, where you have an electric motor and you save on fuel, how to hybridize things like very fast virtual ISS models with Palladium, or maybe even with simulation. I haven’t really seen it with these multicore chips that have 32 cores, but you can instantiate 32 fast models. We haven’t succeeded with it today, but we are attempting it. Still, you look at the math and you don’t really see where you’re going to get the gain in performance. You’re definitely going to get the gain in gates, but can you build a model that is going to satisfy the software engineers? It’s an exploration, but it would be nice to know that the EDA industry is actually coming up with a hybrid solution that works out of the box.

Schirrmeister: It’s all about the speed. What’s happening in emulation is there are certain tasks that just inherently are done very well in hardware, which are the parallel tasks. So for a graphics engine, can you abstract it? Yes, you can abstract it to the APIs that do the rendering, and then you can remap it into the native host environment. But it’s not really executing what you’re implementing in your graphics piece of the chip. Those inherently parallel things—video being another dominant one—you’d better do them in a hardware-based acceleration like emulation or FPGA. And then in the front for the software you put the fast models from ARM or whomever. That’s definitely a trend we see.
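Hybrid setups of the kind Miller and Schirrmeister describe are typically stitched together at the transaction level: fast instruction-set models of the cores run on the host workstation and talk through transactors to the blocks that live in the emulator or FPGA. The sketch below is a minimal illustration only, assuming a SystemC/TLM-2.0 environment; FastCpuModel and EmulatorBridge are hypothetical placeholder names rather than actual Freescale, Cadence, ARM or Synopsys components, and the bridge simply completes transactions locally where a real flow would hand them to a vendor-supplied transactor into the emulator or FPGA board.

    // Minimal hybrid-prototyping sketch (illustrative only).
    #include <systemc>
    #include <tlm>
    #include <tlm_utils/simple_initiator_socket.h>
    #include <tlm_utils/simple_target_socket.h>

    // Hypothetical stand-in for a fast instruction-set model of one core.
    struct FastCpuModel : sc_core::sc_module {
        tlm_utils::simple_initiator_socket<FastCpuModel> socket;
        SC_CTOR(FastCpuModel) : socket("socket") {
            SC_THREAD(run);
        }
        void run() {
            unsigned char data[4] = {0};
            tlm::tlm_generic_payload trans;
            sc_core::sc_time delay = sc_core::SC_ZERO_TIME;
            trans.set_command(tlm::TLM_READ_COMMAND);
            trans.set_address(0x1000);          // hypothetical device register
            trans.set_data_ptr(data);
            trans.set_data_length(4);
            trans.set_streaming_width(4);
            trans.set_byte_enable_ptr(nullptr);
            trans.set_response_status(tlm::TLM_INCOMPLETE_RESPONSE);
            // Blocking read that crosses the host/accelerator boundary.
            socket->b_transport(trans, delay);
        }
    };

    // Hypothetical bridge standing in for a transactor into the emulated RTL.
    struct EmulatorBridge : sc_core::sc_module {
        tlm_utils::simple_target_socket<EmulatorBridge> socket;
        SC_CTOR(EmulatorBridge) : socket("socket") {
            socket.register_b_transport(this, &EmulatorBridge::b_transport);
        }
        void b_transport(tlm::tlm_generic_payload& trans, sc_core::sc_time& delay) {
            // In a real flow this call would go through a vendor transactor
            // to the emulator or FPGA; here it just completes locally.
            trans.set_response_status(tlm::TLM_OK_RESPONSE);
            delay += sc_core::sc_time(10, sc_core::SC_NS);
        }
    };

    int sc_main(int, char*[]) {
        FastCpuModel cpu("cpu");
        EmulatorBridge bridge("bridge");
        cpu.socket.bind(bridge.socket);   // host-side model wired to the bridge
        sc_core::sc_start();
        return 0;
    }

In a production environment the bridge would be replaced by the channel into the emulator or FPGA, and loosely timed techniques such as a time quantum would normally be used so the fast host-side models are not dragged down to accelerator speed on every access.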


