Experts At The Table: FPGA Prototyping Issues

Second of three parts: FPGA stacked die; application-specific debug; ASIC tools for FPGAs; changing methodologies; multiple FPGA devices; software issues.


By Ed Sperling
System-Level Design sat down to discuss challenges in the FPGA prototyping world with Troy Scott, product marketing manager at Synopsys; Tom Feist, senior director of marketing at Xilinx; Shakeel Jeeawoody, vice president of marketing at Blue Pearl Software; and Ralph Zak, business development director at EVE. What follows are excerpts of that discussion.

SLD: Are FPGAs starting to be used as platforms for a 3D stack, where you put a memory chip on an FPGA?
Feist: We just introduced 2.5D. Those are slices of FPGA fabric on an interposer. We released a heterogeneous one with external SerDes and an FPGA. We’ve got the technology now. Where we take it will be driven by markets. Because we are general-purpose, there has to be enough of a market to say it’s worthwhile to go out and build it. Memory is an area that would make sense. If you can pull those memories into the device, you reduce the board-level routing problems.
Scott: As far as hybrid ICs go, it makes perfect sense. You have a yield benefit, because you don’t have to produce such a large die. You can piece smaller portions together. That’s a big time-to-market, reliability and economic benefit. The thing that’s emerging for us is the expense of software development. You’ll see a lot more high-level development tools that address multicore and the challenges of bringing up the software. That requires platform analysis tools, virtual prototype distribution and new business models—it’s not just about selling synthesis tools.
Zak: The big thing we see from a verification standpoint is application-specific debug. You’ve got networking, PCIe, video, graphics, and you have to move the verification environment up to a level where you’re looking at video streams, packet analysis, and not just the signal. When you’re looking at chips with hundreds of millions of gates you can’t spend your time watching waveforms. You have to be able to identify, capture and reconstruct the instant something gets triggered. You have to build a deterministic debug environment. That’s part of what we’re seeing.
Scott: FPGA vendor tools aren’t enough. These system and software tools have to comprehend the resources of the multi-board systems. They have to understand the interconnect for system-level static timing analysis. They have to know about dedicated memory resources and even verification IP. That’s something the FPGA makers would prefer not to get involved with.
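
Zak’s point about triggering at the transaction level can be made concrete with a small example. The sketch below is purely illustrative and not any vendor’s debug API: it keeps a sliding window of recent packets and hands back that history the moment a protocol-level condition fires, so a failure can be reconstructed without scrolling through waveforms. The Packet fields and the CRC-error trigger are hypothetical.

```cpp
#include <cstdint>
#include <deque>
#include <functional>
#include <iostream>
#include <vector>

// Hypothetical packet record captured from a prototype's trace port.
struct Packet {
    uint64_t timestamp;   // cycle count when the packet was observed
    uint32_t stream_id;   // which interface or stream it came from
    bool     crc_error;   // example protocol-level condition to trigger on
};

// Keep a sliding window of recent traffic; when the trigger predicate fires,
// return the pre-trigger history plus the triggering packet so the failure
// can be reconstructed at the transaction level instead of as raw signals.
class TransactionTrigger {
public:
    TransactionTrigger(size_t depth, std::function<bool(const Packet&)> trigger)
        : depth_(depth), trigger_(std::move(trigger)) {}

    // Returns a non-empty capture window the first time the trigger fires.
    std::vector<Packet> observe(const Packet& p) {
        history_.push_back(p);
        if (history_.size() > depth_) history_.pop_front();
        if (trigger_(p)) return {history_.begin(), history_.end()};
        return {};
    }

private:
    size_t depth_;
    std::function<bool(const Packet&)> trigger_;
    std::deque<Packet> history_;
};

int main() {
    // Trigger on the first packet that arrives with a CRC error.
    TransactionTrigger trig(256, [](const Packet& p) { return p.crc_error; });

    for (uint64_t cycle = 0; cycle < 1000; ++cycle) {
        Packet p{cycle, 0, cycle == 700};  // inject one error for the demo
        auto window = trig.observe(p);
        if (!window.empty()) {
            std::cout << "Captured " << window.size()
                      << " packets leading up to the error at cycle "
                      << window.back().timestamp << "\n";
            break;
        }
    }
}
```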

SLD: Still, the adoption rate of more sophisticated tools for FPGAs has been much lower than EDA companies expected. Is that changing?
Scott: Yes. The chips are larger, there’s more collaboration and more IP. They have to migrate the ASIC IP into the system. False path extraction, data clock conversion—things that can really slow a system down—those are all automation-dependent.
Jeeawoody: You need to make sure you can optimize the right portions of the design. One thing we’re also seeing is that designs are getting so big that you may have to split them across multiple FPGAs. Technology is enabling that split.

SLD: As FPGAs go up in complexity, is there a crossover where you may do part of the design at 20nm and another part at 130nm?
Feist: It’s typically the system architect making those decisions. If you’re doing a base station, how will you architect that? There are lots of different possibilities. You can use a TI OMAP or an FPGA. There are tradeoffs made up front in terms of how long it will take to develop. We had one customer that was using five TI chips instead of one FPGA, because moving to the FPGA would have meant migrating the code base and figuring out how to fit it in. It was drawing more power, but it was a faster time to market. We look to the EDA side to make the process more streamlined, whether that’s through high-level synthesis or other capabilities. What we provide is a simulator, not a verification tool. You can make RTL work with it. The design methodology needs to focus on reducing the pain, but it has to be done at the architectural level or it’s meaningless.
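
Feist’s reference to high-level synthesis describes a flow in which a restricted C++ function, rather than hand-written RTL, is the design entry point. The sketch below shows roughly what that looks like, assuming Xilinx Vivado/Vitis HLS-style pragmas; the exact directive names and options vary by tool and version, and as plain C++ the pragmas are simply ignored, so the same function can be compiled and tested in software first.

```cpp
// A small fixed-point FIR filter written in the C++ subset that high-level
// synthesis tools typically accept. The #pragma HLS directives follow the
// style used by Xilinx HLS tools and are illustrative; a software compiler
// ignores them.
#include <cstdint>

constexpr int TAPS = 8;

int32_t fir(const int16_t sample, const int16_t coeff[TAPS]) {
#pragma HLS PIPELINE II=1          // ask the tool for one sample per cycle
    static int16_t shift_reg[TAPS] = {};  // static state becomes registers
#pragma HLS ARRAY_PARTITION variable=shift_reg complete dim=1

    int32_t acc = 0;

    // Shift in the new sample and multiply-accumulate across the taps.
    for (int i = TAPS - 1; i > 0; --i) {
        shift_reg[i] = shift_reg[i - 1];
        acc += shift_reg[i] * coeff[i];
    }
    shift_reg[0] = sample;
    acc += sample * coeff[0];
    return acc;
}
```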

SLD: Is the main concern time to market, or is it performance?
Feist: Designs are evolutionary rather than revolutionary, so you need to be able to show a 3x advantage to get people to change. The reason is that they’re risking their businesses on changing their methodologies, and there is an infrastructure they have to put in place. Many of these teams don’t have any hardware designers. That’s why you have to have methodologies. They have a lot of C programmers, but they don’t have anyone who knows SystemVerilog. If they’re coming from the ASIC world, that’s not a problem. But if they’re coming from standard, off-the-shelf parts, you have to put in place methodologies they feel comfortable with.
Zak: In the high-performance computing market, people were starting to take FPGA-based boards that were PCIe-oriented and plug them in as co-processors. The algorithms they’re trying to accelerate—financial trading, oil and gas exploration, image processing—are written by C++ programmers. That’s where they need to go directly to FPGA prototypes. They don’t understand Verilog.
Jeeawoody: It’s a more general problem. When we went from Verilog to SystemC, there were not many people who could think at that level. SystemVerilog took off because it was just an extension of Verilog. They could extend their knowledge of Verilog and apply that to the system. We’re seeing the same thing now with embedded systems, asking the architects to think at a system level.
Scott: The EDA industry is addressing more and more of the software development community. There are more embedded CPUs, and our tools have to account for how these high-speed prototypes are distributed. Often you receive a box, and there had better be a very good interface on it. Sometimes it’s a mix of hardware and SystemC, and it has to complement what the software developers already know. If they’ve got a multicore system and have to figure out context switching and which threads are running on which processor, there have to be complementary tools there.
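
As a concrete illustration of the question Scott raises (which threads run on which processor), here is a minimal, Linux-specific C++ sketch that pins one worker per core using std::thread and pthread_setaffinity_np. The pinning call is not portable, and the core numbering is arbitrary; it simply stands in for whatever mapping a real multicore bring-up would use.

```cpp
// Build with: g++ -pthread affinity_demo.cpp
#include <pthread.h>
#include <sched.h>
#include <cstdio>
#include <thread>
#include <vector>

// Each worker pins itself to one core before doing its work, so the
// thread-to-processor mapping is explicit rather than left to the scheduler.
static void worker(int id, int core) {
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(core, &set);
    pthread_setaffinity_np(pthread_self(), sizeof(set), &set);  // Linux only
    std::printf("worker %d pinned to core %d, running on CPU %d\n",
                id, core, sched_getcpu());
}

int main() {
    int cores = static_cast<int>(std::thread::hardware_concurrency());
    if (cores == 0) cores = 1;  // hardware_concurrency() may be unknown

    std::vector<std::thread> pool;
    for (int i = 0; i < cores; ++i) {
        pool.emplace_back(worker, i, i % cores);  // one worker per core
    }
    for (auto& t : pool) t.join();
}
```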

SLD: Multicore seems to be an enormous problem. Is it more difficult with an FPGA, or is it still the same?
Feist: The people building multicore systems want 800MHz or 1GHz. If you’re just doing it in fabric, you’re going to get anywhere between 200MHz and 500MHz. Then you have to worry about what your interconnect will look like. If you add an accelerator in the software stack, does that slow the system down? That’s why several years ago we decided to go with hardened processors. Otherwise you can’t do multicore on an FPGA. In the past we had a PowerPC in there with an FPGA wrapped around it. It was usable, but it wasn’t ideal. You won’t get a high-performance processing system out of that. We look to the EDA side for profiling a system and bringing up the software. There are a lot of areas where we can provide the fabric and base methodologies, but we won’t provide the entire design flow.
Zak: The challenge is to keep scaling up. The ITRS roadmap calls for doubling the number of processors. And those are application-specific processors, combined with software, going up 2x to 4x each chip generation. If we scale to 14nm, the typical chip will have 12 processing cores of various types—application-specific and general-purpose—all running software. You’re going to need entirely new chip architectures for the communication between them and for managing all the caches. That’s what our customers are dealing with. And then there’s getting the timing neutralized so we’re not introducing new issues—that challenge is multiplying.


