Experts At The Table: FPGA Prototyping Issues

Last of three parts: Tradeoffs in design; minimizing the number of bugs; limits and advantages of FPGAs; migrating ASIC tools and IP to FPGAs; programmable logic to programmable platforms.


By Ed Sperling
System-Level Design sat down to discuss challenges in the FPGA prototyping world with Troy Scott, product marketing manager at Synopsys; Tom Feist, senior director of marketing at Xilinx; Shakeel Jeeawoody, vice president of marketing at Blue Pearl Software; and Ralph Zak, business development director at EVE. What follows are excerpts of that discussion.

SLD: As we go forward, there are more tradeoffs. What do design teams need to do differently in the future?
Feist: There are a lot of what-ifs. What if I set up a memory architecture like this? Another challenge in developing these systems is getting the architecture right. Once I’ve committed to silicon, that’s a big cost. Did I get the interconnects right? Is my cache depth right? Do I need a level 1, level 2, level 3 cache?
Zak: You have to look at the complexity level and what’s driving prototyping. It’s a requirement. You have to wring out all these problems up front. There is no other way to do it. We’re used to thinking of RTL simulation as 50 cycles per second. I’ve seen chips that were in the single-digit cycles per minute. You can’t simulate the whole chip. The major blocks are so complex they’re hard to simulate.

SLD: Chips are buggy today, no matter what. Even IDMs are fixing chips after rollout.
Scott: These prototypes may reach 70% to 80% of their test coverage in a simulation context and then transition to prototype. In those accounts, you have the systems engineers—not even in C++ but in SystemC—deploying these very early models of the system. The prototyping engineer needs to be able to work with those technologies to connect them, but both are clearly in the signoff path. What can programmable fabric bring to the party? There are variants, but what if you can extend an architecture by adding a processor subsystem or a special video interface? You may use standard platforms and then just extend them.

SLD: Do FPGAs add another debug capability because the hardware is programmable?
Zak: Yes. The problem is that you’re limited to how much you can squeeze into an FPGA, but it’s at least an order of magnitude greater for logic gates. That will affect your die size and how much programmable logic versus custom logic is in a device. Over time you probably need to move to some sort of hybrid.
Feist: The challenge is that even to fix it, you have to figure out whether the programmable fabric is connected to the part that’s broken. Can you actually get to the parts? The chances that you’re going to have the interconnect to the part you’re trying to fix are almost zero.
Scott: For applications like mil/aero you need triple modular redundancy for SEU (single-event upset) protection of state machines. Rather than trying to figure out which of the three register copies is correct, the two that agree determine the correct value, which could improve overall bug detection.
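The two-of-three voting that TMR relies on can be sketched in a few lines. This is a minimal illustration, not how any vendor actually implements it—real TMR voters are built in hardware—and the function name and return shape here are hypothetical:

```python
def tmr_vote(a: int, b: int, c: int) -> tuple[int, bool]:
    """Two-of-three majority vote over three redundant register copies.

    Returns the majority value plus a flag indicating whether the
    copies disagreed (a possible single-event upset).
    """
    # Bitwise majority: a bit is set in the result if it is set
    # in at least two of the three copies.
    majority = (a & b) | (a & c) | (b & c)
    upset_detected = not (a == b == c)
    return majority, upset_detected

# Example: copy `b` has one flipped bit; the other two outvote it.
value, upset = tmr_vote(0b1010, 0b1000, 0b1010)
# value == 0b1010, upset == True
```

The point Scott makes follows directly: the voter never needs to know which copy is right, only that two of the three agree, and the disagreement flag doubles as a bug or upset detector.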
Zak: It may be tough to say something occurred in a programmable fabric and it can actually be fixed there. It may be because gates are becoming almost free—at least in one respect. With power management schemes you put in redundant circuits so that if you have a failure in one you have another. We may start seeing more redundancy in the critical parts of circuitry.
Feist: We’re already seeing that in applications where they can’t afford any downtime. Now, even in the data centers, they’re requiring it.
Scott: It’s not just mil/aero. FPGA technologies are being applied to ASIC debug. If you go back into emulation and prototyping hardware you’ll see technologies that attach to either a custom or commercial prototyping system to extract system states and achieve near-emulation capabilities. Because you need to re-instrument a design in multiple places, you need reprogrammability. Attaching arbitrary probes is another interesting application.

SLD: If you get everything right, you should have less debugging, right?
Jeeawoody: Yes, but it’s so complex that it’s hard to debug anything up front. That’s why we have these complex debug systems.
Scott: A couple years ago I went to one large vendor’s Web site and printed out the errata for one of its platforms. I thought it was going to be a couple of pages. It was a full ream of paper. With all the methodologies that we provide, there are still huge numbers of errors. And those are well-used, wrung-out parts used in a lot of systems around the world.

SLD: Looking out five years, what happens with FPGAs and the tools that are out there?
Scott: You’re going to involve more people and more IP, so there will be more bottom-up methodologies for divide and conquer. There will be more ASIC IP. That will migrate to the FPGA domain. And you will still need this robust timing closure, which will have to be solved at the RTL. You’re not going to be deep into static timing analysis trying to solve something at the gate level. It will have to be handled at RTL or higher. What we consider ASIC-caliber tools will be pulled in.
Zak: The implementation software is going to get extremely complex. If you start migrating it down the technology curve you’re going to run into timing and electromagnetic issues in terms of crosstalk in an FPGA that you normally deal with in a custom ASIC. The tools will have to get that much more sophisticated. It may get to the point where there are so many gates that dies will shrink. You may only use them in prototyping applications.
Jeeawoody: We’re also going to see more and more IP blocks—hundreds of them. Each block probably has been analyzed individually. When you assemble them all together, the inter-block analysis will become really important. That’s going to be particularly important at the interface level. You need to ensure you get the most utilization and performance.
Feist: As you go down Moore’s Law, the cost per gate is not tracking anymore. As a result, you’re going to see platforms, either in 3D technology or hardened blocks. There will still be a place for glue logic, but you’ll see more platforms in the future for market segments. You’ll see fabric to differentiate those, as well. To grow the market, it won’t be just programmable logic. It will be programmable platforms. We’re looking at how to suck more of the bill of materials into our platform. Process technology and 3D designs will always be in the programmable space. There will be more platforms in the future rather than just process. Even today you could build and program 80% of a device and give it to someone to differentiate. Economically, if you’re going against an ASSP or an ASIC it’s still too expensive for high-volume applications. If the market is ripe, why not harden things? But most people wouldn’t build it all themselves. That’s what’s so attractive about 3D technology. It’s challenging, but it opens the door to a lot of possibilities for FPGA vendors. It’s no longer just a gate array.