Experts At The Table: FPGA Prototyping Issues

First of three parts: The need for speed and more complete tools; free tools vs. ASIC-level capabilities; timing closure problems; ASIC prototype vs. FPGA as the final product.


By Ed Sperling
System-Level Design sat down to discuss challenges in the FPGA prototyping world with Troy Scott, product marketing manager at Synopsys; Tom Feist, senior director of marketing at Xilinx; Shakeel Jeeawoody, vice president of marketing at Blue Pearl Software; and Ralph Zak, business development director at EVE. What follows are excerpts of that discussion.

SLD: Where are the problem areas in creating FPGA prototyping and FPGA platforms?
Jeeawoody: One of the missing pieces is a complete tool set that’s easy to use, and which gets you from A to Z quickly. Constraining the design has been an issue for a lot of people. Automating that process is key.
Feist: Designers are using the biggest and baddest devices out there and they’re usually early in the design cycle. If they’re doing an ASIC, they’re doing a big device. They want everything to work perfectly. And they’re usually the early adopters of the latest process nodes. The vendors find some hiccups, and they’re maturing it as the FPGA prototypers are in the middle of the design. The challenges they face help us bring up our devices and tools along the way. In terms of design flows, it doesn’t matter how big of a device they build. They typically have to spread it across multiple devices, so there are a lot of partitioning challenges behind that to make it easier. But some blocks just don’t break up easily. You run into things like time-domain multiplexing between different I/Os. The prototyping market usually isn’t running at the highest frequencies that will be used when chips go into production. That makes it a little easier. But the biggest challenge involves the people building boards.
Zak: There are a couple challenge areas. One is getting designs into FPGA-based systems, with hundreds of devices. That’s one of the biggest challenges in EDA because you have to take RTL, or in some cases people will be starting with higher-level models, and then you essentially have to push a button that reaches all the way down to timing-correct physical implementation. The better the speed, the more advantageous it is to the software developer.
Scott: The bring-up time is really a profound problem—code migration, the substitutions you have to make in migrating ASIC-style code into an FPGA. The emulation community has made compilers that are more compatible and which can accept ASIC-style coding conventions. But as people really try to achieve the 50MHz to 100MHz kinds of speeds on these prototyping platforms, you really need a more high-performance implementation. That means more automation, gated-clock conversions, and memory substitutions to get the design partitioned so there is no signal contention. That’s one side of it. The other side is that people rely on custom boards. They look at the bill of materials and say they know how to build those. There’s strong inertia to just keep building their own systems, but the practical reality of the ROI—particularly when you apply the systems to multiple projects and scale them—is why we’re seeing a trend toward more off-the-shelf solutions.

SLD: There’s been a lot of talk that to do a complex FPGA you’ll need ASIC-style tools, but the free FPGA tools are still extremely popular. Is that changing?
Jeeawoody: We’ll see the change as people address timing. Timing and timing closure are becoming more difficult as chips get bigger. In the past you could just do synthesis and route. We see that as another barrier—getting FPGA designers to think about timing. That’s where standards and how to use them are becoming important.
Feist: Anyone who uses an FPGA has to use the vendor’s place and route. Beyond that, they can choose. This is an area where the standards will help things. Xilinx has been very proprietary in the past. The tool chain was about 15 years old, and if it wasn’t invented at Xilinx at that point then it didn’t get used. That’s why we’ve redone the tools and tried to use every standard that is out there. Traditionally the guys doing the ASIC emulation don’t know what a UCF constraint is. They were having to convert purchased IP, and the constraints marking its timing-critical paths, into our format so it matched up with UCF.
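For readers unfamiliar with the formats Feist contrasts, a minimal illustrative sketch of the same clock constraint in the legacy proprietary UCF format versus the SDC-derived XDC format adopted by the newer Xilinx tools (the clock name and 10 ns period here are hypothetical):

```
# Legacy Xilinx UCF (proprietary): name a timing group, then constrain it
NET "sys_clk" TNM_NET = "sys_clk_grp";
TIMESPEC "TS_sys_clk" = PERIOD "sys_clk_grp" 10 ns HIGH 50%;

# Standards-based XDC (SDC-derived Tcl), used by the newer tool flow
create_clock -period 10.000 -name sys_clk [get_ports sys_clk]
```

ASIC teams and most third-party IP already ship SDC-style constraints, which is why converging on the standard removes the conversion step Feist describes.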
Scott: You’re trying to do two things. You’re trying to maximize the actual runtime speed of the system in a timing-neutral environment. But you’re also trying to minimize the critical path across the prototype, which may span multiple boards and multiple chips. What you’re doing is neutralizing the timing impact and running a purely functional verification environment. You also want to maximize its runtime speed, so you identify the critical paths, shorten the timing delay—which creates a new critical path—and then iterate to get to the fastest implementation. That’s one of the key challenges—getting that right.

SLD: For a long time, the idea was to do a prototype and roll it back to an ASIC. Is that still the case?
Feist: We do see that, where people start with an FPGA and plan to move it to an ASIC. The crossover points have changed dramatically, particularly at 28nm. We’re now including a full processing system, which makes it quicker to customize, too. It’s not quite an Atom processor, but at least you can bring up your software. There was a programmable IP company whose vision was that customers would have to do many versions of a chip. As the cost started going up, they decided to produce one die and put different part numbers on it. Others have started doing that. It has to do with the lifecycle of the end product. Consumer product cycles used to be nine months. They’re now six months.

SLD: Doesn’t that also fragment the market so you can’t get the same kind of volume from one chip?
Feist: Yes, and the way to get more volume is with these hybrid chips. If you have a really long lifecycle, you may start with an FPGA and migrate it to an ASIC. Or you may have really high volumes but be time-constrained—before you can finish an ASIC implementation you’re onto a new product. I don’t think we’ve seen much of that, but there is potential for it to become part of the trend.
Jeeawoody: 28nm seems to be the inflection point. There’s always a cost balance between an ASIC and an FPGA. But at 28nm, people are more comfortable doing production FPGA designs.
Feist: We track design wins on a quarterly basis and look at who we competed against. It’s hundreds of ASIC replacements per quarter. That’s the growth of the market.
Zak: The other side of this is that you’re seeing a change in the industry and the dynamics of who the players are. Fifteen years ago we were looking at 12,000 ASICs a year and about 4,000 standard parts. We’re still looking at the same number of standard parts, but the ASICs are down to about 1,500 a year.

SLD: Have the FPGAs picked up a lot of the slack?
Zak: It’s that plus the standard parts have all become platform chips. They’re all becoming SoCs. Broadcom and Qualcomm are building what essentially are ASICs, but they’re selling them to multiple vendors. If you don’t have enough volume to build your own parts, you buy from Broadcom, Qualcomm, TI and others. The dynamics of the semiconductor industry have changed.
Scott: PLD and FPGA vendors have been invited to the party. They’re fixing ASIC problems. Power has come down enough. And there’s a lot of hardened embedded functionality.
Feist: The FPGA of 10 years ago is not the same as an FPGA today. Today FPGAs have analog/mixed signal capabilities, A-to-D converters, serial I/O and block RAM and distributed RAM. If you think about an FPGA five years ago—a field-programmable gate array—that doesn’t apply anymore. But every piece of this is still programmable.
