They’re still not the right choice for every application, but the scope of where an FPGA can work is widening.
For more than a decade, FPGA vendors argued that FPGAs would become a viable alternative to ASICs, adding programmability along with the same kinds of advances in performance and power that ASICs saw at each new process node. While that never played out as they expected, FPGAs nonetheless have carved out a formidable position in the semiconductor market.
Generally speaking, FPGAs today are used in low-power, small-form-factor applications, such as handheld electronics and automotive driver-assistance systems. They also have found a home in very high-performance applications, such as data center acceleration, image processing, and workload acceleration, where FPGAs are used to manage large workloads very efficiently, offload CPUs, and process large amounts of data.
Narrowly slicing it
While a wide range of use cases is possible, there are slices of engineering work where FPGAs just don’t fit.
“FPGAs aren’t terribly useful for power measurement methodology because FPGAs are so power-hungry themselves,” said ARM Fellow Jem Davies. “However, they will allow things to be run at very high speed. If you look at frequency, RTL simulation might be 10 kilohertz. You might have a Palladium emulator running at 1 or 2 megahertz, and you might have FPGAs running at tens of megahertz. You might get a software model that’s not very performance-accurate running at perhaps hundreds of megahertz. You’re progressively abstracting away and trading off one thing for another. Running software on an RTL simulator is a specialist sport.”
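The practical consequence of those speed differences is easy to see with some back-of-the-envelope arithmetic. The sketch below, using illustrative clock rates in the ranges Davies quotes (not measured benchmarks), estimates the wall-clock time each platform needs to execute a fixed number of target cycles:

```python
# Rough wall-clock time to execute 1 billion target cycles at the
# effective execution rates quoted above. All rates are illustrative
# midpoints, not benchmarks of any specific tool.
platforms = {
    "RTL simulation":     10e3,    # ~10 kHz
    "Palladium emulator": 1.5e6,   # ~1-2 MHz
    "FPGA prototype":     50e6,    # tens of MHz
    "Software model":     200e6,   # hundreds of MHz (not performance-accurate)
}

TARGET_CYCLES = 1e9  # e.g. a short stretch of software boot on the real chip

for name, rate_hz in platforms.items():
    seconds = TARGET_CYCLES / rate_hz
    print(f"{name:18s} {seconds:12,.0f} s  ({seconds / 3600:8.2f} h)")
```

A billion cycles that an FPGA prototype finishes in seconds takes an RTL simulator more than a day, which is why running real software workloads tends to push teams up the abstraction ladder.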
While you don’t necessarily get an accurate power estimate, there is still value, said Frank Schirrmeister, senior group director, product management in the System & Verification Group at Cadence. “If you look at FPGA-based systems, if you switch through the different power regions (which users currently are considering) from a subsystem perspective for thermal, you don’t have the correct implementation, so to speak, because you remapped the RTL design to the FPGA, and you need to be very conscious about what you are doing there.”
He observed that what engineering teams do in general for low-power design is look at all the static pieces and figure those out by positioning, and so forth. Then, they look at the dynamic pieces. “In a prototyping world, getting data out of FPGAs with respect to the activity of the circuit is something users certainly intend to do. They are aware of the challenges in that. For example, you have significantly remapped your design to essentially target another technology — you are targeting the FPGA technology as opposed to TSMC 40nm, where the end chip is going to go — so there is some loss in that context, if you will, because the RTL isn’t as golden as it is in RTL simulation. However, making relative assessments is something which users attempt. I’ve seen users do things like using a high-level synthesis front end and simply looking at the implementation effects. One other way to look at that is in an FPGA, but you have to be extremely conscious about what you are doing. You need to be fully aware that this design that you’re now executing is different in structure and so forth than the actual implementation will be.”
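The relative assessments Schirrmeister describes can be sketched simply: capture per-net toggle counts from a prototype run for two variants of the same block and compare a weighted activity score, rather than trusting any absolute watt figure. Everything below, including the net names and counts, is hypothetical:

```python
# Hypothetical toggle counts captured from an FPGA prototype run for two
# RTL variants of the same block. Absolute power is meaningless here --
# the FPGA mapping differs from the target process node -- but the
# *relative* switching activity can guide a comparison.

def activity_score(toggles_per_net, weights=None):
    """Weighted sum of net toggle counts, a crude proxy for dynamic power."""
    weights = weights or {}
    return sum(count * weights.get(net, 1.0)
               for net, count in toggles_per_net.items())

variant_a = {"alu_out": 9_200, "fifo_wr": 3_100, "ctrl_fsm": 800}
variant_b = {"alu_out": 9_200, "fifo_wr": 1_050, "ctrl_fsm": 820}  # gated FIFO writes

ratio = activity_score(variant_b) / activity_score(variant_a)
print(f"Variant B switches {ratio:.0%} as much as variant A")
```

The weighting hook matters because, as the quote notes, the FPGA netlist is structurally different from the eventual implementation; weights let a team discount nets they know the remapping has distorted.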
And, based on feedback from engineering teams, Cadence expects to make some of the toggle data available for design teams to do activity analysis at some point down the road.
Further, ASSPs and ASICs will still rule the extreme low-power and extreme high-performance areas using the advanced process nodes, said Rob van Blommestein, vice president of marketing at S2C. “However, the cost of doing a chip using the advanced nodes (such as 20/16/14nm) is so high that fewer companies or applications can afford such an investment. That means there will be more and more designs that almost have to rely on FPGAs, even for high-performance, low-power applications.”
Bryan Ramirez, FPGA Product Technologist at Mentor Graphics, noted that as mobile devices have proliferated and changed the world as we know it, it has forced wireless networks to grow substantially to handle the increased traffic demands, carving out a new role for FPGAs. “More and more applications and compute infrastructure are moving to the cloud, putting a significant burden on the telecommunications infrastructure. System bandwidth and processing capability have been forced to grow rapidly to satisfy these demands. FPGAs are well suited to these applications and play a critical role in feeding today’s data hungry and always-connected societies. FPGAs have always excelled at signal processing because of their massive parallelization capabilities, and thus are widely deployed in wireless networks migrating from 3G through LTE Advanced to NxN LTE. Because of the growth of mobile in urban areas, FPGAs are used in mobile backhaul infrastructure providing low-cost, low-power wireless aggregation of cells. The signal processing capabilities of FPGAs have led to wide adoption in digital video applications as the industry transitions from 1080p HD, to 4k and eventually 8k formats.”
In addition, he said, as more and more “cord cutters” leave behind traditional TV and cable options in favor of streaming video over the Internet, that shift is putting possibly even greater strain on the networking infrastructure.
These higher-performing networks are often enabled by FPGAs. As wired networking performance requirements grow from 100Gb/s to 400Gb/s and up to 1Tb/s, FPGAs are often the only viable solution at these bandwidths for applications like traffic managers and OTN transponders, Ramirez said. “It used to be that performance and area were the only QoR (quality of results) metrics of concern when developing FPGA designs. In the past few years, power has become an increasingly important concern, but power is more than just a concern for extending battery life. It affects the usable performance of the device, poses thermal challenges, reduces reliability, and increases operational expenses as power demands increase. Power is a concern for any application, and thus power consumption in FPGAs has become a primary factor in deciding which is the right device for the application.”
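Those line rates hint at why FPGAs’ parallelism fits the job: since fabric clocks top out in the hundreds of megahertz, a terabit link can only be met by a very wide datapath. The sizing sketch below assumes a 500 MHz fabric clock and power-of-two bus widths, both illustrative choices:

```python
# Back-of-the-envelope datapath sizing for the line rates mentioned above.
# At a given fabric clock, the bus must be at least throughput/clock bits
# wide; we round up to the next power of two, as bus widths typically are.
import math

def datapath_bits(line_rate_bps, fabric_clock_hz):
    """Minimum bus width in bits, rounded up to the next power of two."""
    raw = line_rate_bps / fabric_clock_hz
    return 2 ** math.ceil(math.log2(raw))

for rate in (100e9, 400e9, 1e12):
    print(f"{rate / 1e9:6.0f} Gb/s -> {datapath_bits(rate, 500e6)}-bit bus at 500 MHz")
```

At 1Tb/s a 500 MHz fabric needs a 2,048-bit bus; processing that many bits every cycle is exactly the kind of wide, parallel structure FPGAs provide and fixed-function processors do not.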
The problem is that the additional logic and functionality needed to make FPGAs flexible and reprogrammable come at the cost of increased power consumption and reduced clock frequencies compared to the ASICs that they are often trying to displace, he explained. “FPGA vendors are constantly innovating to push the total power lower with each successive generation of devices. This is accomplished through a combination of lower power semiconductor processes, improving software to be power-aware in how it implements the design and makes tradeoffs of power vs. performance vs. area, providing specially binned ‘low power’ devices, and hardening high performance IP blocks. FPGAs are also beginning to implement ASIC-style power reduction techniques like clock gating. Finally, size and flexibility of FPGAs are enabling overall system power to be reduced by integrating multi-device solutions into a single FPGA or SoC FPGA.”
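The clock-gating payoff Ramirez mentions follows from the first-order CMOS dynamic power model, P = α·C·V²·f, where gating idle registers lowers the effective switching activity α. The numbers below are purely illustrative, not figures for any real device:

```python
# First-order CMOS dynamic power model: P = alpha * C * V^2 * f.
# Clock gating does not change C, V, or f; it lowers the effective
# switching activity (alpha) by keeping idle registers from toggling.
# All parameter values are illustrative assumptions.

def dynamic_power(alpha, c_farads, v_volts, f_hz):
    return alpha * c_farads * v_volts**2 * f_hz

C = 2e-9   # effective switched capacitance, 2 nF (illustrative)
V = 0.9    # core voltage, volts
F = 300e6  # clock frequency, Hz

ungated = dynamic_power(alpha=0.15, c_farads=C, v_volts=V, f_hz=F)
gated   = dynamic_power(alpha=0.06, c_farads=C, v_volts=V, f_hz=F)  # idle clocks gated off

print(f"Ungated: {ungated:.3f} W, gated: {gated:.3f} W "
      f"({1 - gated / ungated:.0%} dynamic-power reduction)")
```

Because α multiplies the whole expression, the dynamic-power saving scales directly with how much idle switching the gating removes, which is why it has long been a staple ASIC technique now migrating into FPGAs.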
Bottom line: FPGAs are still not the best choice for every application, but they do continue to mature in sophistication and usefulness.