The Battle To Embed The FPGA

It has always seemed like a good idea but nobody has managed to pull it off so far. Will one of the recent contenders be successful?


There have been many attempts to embed an FPGA into chips in the past, but the market has failed to materialize, or the solutions have failed to inspire. An early example was Triscend Corporation, founded in 1997 and acquired by Xilinx in 2004. It integrated a CPU (which varied from an ARM core to an 8051), memory, and the programmable fabric into a microcontroller. This allowed the function to be changed or extended to whatever purpose the buyer intended. It traded a higher unit cost for lower NRE.

Another example was Stretch, which attempted to build extensions for Tensilica products. The goal here was to extend the processor architecture so that it could be tailored automatically by software programmers to address any emerging compute requirements in applications as diverse as consumer, medical imaging, military and networking equipment. It proved to be too difficult a task.

“There is an impedance mismatch – you optimize architectures differently in an FPGA and an ASIC,” says Drew Wingard, chief technology officer at Sonics. “FPGAs tend to be frequency challenged, so you tend to go wider and slower and exploit parallelism. Typically, it means that it is difficult to tightly couple an FPGA in the execution path of a CPU. This is what Stretch found out. You cannot afford to slow down the execution path of a CPU. So for an accelerator, it has to be loosely coupled, not something that is adding primitive instructions that have to complete in a single cycle. Instead it would need a variable number of cycles to complete.”

Geoffrey Tate, CEO of Flex Logix, adds a larger list of failed products: “Actel, IBM, Leopard Logix, LSI Logic, Tabula, Velogix, and probably more. The reason why they were not successful is not clear, but it seems to be a combination of a lack of focus (chip vs. IP model), low density (not as good as FPGA chips), lack of availability in the process node required, lack of willingness to develop a new market, and poor software.”

Perhaps the most compelling reason for adopting embedded FPGAs in the past arose when you were developing an ASIC that implemented a standard that wasn’t quite finished. Everyone was anxious to get the product out, but the spec was not yet finalized. So you could put part of it into an FPGA, and as the spec continued to evolve you could update the logic appropriately. The programmable fabric was then designed out in the next generation of the product to reduce cost.

Another reason was bug fixing. If you had a piece of logic that you were uncomfortable with and could not completely verify, then you could add some insurance. An example of this was DAFCA, which placed small amounts of reprogrammable logic around a chip that could be used to fix simple problems, such as a parity inversion, or to isolate a block that did not work. It also could act as on-chip instrumentation.

To date, solutions that deploy an external FPGA have been more successful than those that attempted to integrate it. Robert Blake, president and CEO of Achronix, provides his view of the evolving role of the FPGA. “In the early ’80s the chips were fairly small, low in performance, and used as TTL glue logic integration. They replaced multiple logic parts on the board. That brought the market to around $500M. In the next phase, FPGAs grew with Moore’s Law and got to be bigger, faster and cheaper. As the complexity grew, the main application that took them to $5B was associated with connectivity. I/O standards did not always connect well, so FPGAs were used to bridge the connectivity. FPGAs were used broadly in networking infrastructure equipment.”

Blake sees that we are entering a new phase. “They will start to get used as a co-processor. FPGAs are very good at building arbitrary width datapath engines. This means they can be used for datacenter exploration, adding significant flexibility to software-defined networks. And as 5G infrastructure rolls out, the FPGAs will be used heavily for the digital front end and for customization of different marketplaces.”

What has changed is that migration to the very latest process nodes is slowing down in many markets, and this is changing some of the financial equations. Earlier process nodes now offer many levels of interconnect, which enables embedded FPGAs to be denser. At the same time, longer product iteration times mean that more flexibility may be required in chips to extend their lifetimes.

Meanwhile, the process nodes now being favored, such as 40nm and 28nm, have large amounts of logic available, making the area overhead of an embedded FPGA less of an issue.

But I can’t help thinking that it is the software toolchain that once again will make or break these solutions. They have to be easy to program, and the amount of time, money, and effort that Xilinx and Altera have spent on their toolchains is an indication of how difficult this is. Can an FPGA be wrapped into an IP model, with all of the necessary software, and still be a viable business? It will clearly cost more than a CPU to develop and maintain, and even more than variable instruction-set processors such as Tensilica and ARC.

Good luck to all of them. But I hope they constantly look over their shoulders at the two big guys who may just start making FPGA chiplets available as soon as 2.5D integration gets a little cheaper.

Related Stories
Embedded FPGAs Going Mainstream?
Programmable devices are being adopted in more market segments, but they still haven’t been included in major SoCs. That could change.
CPU, GPU, Or FPGA?
Need a low-power device design? What type of processor should you choose?
FPGA Prototyping Gains Ground
The popular design methodology enables more sophisticated hardware/software verification before first silicon becomes available.