CPU, GPU, or FPGA?

Need a low-power device design? What type of processor should you choose?

Nvidia’s new GeForce GTX 1080 gaming graphics card is a piece of work.

Employing the company’s Pascal architecture and featuring chips made with a 16nm finFET process, the GTX 1080’s GP104 graphics processing unit boasts 7.2 billion transistors, runs at 1.6 GHz, and can be boosted to 1.733 GHz. The die size is 314 mm², 21% smaller than that of its GeForce GTX 980 predecessor, which was fabricated with a 28nm process. Nvidia spent more than $2 billion developing the technology. And the graphics card runs on 180 watts of power.
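(For reference, the GTX 980’s GM204 die measures roughly 398 mm², which is where the 21% figure comes from: 314 / 398 ≈ 0.79.)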

The GeForce GTX 1080 speaks to the power/performance balance that can be struck using a graphics processing unit (GPU) architecture. But how does that compare with the attributes of a standard central processing unit (CPU) architecture, embodied in microprocessors and applications processors? Or with the field-programmable gate array (FPGA), which has found its way into high-performance computing systems, much like the GPU?

Mark Papermaster, AMD’s senior vice president of technology and engineering, touted the era of “Moore’s Law Plus” in a keynote address at DAC. The immersive computing experience runs on CPUs, GPUs, and accelerators, he said. Realizing this computing power is “going to take system design.”

There are advantages to each type of compute engine. CPUs offer high memory capacity at low latency. GPUs have the highest per-pin memory bandwidth. And FPGAs are designed to be very general.

But each also has its limitations. CPUs require more integration at advanced process nodes. GPUs are limited by the amount of memory that can be put on a chip.

“FPGAs can attach to the same kind of memories as CPUs,” said Steven Woo, vice president of enterprise solutions technology and distinguished inventor at Rambus. “It’s a very flexible kind of chip. For a specific application or acceleration, they can provide improved performance and better [energy] efficiency.”

Intel’s $16.7 billion acquisition of Altera, completed late last year, points to the flexible computing acceleration that FPGAs can offer. Microsoft employed FPGAs to improve the performance of its Bing search engine because of the balance between cost and power. But using FPGAs to design a low-power, high-performance device isn’t easy.

“It’s harder and harder to get one-size-fits-all,” Woo said. “Some design teams start with an FPGA, then turn it into an ASIC to get a hardened version of the logic they put into an FPGA. They start with an FPGA to see if that market grows. That could justify the cost of developing an ASIC.”

In addition to the industry-standard x86 architecture used in many microprocessors, ARM’s architectures dominate mobile devices and are being refined for data centers and servers. There are rivals to ARM cores, including the open-source RISC-V instruction set architecture, the POWER CPU architecture from OpenPOWER, and competition from AMD in the x86 arena. But ultimately the choice of device depends on the use case and applications.

“It’s a cost-performance-power balance,” Woo said. “CPUs are really good mainstays, very flexible.” When it comes to the software programs running on them, “it doesn’t have to be vectorized code.”
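To make Woo’s point about vectorization concrete, here is a minimal C sketch (the saxpy functions, the array size, and the OpenMP simd hint are illustrative, not from the article). A CPU runs the plain scalar loop exactly as written, while a GPU or a vector unit wants the same work expressed so that every iteration is independent and the parallelism is explicit.

/* Illustrative sketch: the same "a*x + y" loop written two ways. */
#include <stdio.h>

#define N 1024

/* Plain scalar loop: a CPU executes this directly; no vectorization required. */
static void saxpy_scalar(float a, const float *x, float *y, int n) {
    for (int i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}

/* Same work with every iteration independent plus a SIMD hint; on a GPU
   each iteration would become one thread. The pragma is simply ignored
   when OpenMP is not enabled, so the function still compiles and runs. */
static void saxpy_vectorized(float a, const float *x, float *y, int n) {
#pragma omp simd
    for (int i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}

int main(void) {
    static float x[N], y[N];
    for (int i = 0; i < N; i++) { x[i] = 1.0f; y[i] = 2.0f; }
    saxpy_scalar(2.0f, x, y, N);      /* y[i] = 2*1 + 2 = 4 */
    saxpy_vectorized(2.0f, x, y, N);  /* y[i] = 2*1 + 4 = 6 */
    printf("y[0] = %.1f\n", y[0]);
    return 0;
}

Both versions compute the same result; the difference is how much structure the source code exposes to the hardware, which is the flexibility Woo is pointing to.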

GPUs are much better at graphics, but they are more narrowly targeted than general-purpose CPUs. And FPGAs straddle multiple markets; they are even finding acceptance in data centers and supercomputers these days. “The range of codes people are writing changes every month, each accelerated in its own way,” Woo said. Reprogrammable and reconfigurable, FPGAs can be outfitted for a variety of algorithms “without going through the pain of designing an ASIC.”

Peter Greenhalgh, an ARM fellow and director of technology for the company’s CPU Group, said CPUs represent “the muscle half of the device world.” On the other hand, he pointed out that for high-bandwidth computing, “GPUs are extremely good.”

Programmability, but not everywhere
FPGAs fall into a middle area between CPUs and GPUs. That makes them suitable for industrial, medical, and military devices, where they have thrived. But even there the lines are beginning to blur.

Deepak Boppana, director of product marketing in Lattice Semiconductor’s Industrial and Automotive Division, noted that Lattice historically was an FPGA company. “Today, we have a much broader product portfolio,” he said, noting the addition of application-specific standard parts (ASSPs) to its catalog.

“We are different from other FPGA companies,” Boppana continued. Lattice’s FPGAs offer “lower power, lower cost, and different form factors.”

Lattice has made a point of incorporating connectivity into its product line, according to Boppana. And through its acquisition of Silicon Image, Lattice now has ASSPs for HDMI applications and other uses. The company also offers CrossLink, a programmable ASSP that serves as a bridge chip for cameras and displays. The device operates on less than 10 milliwatts, according to Lattice, while supporting 4K Ultra High-Definition video at up to 12 gigabits per second.
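As a rough sanity check on those numbers (a back-of-the-envelope estimate assuming uncompressed 24-bit RGB at 60 frames per second, an assumption not spelled out in Lattice’s figure): 3840 × 2160 pixels × 24 bits × 60 frames/s ≈ 11.9 Gbps, which lines up with the quoted 12 Gbps link rate.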

Boppana said this chip offers the flexibility of an FPGA with a lot of hardened IP. CPUs and GPUs typically don’t have the right type of interfaces. “CPUs are great for control path. Data path, not so much.”

Intel’s acquisition of Altera is “definitely a trend” in “acceleration of FPGAs for CPUs,” he said. The trend is to pair or integrate CPUs and FPGAs for high-performance computing applications.

Lattice, on the other hand, has “not targeted such heavy-duty acceleration,” Boppana said. “We do something smaller, on the low end.” The company’s FPGAs are aimed at consumer electronics and the Internet of Things, on the opposite end of the spectrum from cloud computing. For customers, “it comes down to their requirements” in choosing a device type, Boppana concluded. So they can pick a CPU for optimal performance. “For a fair amount of performance and wide interfaces, FPGA starts becoming more attractive. A lot of customers use both.”

Chris Rowen, a Cadence fellow and chief technology officer of the company’s IP Group, said off-the-shelf silicon, such as ASSPs and SoCs, goes into many hardware platforms. “The choice is between low volume, high value,” he noted. “Off-the-shelf silicon is more general purpose than you want or can afford.”

Rowen added, “For many of these applications, there are any number of application-specific products, this cellphone app processor or that cellphone app processor.”

So should designers choose a CPU, GPU, or FPGA? “The right answer, in many cases, is none of the above – it’s an ASSP,” Rowen said. “You need a hybrid or an aggregate chip.”

The industry is accustomed to integration at the board level, according to Rowen. “Board-level integration is certainly a necessity in some cases,” he said. The downside of that choice is “relatively high cost, high power [consumption].”

From his vantage point, FPGAs fill the need for low-volume ASSPs, and a CPU architecture is complementary with the FPGA. With GPUs, “you’re really choosing what segment you’re going into.” That would include graphics in two broad categories: high-performance graphics for gaming and other applications, and more embedded-style products, such as automotive and consumer-class devices, with a power budget of 5 to 10 watts. “There was a CPU market 10 to 20 years ago. It has been transformed into something else, targeted for a server or a Windows PC. It isn’t like the old days when I can use a generic chip. That doesn’t work anymore.”

Put simply, the processor market has become targeted, which is reflected in the application. In high-performance computing or supercomputing, for example, GPUs are typically being used in infrastructure-oriented configurations, where the I/O is aimed at scale-out, he said.

Rowen referred to the Intel-Altera combination. “Accelerators attached to infrastructure—that’s where FPGAs play,” he said. “Intel and the Altera team are collaborating there. I certainly expect to see more and more optimization for Intel server chips and FPGAs to work together.”

Intel’s Knights Landing processors are a key element in the company’s high-performance computing strategy.

“It’s very common to use ASICs in high-volume applications, and in some applications that are not high volume but have certain functionality,” he said. The downside is that it can be costly to rely on ASICs to fill a need, and companies always need to calculate the break-even point. Rowen noted there are some alternatives to FPGAs, such as the metal-programmable chips offered by eASIC. “Perhaps you move up to a low-NRE, high-unit ASIC,” he added.
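As a sketch of that break-even arithmetic (the dollar figures here are purely illustrative, not from the article): if an ASIC carries a one-time NRE cost C and then costs a per unit, while the FPGA it replaces costs f per unit, the ASIC pays off once volume exceeds C / (f - a). With, say, $2 million of NRE against a $40 FPGA and a $10 ASIC, the crossover sits near 67,000 units; below that volume the FPGA, or a low-NRE option such as a metal-programmable part, remains the cheaper path.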

From Rowen’s perspective, the design spectrum runs from FPGAs to low-volume ASICs to high-volume ASICs to customer-owned tooling (COT).

So, what will it be: CPU, GPU, FPGA, ASSP, ASIC? The best answer remains: It depends.

Related Stories
How To Choose A Processor
There is no clear formula for what to use where, but there are plenty of opinions about who should make the decision.
The Mightier Microcontroller
MCUs will play an increasingly critical role in cars and IoT devices, but they’re also becoming much more difficult to design.
Rethinking Processor Architectures
General-purpose metrics no longer apply as semiconductor industry makes a fundamental shift toward application-specific solutions.


