GDDR Accelerates Artificial Intelligence And Machine Learning

Getting enough bandwidth to meet the demands of ever more sophisticated AI/ML applications.


The origins of modern graphics double data rate (GDDR) memory can be traced back to GDDR3 SDRAM. Designed by ATI Technologies, GDDR3 made its first appearance in NVIDIA’s GeForce FX 5700 Ultra card, which debuted in 2004. Offering reduced latency and high bandwidth for GPUs, GDDR3 was followed by GDDR4, GDDR5, GDDR5X and the latest generation of GDDR memory, GDDR6.

GDDR6 SGRAM supports a maximum data transfer rate of 16 Gbps. GDDR6 also offers higher densities than previous-generation graphics memory, doubling the 8 Gbps speed of GDDR5 and delivering 5X the 3.2 Gbps speed of DDR4. In addition, GDDR6 operates at the same low external voltage (1.35V) as GDDR5X, although it is based on a dual-channel architecture rather than GDDR5X’s single-channel architecture. GDDR6 also adds a 1.25V I/O voltage option for applications that do not need the maximum speed grade.
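
To put these figures in context, here is a minimal back-of-envelope sketch in Python of peak per-device bandwidth, assuming a standard 32-bit-wide GDDR6 device organized as two 16-bit channels; the numbers are illustrative, not vendor specifications.

# Peak per-device GDDR6 bandwidth, assuming a 32-bit-wide device
# (two independent 16-bit channels) running at 16 Gbps per pin.

def device_bandwidth_gb_s(pin_rate_gbps: float, width_bits: int = 32) -> float:
    """Peak bandwidth in GB/s (1 byte = 8 bits)."""
    return pin_rate_gbps * width_bits / 8

print(f"GDDR6 at 16 Gbps: {device_bandwidth_gb_s(16):.0f} GB/s per device")  # 64 GB/s
print(f"Pin-rate ratio vs. 3.2 Gbps DDR4: {16 / 3.2:.0f}x")                  # 5x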

Artificial intelligence and machine learning
GDDR6 currently powers a wide range of applications across multiple verticals, including artificial intelligence (AI) and machine learning (ML). For example, Achronix’s recently launched Speedster7t FPGA family is designed for AI/ML and high-bandwidth workloads. The new lineup of FPGAs packs a 2D network-on-chip (NoC), a high-density array of machine learning processors (MLPs) and support for GDDR6 memory.

More specifically, each GDDR6 memory controller can support 512 Gbps of bandwidth. In real-world terms, this means the up to 8 GDDR6 controllers in a Speedster7t device can support an aggregate GDDR6 bandwidth of 4 Tbps! This capability is essential for high-performance compute and machine learning systems that require high off-chip memory bandwidth to source and buffer multiple data streams. Indeed, as Rich Wawrzyniak, principal market analyst for ASIC & SoC at Semico Research, recently noted: “In AI/ML applications, memory bandwidth is everything.”
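
The aggregate figure follows directly from the per-controller number quoted above; the short sketch below walks through the arithmetic, treating the controller count as the maximum configuration rather than a fixed value.

# Aggregate GDDR6 bandwidth for a device with up to 8 controllers,
# each supporting 512 Gbps.

PER_CONTROLLER_GBPS = 512
NUM_CONTROLLERS = 8  # maximum configuration described above

aggregate_gbps = PER_CONTROLLER_GBPS * NUM_CONTROLLERS
print(f"{aggregate_gbps} Gbps = {aggregate_gbps / 1000:.1f} Tbps "
      f"= {aggregate_gbps / 8:.0f} GB/s")  # 4096 Gbps, roughly 4 Tbps, or 512 GB/s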

The future of GDDR
As we pointed out earlier in this article, GDDR6 currently runs at speeds of up to 16 Gbps, although the semiconductor industry is already eyeing higher speeds for future GDDR standards. If historical trends hold, the next target will likely be on the order of 32 Gbps at the high end, a level necessary to meet the demands of ever more sophisticated AI/ML applications.

Subsequent generations of GDDR are also expected to deliver higher bandwidth for use cases such as self-driving cars with increasing levels of autonomous capability. To be sure, most vehicles currently on the road use DRAM solutions with bandwidths of less than 60 GB/s. However, future self-driving cars and trucks that assimilate and process terabytes of data will require memory that supports significantly higher bandwidth. This additional bandwidth will help enable autonomous vehicles, which rely on incredibly complex AI/ML algorithms, to rapidly execute massive calculations and safely implement real-time decisions on roads and highways.
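
As a rough, hedged illustration of why this matters, the sketch below compares how long it takes to move one terabyte of sensor data at the sub-60 GB/s bandwidth typical of today’s automotive DRAM versus a GDDR6 subsystem of the kind described above (512 GB/s); the figures are back-of-envelope only.

# Time to move 1 TB (decimal) at two memory bandwidth levels:
# today's automotive DRAM (<60 GB/s) vs. a 512 GB/s GDDR6 subsystem.

TERABYTE_GB = 1000

for label, bw_gb_s in [("automotive DRAM (~60 GB/s)", 60),
                       ("GDDR6 subsystem (512 GB/s)", 512)]:
    print(f"{label}: {TERABYTE_GB / bw_gb_s:.1f} s per TB")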

The challenges of GDDR
As we previously noted, GDDR6 has emerged as an effective mainstream memory solution for high-performance AI/ML and networking applications. Packaged like standard DDR memory, GDDR is designed to move data at extremely high rates. From an architectural perspective, GDDR offers a balanced trade-off between bandwidth and power efficiency, along with ease of use, reasonable cost and reliability. Perhaps most importantly, GDDR relies on traditional manufacturing methods that do not involve intricate stacking techniques.

However, GDDR does present signal integrity challenges for systems designers. Put simply, signal integrity (SI) is a set of measures of the quality of electrical signals, which are subject to the effects of noise, distortion and loss. As speeds increase to meet the demands of complex AI/ML applications, GDDR memory will continue to pose a range of signal integrity challenges for systems designers. Therefore, engineering the signal for GDDR requires careful simulation of the entire system at low bit error rates (BERs), including power, jitter and circuit equalization.
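
To give a feel for the scale involved, the short sketch below converts a target BER into an expected error interval at GDDR6 pin rates; the 1e-16 target used here is an illustrative assumption chosen for the arithmetic, not a value taken from any specification.

# Why low-BER simulation matters: expected error interval per data pin
# at 16 Gbps, using an assumed (illustrative) target BER of 1e-16.

PIN_RATE_BPS = 16e9   # 16 Gbps per data pin
TARGET_BER = 1e-16    # assumed target bit error rate (illustration only)

bits_per_day = PIN_RATE_BPS * 86_400
print(f"Bits per pin per day: {bits_per_day:.2e}")                          # ~1.38e15
print(f"Expected errors per pin per day: {bits_per_day * TARGET_BER:.2f}")  # ~0.14
print(f"Mean time between errors per pin: "
      f"{1 / (PIN_RATE_BPS * TARGET_BER) / 3600:.0f} hours")                # ~174 hours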

From our perspective, it is important to work with an experienced GDDR engineering team that makes extensive use of modeling and simulation tools as well as debug interfaces. Moreover, a GDDR engineering team should include package and PCB design experts and layout gurus, along with signal integrity and power integrity specialists, to ensure a successful system implementation.


