The memory developed for high-end graphics cards isn’t just for gaming anymore.
The origins of graphics double data rate (GDDR) memory can be traced to the rise of 3D gaming on PCs and consoles. The first GPUs used single data rate (SDR) and double data rate (DDR) DRAM, the same memory used for CPU main memory. The quest for higher frame rates at higher resolutions drove the need for a memory solution tailored specifically to graphics workloads. The commercial success of gaming PCs and consoles, shipping in the tens of millions of units annually, made an application-specific graphics memory viable.
First introduced in 1998, GDDR was the high-end graphics card memory, with DDR doing duty for mid-range and entry-level cards. This was the case for nearly two decades, with GDDR taking greater share with each passing generation. It wasn’t until 2017-2018 that new families of GPUs used GDDR memory across the entire product lineup. Today, GDDR6 is the state-of-the-art graphics memory solution, with performance demonstrated to 18 gigabits per second (Gbps) per pin and per-device bandwidth of 72 gigabytes per second (GB/s). At this level of performance, a high-end graphics card with 10-12 GDDR6 devices is pushing toward a terabyte per second of memory bandwidth.
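The per-device and per-card figures above follow from simple interface arithmetic. A minimal sketch in Python, assuming the standard 32-bit-wide (two 16-bit channel) GDDR6 device interface:

```python
# Sketch of the GDDR6 bandwidth arithmetic behind the figures above.
# Assumes a standard 32-bit (2 x 16-bit channel) GDDR6 device interface.

def device_bandwidth(data_rate_gbps: float, bus_width_bits: int = 32) -> float:
    """Per-device bandwidth in GB/s from the per-pin data rate in Gbps."""
    return data_rate_gbps * bus_width_bits / 8  # 8 bits per byte

per_device = device_bandwidth(18)   # 18 Gbps x 32 bits / 8 = 72.0 GB/s
card_total = 12 * per_device        # 12-device card: 864.0 GB/s

print(per_device, card_total)
```

At 72 GB/s per device, a 12-device card lands at 864 GB/s, which is the "pushing toward a terabyte per second" figure cited above.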
GDDR’s commercial success in gaming and graphics has made available a memory solution with a great combination of attributes: high throughput, low latency and high capacity. The volume demand from 3D gaming enables GDDR6 DRAM to be offered at an attractive price-performance point. Leveraging standard manufacturing and packaging processes, GDDR6 has a pedigree of robustness, making it suitable for applications demanding very high levels of reliability.
With demand for higher bandwidth and throughput driven by an exponential rise in data traffic, networking is an obvious new market for GDDR6. Micron has even introduced a GDDR6N product specifically for networking hardware. 5G adds to the flood of data, not only by offering a 10-fold increase in bandwidth for mobile users, but by increasing connection density by 10X and capacity by 100X. This will enable an untethering of AI-powered IoT devices from wired or short-reach Wi-Fi connections. Machine-to-machine communications will accelerate the already rapidly rising data traffic, and GDDR6 will serve in the emerging network devices designed to handle this demand.
Like 3D gaming, AI/ML has an insatiable hunger for faster throughput and more bandwidth. AI/ML training capabilities, measured in the size of the training data sets and the complexity of the neural networks, have been rising at a 10X annual clip for nearly a decade. The output of that training is increasingly sophisticated inference models. These are being deployed at the network edge and, increasingly, in the end devices themselves. AI-powered devices will be delivering packages, protecting our neighborhoods and guiding the operations of our cities, all in real time. The performance of GDDR6, coupled with its track record of high-volume success, makes it the ideal choice for deployment in these millions of AI-powered IoT devices.
There are few AI/ML inferencing applications as demanding as autonomous vehicles. With advanced driver assistance systems (ADAS), reliability is paramount given their responsibility for human safety and comfort. Today’s level 2 systems (partial driving automation), which can perform steering and acceleration under the driver’s supervision, require 60-80 GB/s of memory bandwidth. That requirement can be tackled by a number of memory options, including DDR4 and LPDDR5. Bandwidth needs rise rapidly as we progress to level 3 and eventually toward full autonomy at level 5. At that level, ADAS will need 500 to perhaps as much as 750 GB/s of bandwidth. GDDR6 is the only solution that can deliver the bandwidth, be implemented in a practical and cost-effective fashion, and meet the rigorous reliability requirements of automotive systems.
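A back-of-envelope sketch makes the gap between the automation levels concrete. Assuming 18 Gbps GDDR6 devices at 72 GB/s each (a figure cited earlier, not a requirement stated here), the device counts needed for each bandwidth target work out as follows:

```python
# Rough device-count estimate for the ADAS bandwidth targets above,
# assuming 72 GB/s per GDDR6 device (18 Gbps x 32-bit interface).
import math

def devices_needed(target_gb_per_s: float, per_device_gb_per_s: float = 72.0) -> int:
    """Smallest whole number of devices covering the target bandwidth."""
    return math.ceil(target_gb_per_s / per_device_gb_per_s)

# Level 2 upper bound (~80 GB/s) vs. level 5 range (~500-750 GB/s)
print(devices_needed(80))    # 2 devices
print(devices_needed(500))   # 7 devices
print(devices_needed(750))   # 11 devices
```

A level 2 requirement fits in a couple of devices, while full autonomy calls for a GDDR6 memory system on the scale of a high-end graphics card.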
Born with the big bang of 3D gaming, the universe of applications for GDDR6 memory is expanding rapidly. Common across graphics, networking, AI/ML and ADAS is the demand for higher throughput and greater bandwidth. Cost effectiveness and reliability are key in these applications. GDDR6 answers the call with a great combination of price-performance, ruggedness and a production track record of many millions of devices shipped.
Additional resources:
Webinar: GDDR6 and HBM2E Memory Solutions for AI
White Paper: GDDR6 and HBM2E: Memory Solutions for AI
Web: Rambus GDDR6 PHY and Rambus GDDR6 Controller
Product Briefs: GDDR6 PHY and GDDR6 Controller
Solution Brief: Rambus GDDR6 Interface