DDR5 Memory Enables Next-Generation Computing

Key changes in the latest generation of memory and some new design challenges.


Computing main memory transitions may only happen once a decade, but when they do, it is an exciting time for the industry. When JEDEC announced the publication of the JESD79-5 DDR5 SDRAM standard in July 2020, it signaled the beginning of the transition to DDR5 server and client dual-inline memory modules (server RDIMMs, client UDIMMs and SODIMMs). We are now firmly on this path of enabling the next generation of servers with DDR5 memory.

So, what exactly are some of the key differences between DDR5 memory and the previous-generation DDR4? There is always demand for higher memory performance, and DDR5 is the latest answer to the industry's insatiable need for more bandwidth and capacity. While DDR4 DIMMs top out at 3.2 gigatransfers per second (GT/s) at a clock rate of 1.6 gigahertz (GHz), initial DDR5 DIMMs deliver a 50% bandwidth increase to 4.8 GT/s. DDR5 is ultimately slated to scale to a data rate of 8.4 GT/s.
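For a rough sense of what these data rates mean at the module level, the back-of-the-envelope sketch below (not from the article; it assumes a standard 64-bit data bus and ignores ECC bits and protocol overhead) converts transfer rates into theoretical peak bandwidth per DIMM.

```python
# Hypothetical calculation: peak bandwidth of a 64-bit (8-byte) DIMM data
# bus at the DDR4 and DDR5 data rates quoted above. ECC and protocol
# overhead are ignored for simplicity.

BUS_WIDTH_BYTES = 8  # 64-bit data bus

for name, gigatransfers_per_s in [("DDR4-3200", 3.2),
                                  ("DDR5-4800", 4.8),
                                  ("DDR5-8400", 8.4)]:
    peak_gb_per_s = gigatransfers_per_s * BUS_WIDTH_BYTES
    print(f"{name}: {peak_gb_per_s:.1f} GB/s peak per DIMM")

# DDR4-3200: 25.6 GB/s, DDR5-4800: 38.4 GB/s, DDR5-8400: 67.2 GB/s
```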

Another big change is the reduction in operating voltage (VDD), which is designed to help offset the power increase that comes with running at higher speeds. With DDR5, the supply voltage for the DRAM and the registering clock driver (RCD) drops from 1.2 V down to 1.1 V. In addition, command/address (CA) signaling changes from SSTL to PODL, which has the advantage of burning no static power when the pins are parked in the high state.
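To see why the voltage reduction matters, the rough sketch below applies the first-order rule of thumb that CMOS dynamic power scales with the square of the supply voltage and linearly with frequency. The numbers are illustrative assumptions, not JEDEC figures, but they show how the 1.1 V supply partially offsets the cost of the higher transfer rate.

```python
# Illustrative first-order model (an assumption, not a specification):
# dynamic power ~ C * V^2 * f, relative to a DDR4-3200 baseline
# (1.2 V supply, 1.6 GHz clock).

def relative_dynamic_power(vdd, clock_ghz, vdd_ref=1.2, clock_ref_ghz=1.6):
    """Dynamic power relative to the DDR4-3200 baseline."""
    return (vdd / vdd_ref) ** 2 * (clock_ghz / clock_ref_ghz)

print(f"Voltage drop alone (same clock): {relative_dynamic_power(1.1, 1.6):.2f}x")  # ~0.84x
print(f"DDR5-4800 (2.4 GHz) at 1.1 V:    {relative_dynamic_power(1.1, 2.4):.2f}x")  # ~1.26x
print(f"DDR5-4800 if kept at 1.2 V:      {relative_dynamic_power(1.2, 2.4):.2f}x")  # 1.50x
```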

Third, there is a major change in power architecture. With DDR5 RDIMMs, power management moves from the motherboard to the memory module. DDR5 RDIMMs carry a power management IC (PMIC) on the DIMM, fed from a 12-V input, which removes the need to overprovision motherboard voltage regulators for the maximum load case and reduces IR drop.
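The IR-drop benefit follows directly from delivering the same power at a higher voltage and lower current. The sketch below uses purely hypothetical numbers (an assumed 15-W module and an assumed 5-mΩ board/connector path, neither taken from the article) to illustrate the difference between regulating on the motherboard at 1.1 V and shipping 12 V to an on-DIMM PMIC.

```python
# Hypothetical illustration: same delivered power, two rail voltages.
# Higher rail voltage means lower current through the motherboard path,
# so less IR drop across the same parasitic resistance.

DIMM_POWER_W = 15.0           # assumed module power
PATH_RESISTANCE_OHM = 0.005   # assumed 5 mOhm board + connector path

for rail_v in (1.1, 12.0):
    current_a = DIMM_POWER_W / rail_v
    ir_drop_v = current_a * PATH_RESISTANCE_OHM
    print(f"{rail_v:>5.1f} V rail: {current_a:5.2f} A, "
          f"IR drop {ir_drop_v * 1000:6.2f} mV "
          f"({100 * ir_drop_v / rail_v:.2f}% of the rail)")

# 1.1 V rail: ~13.6 A and ~68 mV of drop (~6% of the rail);
# 12 V rail:   1.25 A and ~6 mV of drop (~0.05% of the rail).
```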

As we can see, DDR5 brings with it several major performance advantages. However, with these advantages come some new design challenges that system architects and designers need to be aware of to get the most from this new generation of memory.

For DDR4 designs, the primary signal integrity challenges were on the double-data-rate DQ bus, with less attention paid to the lower-speed CA bus. For DDR5 designs, even the CA bus will require special attention to signal integrity. In DDR4, there was consideration of using decision feedback equalization (DFE) to improve the DQ data channel. But for DDR5, the RCD's CA bus receivers will also require DFE options to ensure good signal integrity.
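To give a flavor of what DFE does, the minimal sketch below implements a single-tap decision feedback equalizer: the previously decided bit, scaled by a tap weight, is subtracted from each incoming sample to cancel first post-cursor inter-symbol interference. This is an illustrative model only; actual RCD receiver equalization is implementation-specific.

```python
# Minimal 1-tap decision feedback equalizer sketch (illustrative only).

def dfe_1tap(samples, tap_weight, threshold=0.0):
    """Return decided bits (+1/-1) for a stream of received voltage samples."""
    decisions = []
    prev_bit = 0.0  # no decision history before the first sample
    for sample in samples:
        # Subtract the ISI contribution predicted from the previous decision.
        corrected = sample - tap_weight * prev_bit
        bit = 1.0 if corrected > threshold else -1.0
        decisions.append(bit)
        prev_bit = bit
    return decisions

# Example: a channel that adds 30% of the previous bit as post-cursor ISI.
tx_bits = [1, 1, -1, -1, 1, -1, 1, 1]
rx_samples = [b + 0.3 * p for b, p in zip(tx_bits, [0] + tx_bits[:-1])]
print(dfe_1tap(rx_samples, tap_weight=0.3))  # recovers tx_bits
```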

The power delivery network (PDN) on the motherboard, up to and including the DIMM with its PMIC, is another consideration. Given the higher clock and data rates, designers will need to make sure that the PDN can handle the load of running at higher speed while maintaining good signal integrity.
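One common way to size a PDN is the target impedance rule of thumb: allowed voltage ripple divided by the expected transient current step. The sketch below is an assumption-laden illustration (the 3% ripple budget and 5-A load step are hypothetical, not DDR5 requirements), but it shows why faster, larger current transients push the allowable PDN impedance down.

```python
# Target impedance rule of thumb (illustrative, not a DDR5 specification):
# Z_target = allowed ripple voltage / transient current step.

def target_impedance(rail_v, ripple_pct, transient_current_a):
    """Maximum PDN impedance that keeps ripple within the allowed percentage."""
    return (rail_v * ripple_pct / 100.0) / transient_current_a

# Hypothetical 1.1 V rail, 3% ripple budget, 5 A load step:
print(f"Z_target = {target_impedance(1.1, 3, 5) * 1000:.1f} mOhm")  # ~6.6 mOhm
```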

The DIMM connectors, now surface mount, will also have to carry the new clock and data rates from the motherboard to the DIMM. For system designers, more emphasis must be placed on designing for electromagnetic interference and compatibility (EMI and EMC).

The Rambus DDR5 DIMM memory interface chipset, consisting of DDR5 Registering Clock Drivers (RCD), Serial Presence Detect Hubs (SPD Hub) and Temperature Sensors (TS), is tailored to enable designers to harness the full advantages of DDR5, all while handling the signal integrity and power challenges of higher data and clock speeds. Rambus was the first in the industry to deliver a DDR5 RCD operating at 5600 MT/s and is continually advancing the performance of its DDR5 solutions to meet growing market needs. Rambus has now pushed DDR5 RCD performance to 6400 MT/s with our latest generation device.
