A 50% increase in data rate is on the horizon for server memory.
Looking forward to 2022, the first of the DDR5-based servers will hit the market with RDIMMs running at 4800 megatransfers per second (MT/s). This is a 50% increase in data rate over the top-end 3200 MT/s DDR4 RDIMMs in current high-performance servers. DDR5 memory incorporates a number of innovations, such as Decision Feedback Equalization (DFE) and a new DIMM architecture, that enable this speed-grade jump and support future scaling.
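As a quick back-of-the-envelope illustration, here is a minimal Python sketch of what that rate bump means for peak per-DIMM bandwidth, assuming 64 data bits (8 bytes) transferred per beat and ideal peak rates rather than sustained throughput:

```python
# Back-of-the-envelope comparison of DDR4-3200 and DDR5-4800 peak rates.
# Assumes 64 data bits (8 bytes) per transfer; peak, not sustained, bandwidth.

def peak_bandwidth_gbs(transfer_rate_mts, bytes_per_transfer=8):
    """Peak bandwidth in GB/s for a given transfer rate in MT/s."""
    return transfer_rate_mts * bytes_per_transfer / 1000

ddr4 = peak_bandwidth_gbs(3200)  # 25.6 GB/s
ddr5 = peak_bandwidth_gbs(4800)  # 38.4 GB/s

print(f"DDR4-3200 peak: {ddr4:.1f} GB/s")
print(f"DDR5-4800 peak: {ddr5:.1f} GB/s")
print(f"Data rate increase: {(4800 / 3200 - 1) * 100:.0f}%")  # 50%
```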
One major DDR5 change compared to DDR4 is a reduction in operating voltage (VDD), which supports delivering higher performance while maintaining the power envelope. With DDR5, the voltage drops from 1.2 V to 1.1 V. In addition, Command/Address (CA) signaling changes from SSTL to PODL, which has the advantage of burning no static power when the pins are parked in the high state.
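To see why the lower VDD helps hold the power envelope, here is a simplified sketch assuming dynamic power scales roughly with VDD squared; real DRAM power also includes static, refresh, and I/O terms that this ignores:

```python
# Simplified model: dynamic (switching) power scales roughly with VDD squared.
# Real DRAM power also includes static, refresh, and I/O components not shown here.

vdd_ddr4 = 1.2  # volts
vdd_ddr5 = 1.1  # volts

relative_dynamic_power = (vdd_ddr5 / vdd_ddr4) ** 2
print(f"Relative dynamic power at the same clock: {relative_dynamic_power:.2f}")
# ~0.84, i.e. roughly 16% less switching power, leaving headroom for the higher data rate.
```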
DDR5 also incorporates a new power architecture. With DDR5 RDIMMs, power management moves from the motherboard to the RDIMM itself. DDR5 RDIMMs will have a 12-V power management IC (PMIC) on the module, allowing for better granularity of system power loading. The PMIC generates and distributes the 1.1 V VDD supply, helping with signal integrity and noise through better on-DIMM control of the power supply.
Another major change with DDR5 is a new RDIMM channel architecture. DDR4 RDIMMs have a 72-bit bus, composed of 64 data bits plus eight ECC bits. With DDR5, each RDIMM will have two channels, each 40 bits wide: 32 data bits plus eight ECC bits. While the total data width is the same (64 bits), having two smaller independent channels improves memory access efficiency. So not only do you get the benefit of the speed bump with DDR5, the benefit of that higher MT/s rate is amplified by greater efficiency.
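A small sketch of the access arithmetic, assuming the DDR4 BL8 and DDR5 BL16 burst lengths from the JEDEC specs (not stated above), shows how each narrower subchannel still delivers a full 64-byte cache line per burst:

```python
# How two independent 32-bit (data) DDR5 subchannels still deliver a full
# 64-byte cache line per access. Burst lengths (DDR4 BL8, DDR5 BL16) come
# from the JEDEC specs, not from the text above.

def bytes_per_burst(data_bits, burst_length):
    return (data_bits // 8) * burst_length

ddr4_burst = bytes_per_burst(data_bits=64, burst_length=8)    # one 72-bit DDR4 channel
ddr5_burst = bytes_per_burst(data_bits=32, burst_length=16)   # one 40-bit DDR5 subchannel

print(f"DDR4 bytes per burst: {ddr4_burst}")                   # 64
print(f"DDR5 bytes per burst (per subchannel): {ddr5_burst}")  # 64
# Two subchannels can service two independent 64-byte accesses at once,
# which is where the efficiency gain comes from.
```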
DDR5 also supports higher capacity DRAM devices. With DDR5 buffer chip DIMMs, the server or system designer will ultimately be able to use densities of up to 64 Gb DRAMs in a single-die package (SDP). DDR4 maxes out at 16 Gb DRAM in an SDP. DDR5 adds features such as on-die ECC, error transparency mode, post-package repair, and read and write CRC modes to support these higher-capacity DRAMs. Higher capacity devices translate directly to higher capacity RDIMMs: while DDR4 RDIMMs can reach 64 GB using SDP devices, DDR5 SDP-based RDIMMs will ultimately quadruple that to 256 GB.
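For illustration only, the capacity math works out as follows, assuming x4 device widths and a dual-rank module organization (neither of which is specified above):

```python
# Illustrative capacity math only; the x4 device width and dual-rank
# organization are assumptions for this example, not details from the post.

def rdimm_capacity_gb(device_density_gb, device_width_bits, data_bits, ranks):
    devices_per_rank = data_bits // device_width_bits
    return (device_density_gb / 8) * devices_per_rank * ranks

ddr4 = rdimm_capacity_gb(device_density_gb=16, device_width_bits=4, data_bits=64, ranks=2)
ddr5 = rdimm_capacity_gb(device_density_gb=64, device_width_bits=4, data_bits=64, ranks=2)

print(f"DDR4 RDIMM with 16 Gb SDP devices: {ddr4:.0f} GB")  # 64 GB
print(f"DDR5 RDIMM with 64 Gb SDP devices: {ddr5:.0f} GB")  # 256 GB
```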
While we’re still on the cusp of the introduction of the first DDR5-based servers, the demand for greater bandwidth is insatiable. Advanced workloads running in the data center want all the bandwidth they can get and then some. So work is already well underway to support the next higher speed grade.
Rambus has announced that we are sampling a 5600 MT/s DDR5 Registering Clock Driver (RCD) for the next wave of server main memory. The RCD is the conductor for the orchestra of DDR5 DRAM devices on the RDIMM. It provides the control plane Command and Address signals as well as the clock for synchronous operation. For a double data rate speed of 5600 MT/s, the RCD must provide a 2800 MHz clock.
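The clock arithmetic is straightforward: with double data rate signaling, data moves on both clock edges, so the clock runs at half the transfer rate. A trivial sketch:

```python
# Double data rate signaling transfers data on both clock edges,
# so the clock frequency is half the transfer rate.

def ddr_clock_mhz(transfer_rate_mts):
    return transfer_rate_mts / 2

print(f"DDR5-4800 clock: {ddr_clock_mhz(4800):.0f} MHz")  # 2400 MHz
print(f"DDR5-5600 clock: {ddr_clock_mhz(5600):.0f} MHz")  # 2800 MHz
```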
As data and clock speeds increase, so too do the signal and power integrity (SI/PI) challenges. This gets even tougher given that DDR5 lowers the operating voltage to 1.1 V to maintain the power envelope of the RDIMM. A lower voltage means lower design margin to achieve error-free operation.
Fortunately, Rambus has over 30 years of high-performance memory experience and is renowned for SI/PI expertise (we literally wrote the book). We leveraged this technology heritage to deliver another in a long line of industry firsts with this RCD product milestone. 5600 MT/s represents another big jump in memory bandwidth, and with bandwidth demands what they are, expect many more to come.