Will There Be A DDR5?

As DDR4 adoption begins, there are a number of options proposed for the next generation of off-chip memory.

DDR4 rollouts have begun. And in the DRAM world that raises the question, ‘What comes next?’

The answer isn’t so obvious. While there have been suggestions inside of JEDEC — the Joint Electron Device Engineering Council, which has overseen the standards for double-data-rate synchronous DRAM — to develop a DDR5 standard, it’s not the only solution being considered. And in the minds of some experts, it’s far from the best solution.

What has made DRAM so attractive for years is largely the ability to manufacture it with consistency, allowing memory manufacturers to squeeze every last fraction of a penny out of the process while steadily increasing the peak transfer rate. For DDR3, the maximum rate is 2133 million transfers per second, while for DDR4 it is 3200. Achieving those numbers in practice is unrealistic, however. Most DRAM doesn’t run anywhere close to the peak data rate, largely because of contention over memory and bandwidth resources and increasing RC delay. And so far, only the most advanced servers are using DDR4.
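Those transfer rates translate into peak bandwidth in a straightforward way: transfers per second multiplied by the width of the data bus. A quick sketch of that arithmetic for a standard 64-bit channel (the bus width is assumed here, not stated in the text):

```python
BUS_WIDTH_BYTES = 8  # a standard DIMM channel is 64 bits wide

def peak_bandwidth_gbs(mega_transfers_per_s):
    """Peak theoretical bandwidth in GB/s for a 64-bit channel."""
    return mega_transfers_per_s * BUS_WIDTH_BYTES / 1000

print(peak_bandwidth_gbs(2133))  # DDR3-2133 -> 17.064 GB/s
print(peak_bandwidth_gbs(3200))  # DDR4-3200 -> 25.6 GB/s
```

Real workloads, as the article notes, land well below these ceilings once bank contention and RC delay are factored in.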

“It took 10 years to get DDR4 out of JEDEC,” said Graham Allan, product marketing manager for DDR at Synopsys. “I belong to the camp that says there will not be a DDR5.”

There are good reasons to draw that conclusion. To begin with, DDR4 is harder to work with than DDR3. The big advantages are lower power and higher data transfer speed. DDR4 runs at 1.2 volts compared to 1.5 volts for DDR3, and there is a sleep option to power down the DRAM. But that comes at a price. Latency is higher, there are questions about signal integrity as wires continually get longer—not just with DRAM, but with any off-chip memory using current approaches—and as with all new technology there are still kinks to work out. One such hiccup involves the “row hammer” problem, where repeatedly activating a single row of memory cells can corrupt data in physically adjacent rows. There are workarounds now, but it was an unexpected development for a well-proven technology.
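The workarounds generally amount to watching for rows that are activated too often and refreshing their neighbors early. A minimal sketch of that idea, with an entirely illustrative threshold (real controllers use vendor-specific values and hardware counters):

```python
from collections import defaultdict

# Illustrative activations-per-refresh-window limit, not from any datasheet.
HAMMER_THRESHOLD = 50_000

class RowHammerGuard:
    """Counter-based sketch: flag a row's neighbors for early refresh
    once its ACTIVATE count crosses the threshold."""

    def __init__(self, threshold=HAMMER_THRESHOLD):
        self.threshold = threshold
        self.activate_counts = defaultdict(int)

    def on_activate(self, row):
        """Record one ACTIVATE; return neighbor rows to refresh, if any."""
        self.activate_counts[row] += 1
        if self.activate_counts[row] >= self.threshold:
            self.activate_counts[row] = 0
            return [row - 1, row + 1]  # physically adjacent victim rows
        return []

    def on_refresh_window_end(self):
        # Counts reset each refresh interval, since refresh restores cells.
        self.activate_counts.clear()
```

This is a conceptual model only; production mitigations (such as targeted row refresh) live in the memory controller and DRAM itself, not in software.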

Perhaps the biggest reason for questioning DDR5, though, is that there are alternatives available now that were not being seriously considered in 2005, when JEDEC began working on DDR4.

Stacked die options
At the top of the list of alternatives are high-bandwidth memory (HBM) and the hybrid memory cube (HMC). Both of those technologies are more expensive at the moment than DRAM DIMMs, but they ease the bandwidth and latency bottlenecks. Wires are shorter, and wide interfaces such as Wide I/O-2 provide a bigger channel for data transfer. That in turn can improve overall system efficiency because it takes less energy to drive signals back and forth.
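The trade is width for per-pin speed: a stacked interface can run each pin far slower and still move more data. Using the commonly cited first-generation HBM figures (a 1024-bit interface at roughly 1 Gb/s per pin, assumed here for illustration) against a 64-bit DDR4-3200 channel:

```python
def channel_bandwidth_gbs(width_bits, gbit_per_pin):
    """Aggregate channel bandwidth in GB/s."""
    return width_bits * gbit_per_pin / 8

hbm_stack = channel_bandwidth_gbs(1024, 1.0)  # -> 128.0 GB/s per stack
ddr4_chan = channel_bandwidth_gbs(64, 3.2)    # -> 25.6 GB/s per channel
print(hbm_stack, ddr4_chan)
```

Slower pins over shorter wires are also what deliver the power and signal-integrity gains described below.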

“Both HBM and HMC are true departures from the norm,” said Scott Jacobson, senior product marketing manager at Cadence. “There’s a feeling that if you need a technology change, at least make it worthwhile. With 2.5D and 3D, you get big improvements in power, footprint and signal integrity because you have shorter lines. HBM is getting a lot of customer traction right now. A year ago, it looked like the solution of choice would be HMC in high-performance computing and high-speed networking. HBM is an outgrowth of graphics. But it has evolved dramatically since then. We’re seeing a lot more customer pickup in general purpose high performance.”

One of the reasons for interest in HBM involves physical effects. “As bandwidths increase they generate a lot of noise,” said Aveek Sarkar, vice president of product engineering and support at Ansys-Apache. “The communication between the CPU and memory requires an impedance match. But if you have three clocks and 64-bit switching, the voltage drops can be very different. Power integrity degrades performance and with higher bandwidth it increases jitter, so you have to write the jitter specs tighter and tighter.”

Follow-on technology
That doesn’t mean there isn’t plenty of life left in DDR4, though. In fact, it’s very likely that DDR4 will be around for a very long time.

Frank Ferro, senior director of product management at Rambus, said one of the big advantages of DRAM is compatibility, which makes it harder to replace. But there are some things that can be done to improve the performance, including using low-swing, point-to-point signaling between the processor and DRAM to minimize loss, rather than relying on a multi-drop topology. That also is more efficient, requiring less power to drive the signals, and it can be simpler to implement because timing and equalization can be built into the PHY.

“What we’re talking about is a conventional approach to DRAM but up to double the performance,” Ferro said.

And there could well be other approaches that are not so compatible.

“I do believe there is likely to be a new generation of DDR4 technology that may not be backward-compatible with DDR4, but the primary motivator there is lower power,” said Synopsys’ Allan. “The longer the line, the more you need termination, which consumes current.”

Whether that should be called DDR5 or DDR4.x is up to the consortium of memory companies, which are just now beginning to seriously consider the next steps. One big consideration is what happens with increasing density at 1x nanometer, where new materials will be required to maintain electron mobility and quantum effects begin to take their toll on the movement of those electrons in and out of memory. The introduction of III-V materials into the manufacturing process is brand new, and for a segment of the semiconductor industry that is used to making incremental, evolutionary changes, there are suddenly a lot more considerations on the table than in the past.



  • FuturePlus

    Hello Ed, Thanks for mentioning Row Hammer. It’s seeing a good bit of discussion these days. So much so we built a tool to detect the excessive ACTIVATE commands to a single row that triggers the corruption. We have an entire page dedicated to it on our website. http://www.ddrdetective.com/row-hammer/