Server Memory: Should We Be Concerned About The Power?

How can you get a 40% reduction in power and a 5X increase in capacity? Read on…

After my last blog post, Server Memory: What Drives its Growth, I had a couple of people ask me, “If server memory has increased by so much in the last four years, what effect has that had on the server memory subsystem power consumption?”

It’s a good question. In last month’s blog, I calculated that the maximum memory per CPU has increased from 18GB (2010, highest-end Nehalem 45nm CPU) to 96GB (2014, highest-end Ivy Bridge-EX 22nm CPU), a 5X increase in memory capacity. So the obvious question becomes, has the memory subsystem power consumption also gone up by 5X? Or have the memory manufacturers and standards committees again worked their magic to ensure that a nuclear reactor is not needed to provide the necessary power for data centers?

As you may have guessed, the memory committees have come through: not only can you now get 5X the capacity of a few years ago, but memory subsystem power consumption has actually come down by 40%! And with the memory subsystem no longer sapping all of the power, the industry has a new opportunity to provide even more CPUs and more memory for the data center applications that demand them.

After speaking with some of the gurus in Rambus Labs, I was able to identify some of the underlying reasons for the 40% power savings.

                        2010           2014
Memory device           1Gb DDR3       4Gb DDR3L
DRAM core voltage       1.5 V          1.35 V
Active current (IDD7)   490 mA         220 mA

Notes: 1) Assumes three DIMM sockets per memory channel and two ranks per DIMM. 2) Active current is based on IDD7, as found in publicly available Micron datasheets.

Memory Channels: Increasing the number of memory channels directly increases available bandwidth, but it also increases power consumption; in this case, both rose by 33%.

DRAM Core Voltage: While there is no change in the signaling when transitioning from DDR3 to DDR3L, the DRAM core voltage drops from 1.5 volts to 1.35 volts (a 10% voltage reduction).
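To put a rough number on that voltage drop, here is a minimal sketch of the arithmetic (my simplification, assuming active power P = V × I with the current draw held constant; the actual datasheet currents also differ between devices, as discussed below):

```python
# Effect of the DDR3 -> DDR3L core voltage drop on active power,
# assuming P = V * I with the current draw held constant (a
# simplification; the datasheet currents also change, as shown below).
V_DDR3 = 1.50   # volts, DDR3 core voltage
V_DDR3L = 1.35  # volts, DDR3L core voltage

power_reduction = 1 - V_DDR3L / V_DDR3
print(f"Power reduction from voltage alone: {power_reduction:.0%}")  # ~10%
```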

Process Shrink: Process node transitions are a primary driver of memory power and performance. With each process node shrink, the internal transistor voltage typically decreases (a value that, unlike the core voltage, is not usually stated in a DRAM datasheet). But the shrink also lets the manufacturer put more bits on the die, which pushes power consumption up. Given these conflicting trends, we need to ask: over this four-year period, did the power savings from the lower transistor voltage offset the power increase from the higher bit density?

As shown in the table above, the 1Gb DDR3 memory device of 2010 had an active current of 490 milliamps. The more recent 4Gb DDR3L memory device has an active current of 220 milliamps. That is a 55% decrease in power consumption, even though the newer device is 20% faster (which by itself should have raised power consumption by roughly 20%) and holds four times as many bits. The power reduction from the process shrink “wins”!
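The die-level comparison is easy to check from the quoted datasheet currents. A minimal sketch, using IDD7 active current as a proxy for power (which is how the 55% figure falls out):

```python
# Die-level power savings, using IDD7 active current as a proxy for power.
I_2010_mA = 490  # 1Gb DDR3 active current (IDD7), per Micron datasheet
I_2014_mA = 220  # 4Gb DDR3L active current (IDD7), per Micron datasheet

die_savings = 1 - I_2014_mA / I_2010_mA
print(f"Die-level power savings: {die_savings:.0%}")  # ~55%
```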

But that was the power savings at the die level. Now let’s roll the savings up to the memory subsystem level. The 55% savings at the die level more than offsets the 33% increase from the additional memory channels, resulting in an overall 40% reduction in power consumption.
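The roll-up itself is a single multiplication; here is a sketch of the arithmetic implied above (the 45% of die-level power that remains, scaled by the 33% increase in memory channels):

```python
# Subsystem roll-up: remaining die-level power, scaled by the extra channels.
die_power_remaining = 1 - 0.55  # 45% of die-level power remains
channel_increase = 1 + 0.33     # 33% more memory channels

subsystem_power = die_power_remaining * channel_increase
print(f"Subsystem power vs. 2010: {subsystem_power:.0%}")  # ~60%
print(f"Overall power savings: {1 - subsystem_power:.0%}")  # ~40%
```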

We all know that servers’ demand for memory is unlikely to go down in the coming years. I’ve now shown that the memory manufacturers have come through on delivering lower power consumption…but what happens in the future?

DDR4 memory components are just now starting to make their way into the market, although server systems that support DDR4 are not yet widely available. What effect will DDR4 have on the memory subsystems in which it is implemented? You’ll have to stay tuned for future blog entries to find out.


