The rising number of cores per chip is responsible for significant increases in memory shipments.
I was recently reading several analyst reports that came out after the end of last quarter, and one caught my eye: “Gartner Says Worldwide Server Shipments Grew 1.4%…” It made me wonder: how is it possible that server shipments grow at only modest rates, while the DRAM used in those servers is growing at significantly higher rates?
Putting my search engine to use, I found a series of analyst press releases and corresponding reports that provided some historical perspective. Over the last several years, server DRAM shipments (in bits) have grown at rates of 60% (2011), 52% (2012), and 45% (2013) (IHS DRAM Market Tracker). At the same time, server shipments have grown at rates of 8%, 1%, and 2% respectively (see figure below).
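To put the disparity in perspective, a quick back-of-the-envelope calculation (a sketch using the analyst figures cited above, not anything from the reports themselves) shows how far apart the two compound over three years:

```python
# Annual growth rates cited above for 2011, 2012, and 2013
dram_bit_growth = [0.60, 0.52, 0.45]   # server DRAM shipments, in bits
server_unit_growth = [0.08, 0.01, 0.02]  # server unit shipments

def cumulative(rates):
    """Compound a list of annual growth rates into a total multiplier."""
    total = 1.0
    for r in rates:
        total *= 1.0 + r
    return total

print(f"DRAM bits over 2011-2013:    {cumulative(dram_bit_growth):.2f}x")
print(f"Server units over 2011-2013: {cumulative(server_unit_growth):.2f}x")
```

Compounded, server DRAM bits grew roughly 3.5x over the three years, while server units grew only about 11% — so the bits shipped per server more than tripled.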
But what is causing this large disparity in growth?
At the application level, we all have seen reports about the rise of both big data analytics and server virtualization. Both applications correlate to higher utilization and the need for higher system performance. Next to the CPU, nothing keeps a system performing better than a large amount of fast DRAM allowing the CPU to continuously process massive amounts of data.
It would appear that these applications demand additional DRAM, driving the dramatic growth rates. It’s also worth digging into the server itself to see what we find.
Server CPU vendors such as Intel and AMD (x86), IBM (Power), and Oracle (Sparc) continue rolling out higher-performing versions of their processors. In 2010, Intel introduced the Nehalem CPU micro-architecture, featuring up to eight cores and three memory channels per CPU. Simply stated, the more cores you have, the more memory you need. The memory channels provide one way of adding more memory capacity, as well as more memory bandwidth (a topic for another day…).
Between 2010 and 2014, Intel released three additional generations of server processors, with the most recent (Ivy Bridge) featuring up to 15 cores and four memory channels. The table below shows the rapid memory capacity expansion that has occurred between the Nehalem and Ivy Bridge generations of processors. On a per-CPU basis, the maximum memory has increased from 18GB to 96GB, a 52% CAGR. Interestingly, this maximum-memory-per-CPU growth rate roughly matches the growth rate in total server memory bits shipped (shown in the first figure above).
*Note: Assumes two ranks and three sockets per memory channel.
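The 52% figure falls out directly from the table's endpoints. As a quick sanity check (a sketch assuming the four-year span from the 2010 Nehalem generation to the 2014 Ivy Bridge generation stated above):

```python
# Max memory per CPU: 18 GB (Nehalem, 2010) -> 96 GB (Ivy Bridge, 2014)
start_gb, end_gb, years = 18, 96, 4

# Compound annual growth rate over the four-year span
cagr = (end_gb / start_gb) ** (1 / years) - 1
print(f"Max memory per CPU CAGR: {cagr:.0%}")
```

That works out to roughly 52% per year, matching the figure quoted above.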
The net result of this increase is that each server today can address roughly five times the memory that was possible just a few years ago, which is why server memory bits can grow disproportionately to the server unit growth rate. Furthermore, it is easy to infer that the memory subsystem's power envelope has been increasing, and that memory power management solutions are needed to help keep it under control.
Server shipments may grow at low rates, but applications will continue to demand more performance and memory capacity. The next time you see an article, report, or press release referring to low server growth rates, remember that at least where the memory is concerned, there is far more to the story than meets the eye. Will the number of CPU cores continue to rise? Will the CPUs contain more than four memory channels?
This is certainly an interesting space to watch, and we are excited to be a participant in this growing market.