The Memory And Storage Hierarchy

A look at how all the pieces fit together and where there are still gaps.


The memory and storage hierarchy is a useful way of thinking about computer systems and the dizzying array of memory options available to the system designer. Many different parameters characterize a memory solution. Chief among them are latency (how long the CPU must wait before the first data is available) and bandwidth (how fast additional data can be “streamed” after the first data point has arrived), although by my count there are more than 10 different parameters to measure.
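As a back-of-the-envelope illustration, total transfer time can be modeled as latency plus size divided by bandwidth. The sketch below uses assumed, round figures for a DRAM-like and an SSD-like device (illustrative numbers, not measurements):

```python
# Simple model: total transfer time = latency + size / bandwidth.
# All latency and bandwidth figures here are assumed, round numbers.

def transfer_time_us(size_bytes, latency_us, bandwidth_gb_s):
    """Time to move size_bytes from a device, in microseconds."""
    return latency_us + size_bytes / (bandwidth_gb_s * 1e3)  # 1 GB/s = 1e3 bytes/us

for name, latency_us, bw_gb_s in [("DRAM-like", 0.1, 20), ("SSD-like", 100, 2)]:
    small = transfer_time_us(64, latency_us, bw_gb_s)       # one cache line
    large = transfer_time_us(1 << 20, latency_us, bw_gb_s)  # a 1 MB stream
    print(f"{name}: 64 B in {small:.3f} us, 1 MB in {large:.1f} us")
```

For a single cache line the latency term dominates; for a 1 MB stream the bandwidth term does, which is why neither parameter alone tells the whole story.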

No single memory sub-system is “best” in all categories, and therefore, most systems use a variety of memory solutions from different levels in the hierarchy to achieve the desired results. High-end systems, like servers used in datacenters, are most likely to use solutions from every level in the hierarchy.

The simplest way to distinguish between memory and storage is to look at how each interacts with the CPU. With low-latency sub-systems, the CPU will wait for the data before moving on to the next task. With high-latency sub-systems, the CPU will move on to other tasks, and then go back for the data once it is finally available. A handy heuristic is to use 1 microsecond as the dividing line between memory and storage, as shown by the solid line on the memory hierarchy chart below.
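To make the heuristic concrete, here is a minimal sketch of the dividing line. The 1-microsecond threshold comes from the rule of thumb above; the device latencies are assumed, typical round numbers:

```python
# The 1 microsecond rule of thumb: below it the CPU stalls and waits
# ("memory"); above it the CPU switches tasks and comes back ("storage").
# Device latencies below are assumed, typical round numbers.

THRESHOLD_US = 1.0

def classify(latency_us):
    if latency_us < THRESHOLD_US:
        return "memory: the CPU waits for the data"
    return "storage: the CPU moves on, returning when the data is ready"

for device, latency_us in [("on-chip cache", 0.005),
                           ("DRAM DIMM", 0.1),
                           ("NAND SSD", 100.0),
                           ("hard drive", 10_000.0)]:
    print(f"{device} (~{latency_us} us): {classify(latency_us)}")
```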

On-chip memories, like registers and cache, sit at the top of the pyramid and have the lowest latency, on the order of nanoseconds. The bottom of the pyramid includes storage systems, like local hard drives, or even slower systems like storage in the cloud and tape backup. This particular chart is drawn to a relative scale, where moving from one level to another involves a 10X change in latency. Between DIMMs (based on DRAM) and Solid State Drives (based on NAND flash memory) sit not only the 1-microsecond memory barrier, but also two orders of magnitude of separation, leaving much room for new solutions to fill the void.

[Figure: The memory and storage hierarchy, drawn to a relative scale with a 10X change in latency per level]
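Because the chart only gives a relative scale, the hierarchy can be sketched as one level per decade of latency. The figures below are typical textbook numbers (assumptions on my part, not values read off the chart); note the two empty decades between DIMMs and SSDs:

```python
# Rough reconstruction of the pyramid, one level per decade of latency.
# All latencies are typical, assumed values; the article gives only the
# relative 10X-per-level scale.
hierarchy = [
    ("CPU registers",        1),              # ~1 ns
    ("on-chip cache",        10),             # ~10 ns
    ("DRAM DIMM",            100),            # ~100 ns
    ("(empty level)",        1_000),          # ~1 us: the memory/storage line
    ("(empty level)",        10_000),         # second empty decade
    ("NAND flash SSD",       100_000),        # ~100 us
    ("local hard drive",     10_000_000),     # ~10 ms
    ("cloud / tape backup",  1_000_000_000),  # seconds or more
]

for level, latency_ns in hierarchy:
    print(f"{latency_ns:>13,} ns  {level}")
```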

While they do not change relative placement on the chart, improvements to memory systems occur on a regular basis. DIMM subsystem improvements are perhaps the easiest to imagine. DRAM latency has not changed much over the years, but DRAM data rates have continued to rise, with the industry searching for solutions that can deliver both more capacity and more bandwidth.
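The bandwidth side of that trend is simple arithmetic: a standard 64-bit DDR channel moves 8 bytes per transfer, so peak bandwidth scales directly with the transfer rate while random-access latency stays roughly flat. The speed grades below are common nominal examples (chosen for illustration, not taken from the article):

```python
# Peak bandwidth of a 64-bit DDR channel: transfers per second * 8 bytes.
# Speed grades are common nominal examples, used here for illustration.
for name, mega_transfers_s in [("DDR2-800", 800),
                               ("DDR3-1600", 1600),
                               ("DDR4-2400", 2400)]:
    gb_per_s = mega_transfers_s * 8 / 1000  # MT/s * 8 B/transfer -> GB/s
    print(f"{name}: {gb_per_s:.1f} GB/s peak per channel")
```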

While not highlighted on the chart, smaller hierarchy gaps exist as well. New memory technologies like HBM (High Bandwidth Memory) and HMC (Hybrid Memory Cube) can be sandwiched in between DIMMs and on-chip memories, putting gigabytes of data even closer to the CPU than a DIMM and providing a real opportunity to serve as a “level 4 cache.”

Going back 5-10 years, Solid State Drives (SSDs) began to fill the huge gap that originally existed between DIMMs and hard drives. But the performance of the underlying NAND technology has largely leveled off (even as it has made extraordinary progress in price reduction), leaving the door open for additional technologies to fill the remaining gaps.

Just last week, Intel and Micron made a big splash with their 3D XPoint memory announcement. While technical details are scarce, we can piece together enough data points to surmise that 3D XPoint could fill one of the two blank levels currently sitting between SSDs and DIMMs. Even with the addition of 3D XPoint, many gaps will remain in the memory hierarchy, leaving no shortage of research avenues for companies in the memory industry.

In future posts, we will take a look at various memory subsystems within the hierarchy, and provide additional focus on some of the other distinguishing characteristics.


