Which Memory Type Should You Use?

Why bandwidth is so important in choosing memory.


I continue to get besieged by statements in which memory “latency” and “bandwidth” get misused. As I mentioned in my last blog, latency is defined as how long the CPU needs to wait before the first data is available, while bandwidth is how fast additional data can be “streamed” after the first data point has arrived. Bandwidth becomes a bigger factor in performance when data is stored in “chunks” rather than being randomly distributed.
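A rough way to combine the two (my own back-of-the-envelope simplification, not a formal model): effective bandwidth ≈ bytes transferred ÷ (latency + bytes transferred ÷ peak bandwidth). The more bytes each access moves, the more that fixed latency cost gets diluted.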

As an example, access patterns for program code tend to be random, since the code has to respond to specific input conditions (does anyone remember "goto" and "if-then-else" statements?). Large files, where perhaps megabytes or more of sequential data need to be stored, represent the other end of the spectrum.

Modern computer systems adhere to the Advanced Format 4K sector size, so large files are broken up into easier-to-manage chunks of 4,096 bytes. The concept of a sector size is a holdover from the original hard disk drives, and even solid state drives (SSDs) keep the convention to maintain compatibility with the computer file system.
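As a quick illustration of the arithmetic (nothing specific to any particular file system): a 1 Mbyte file occupies 1,048,576 ÷ 4,096 = 256 of these 4 Kbyte sectors.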

With these basic concepts in hand, we can now compare expected bandwidth with the bandwidth specified by the manufacturer for common and up-and-coming memory solutions. For each of these examples, I assume that the first access is to a random storage location and, therefore, that the latency must be accounted for. Note that once latency is accounted for, the calculated bandwidth often pales in comparison to the bandwidth quoted in a product brief.

[Chart: calculated bandwidth vs. manufacturer-specified bandwidth for the memory and storage devices listed below]
Notes and sources: Rambus Analysis.
1. DDR4 @ 2400 Mbps
2. Intel SSD DC P3700 Series
3. Intel SSD DC S3710 Series
4. Cheetah 15K.5 SAS Hard Drive

The chart is quite revealing: it highlights how critical it is to understand your application's use cases before deciding what type of memory to use.
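For readers who want to run this kind of comparison themselves, below is a minimal Python sketch of the calculation behind such a chart. The latency and peak-bandwidth numbers are round, illustrative assumptions of my own for each class of device, not the exact figures used above.

```python
# Effective bandwidth when every transfer starts with a random access:
#   transfer_time       = latency + record_size / peak_bandwidth
#   effective_bandwidth = record_size / transfer_time
# All device numbers below are illustrative, round-figure assumptions,
# not the exact values behind the chart above.

KB = 1024
MB = 1024 * KB
GB = 1024 * MB

devices = {
    # name:              (latency in seconds, peak bandwidth in bytes/second)
    "DDR4-class DRAM":   (50e-9, 19 * GB),   # tens of ns; ~19 GB/s for a 64-bit channel at 2400 Mbps
    "NVMe (PCIe) SSD":   (20e-6, 2.8 * GB),  # NAND latency in the tens of microseconds
    "SATA SSD":          (90e-6, 500 * MB),
    "15K RPM SAS HDD":   (4e-3, 125 * MB),   # seek plus rotational latency dominates
}

def effective_bandwidth(latency_s, peak_bw_bps, record_bytes):
    """Bandwidth actually achieved for a single random access of record_bytes."""
    transfer_time = latency_s + record_bytes / peak_bw_bps
    return record_bytes / transfer_time

for name, (latency, peak) in devices.items():
    print(name)
    for record in (1 * KB, 8 * KB, 1 * MB):
        bw = effective_bandwidth(latency, peak, record)
        print(f"  {record // KB:5d} KB record: {bw / MB:9.1f} MB/s (peak {peak / MB:.0f} MB/s)")
```

The same two-parameter model (a fixed latency plus a peak streaming rate) is all it takes to reproduce the general shape of the chart.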

Assume your server is running a database application with relatively small records of 1 Kbyte. We can also assume the records are rarely accessed sequentially, which means latency dominates performance. SSDs provide a significant improvement over hard drives, but their performance is still roughly three orders of magnitude below that of any DRAM-based memory system.
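To put very rough, illustrative numbers on that: at DRAM speeds, fetching a 1 Kbyte record takes well under a microsecond even after accounting for DRAM latency, while a NAND-based SSD spends tens of microseconds on latency alone before the first byte arrives; the 1 Kbyte payload itself barely registers in the total.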

Over the years, SSDs have continued to move closer to the CPU, reducing their latency along the way. SSDs adhering to the NVMe standard aim to lower latencies further, but the standard does nothing to affect the NAND devices inside the SSD, which inherently have latencies in the tens to hundreds of microseconds. Even the greater-than-50% latency reduction touted for NVMe doesn't let you jump the memory gap.

For a database where the record size grows larger, say to 8 Kbytes, the calculated bandwidth improves markedly, because the system can now take better advantage of the maximum bandwidth and spread the "cost" of the latency over more bytes.
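As a purely illustrative calculation (using my own assumed figures of roughly 90 microseconds of latency and a 500 MB/s peak, in the SATA-SSD class): a 1 Kbyte record works out to about 11 MB/s of effective bandwidth, while an 8 Kbyte record works out to roughly 77 MB/s; the same latency is simply amortized over eight times as many bytes.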

By being very strategic in the placement of the data (e.g., for record sizes in the megabyte range), all of these systems can stream data continuously, and the achieved bandwidth begins to approach the specified maximum.
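Continuing with the same illustrative figures, a 1 Mbyte transfer spends roughly 2 milliseconds streaming against 90 microseconds of latency, so the effective bandwidth lands within about five percent of the specified maximum.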

The conclusion is clear: If you need a memory that maximizes performance for random operations, stick with DRAM-based memory systems. If you need memory for large records, consider what your budget allows and how much memory capacity and bandwidth you really need. Then you can make an informed decision.



1 comment

Michael Sporer says:

How do you define your latency numbers for DRAM? True random access implies page miss. Your numbers seem to assume a page hit.
