Rethinking Main Memory

New approaches to reducing cost and power.


With newer, bigger programs and more apps multitasking simultaneously, the answer to making any system run faster, from handheld to supercomputer, has always been to add more DRAM, and more, and more.

From data centers to wearables, that model no longer works. By offloading the storage of programs to less expensive solid-state drives (SSDs) and using only a small amount of expensive DRAM to cache the active processes, the amount of DRAM can be cut dramatically, along with power consumption. That, in turn, can spark the development of a new class of lower-cost, lower-power products, cutting the DRAM needed in a system by as much as 10X while roughly halving the energy consumed today.

How? If you look at what’s actually being used on your computer or portable device, only a small percentage of the application code loaded into main memory is active at any given time. Take a look for yourself. For example, open the process list in the Windows Task Manager. You will see that most of the applications are idle, yet in aggregate they occupy a significant amount of DRAM. In fact, typically 90% to 99% of a system’s DRAM is idle.

The solution is to keep frequently used programs in a small amount of DRAM and store the rest on less-expensive, energy-efficient flash memory. This can reduce the size, cost, and power requirements of anything from a smart car to mobile phones, tablets, smart watches, computers, or data centers.
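In cache terms, this amounts to treating the small DRAM as a final, LRU-managed cache level in front of a much larger flash backing store. The toy sketch below illustrates the idea; the page size, capacities, and workload are illustrative assumptions, not Marvell's actual design:

```python
from collections import OrderedDict

PAGE = 4096  # illustrative page size in bytes

class FinalLevelCache:
    """Toy model: a small DRAM pool acting as the last cache level,
    backed by a large flash store holding the cold pages."""

    def __init__(self, dram_pages):
        self.dram_pages = dram_pages
        self.dram = OrderedDict()   # page_number -> data, kept in LRU order
        self.flash = {}             # backing store for evicted pages
        self.hits = self.misses = 0

    def access(self, page):
        if page in self.dram:                 # hit: served at DRAM speed
            self.dram.move_to_end(page)
            self.hits += 1
            return self.dram[page]
        self.misses += 1                      # miss: fetch page from flash
        data = self.flash.pop(page, bytes(PAGE))
        self.dram[page] = data
        if len(self.dram) > self.dram_pages:  # evict the coldest page
            cold, cold_data = self.dram.popitem(last=False)
            self.flash[cold] = cold_data
        return data

# A skewed workload: a small "active set" of pages dominates accesses,
# so most references hit the tiny DRAM cache.
flc = FinalLevelCache(dram_pages=4)
for p in [0, 1, 2, 0, 1, 0, 5, 6, 0, 1]:
    flc.access(p)
print(flc.hits, flc.misses)  # → 5 5
```

Because real workloads are far more skewed than this ten-access toy trace, the hit rate in practice would be much higher, which is what makes a small DRAM pool viable.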

And it works! Our engineering mobile phone platform, shown in this video, uses Marvell’s Final-Level Cache (FLC) to play back video smoothly, without any lag or stutter. App switching is also zippy. Such performance is common on mid- or even high-end phones with large DRAM, yet this design used only 768MB of DRAM. Time-sensitive streams are mapped to a 512MB non-cacheable area, while the other 256MB of DRAM is used by FLC to emulate 1GB of memory. The result is performance similar to 1.5GB of main memory in a traditional DRAM-based design. FLC can also emulate larger quantities of main memory. A second proof of concept demonstrates smooth video performance on the same mobile phone platform using FLC.
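The memory budget in that demo is simple arithmetic, reproduced here as a sanity check using the figures quoted above:

```python
# Figures from the engineering phone platform described above (in MB).
non_cacheable = 512   # DRAM mapped directly for time-sensitive streams
flc_cache = 256       # DRAM used by FLC as the final-level cache
emulated = 1024       # main memory the FLC cache presents to the system

physical = non_cacheable + flc_cache   # DRAM actually on the board
apparent = non_cacheable + emulated    # memory the system behaves as having

print(physical, apparent)  # → 768 1536, i.e. 768MB acting like 1.5GB
```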

Final-Level Cache Main Memory

Better performance can be achieved by reporting to the operating system a main memory larger than what is physically implemented. The operating system is then less likely to kill background apps, which is what makes the fast app switching possible. The hardware does the heavy lifting in the background, freeing the operating system from managing the migration.
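The reason a small cache can stand in for a large main memory comes down to average access time: with a high enough hit rate, the slow flash path is rarely taken. A back-of-the-envelope model (the latencies and hit rate here are assumptions for illustration, not measured values):

```python
# Average-access-time model: DRAM vs. flash latency weighted by hit rate.
# All numbers below are assumptions chosen for illustration.
t_dram_ns = 100     # assumed DRAM access latency (ns)
t_flash_us = 100    # assumed flash page-read latency (us)
hit_rate = 0.99     # assumed: the active working set fits in the DRAM cache

avg_ns = hit_rate * t_dram_ns + (1 - hit_rate) * t_flash_us * 1000
print(round(avg_ns))  # → 1099 ns average, despite flash being ~1000x slower
```

At a 99% hit rate the flash penalty adds only about a microsecond on average, which is why the idle 90%-99% of application memory can live on flash without the user noticing.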

Smaller DRAM means less power and lower cost, creating endless new possibilities. It will free up innovation in digital design and even accelerate categories like the Internet of Things (IoT). By offloading storage to cheaper SSDs, the world can also dramatically reduce overall power consumption and energy needs. For example, a 50% reduction in all computer energy use equates to a 2% decrease in rural power needs.
Battery life of laptops will be increased, and IoT devices could last weeks between recharges. It changes the future of supercomputers as well: with FLC, an entire server room could be replaced by a small portable device. Supercomputing and large computational workloads will become faster and more efficient, with future supercomputers containing tens of terabytes of memory instead of hundreds of gigabytes. Data center space and cooling costs will also be dramatically reduced.



  • realjjj

    How about gaming in mobile? Must be the most difficult scenario for this solution.
    Marketing it this way might not be very appealing for consumers. In $50 phones it’s fine, but in the mid range, advertising it as “virtual DRAM” that allows you to increase the amount of DRAM, not reduce it, might be a much better selling point. Paired with less aggressive offloading to make sure there is virtually no perf penalty, and/or faster-than-average NAND while still saving some power, would increase its value further. Users do like a lot of DRAM and it is a selling point, so it would be easier for phone makers to have the same amount of DRAM as the competitor but be able to expand it while also saving some power. 2GB of RAM and X GB of virtual RAM in a $100 phone sounds much better than just 2GB of RAM.
    Xiaomi even has a feature that allows users to “lock” apps and always keep them in the DRAM. It’s convenient for users since they aren’t aware of the downsides but offloading most of those to NAND should be a plus.

    You don’t mention wearables but anything that saves power is a huge plus there and some perf loss is easier to accept.

  • Kev

    The problem with SSD/Flash memory is that it wears out. So, if you want it to last, you use the DRAM for cache and only write back when you have to. Flash is also slower than DRAM, so I’m not buying this as a rethink of anything, it’s just plain old virtual memory swapping (with more levels).

  • John

    Look for new technology and maybe no DRAM required at all: