PF-DRAM: A Precharge-Free DRAM Structure



Authors: Nezam Rohbani † (IPM); Sina Darabi § (Sharif); Hamid Sarbazi-Azad †§ (IPM / Sharif)

† School of Computer Science, Institute for Research in Fundamental Sciences (IPM), Tehran, Iran
§ Department of Computer Engineering, Sharif University of Technology, Tehran, Iran


“Although DRAM capacity and bandwidth have increased sharply with advances in technology and standards, DRAM latency and energy per access have remained almost constant in recent generations. The main portion of DRAM power/energy is dissipated by Read, Write, and Refresh operations, all initiated by a Precharge phase. The Precharge phase not only imposes a large energy cost, but also increases the delay of closing a row in a memory block to open another one. As the row-hit rate drops in recent workloads, especially in multi-core systems, the precharge rate increases, which exacerbates DRAM power dissipation and access latency. This work proposes a novel DRAM structure, called Precharge-Free DRAM (PF-DRAM), that eliminates the Precharge phase of DRAM. PF-DRAM uses the charge left on the bitlines from the previous Activation phase as the starting point for the next Activation. The differences between PF-DRAM and the conventional DRAM structure are limited to the precharge and equalizer circuitry and simple modifications to the sense amplifier, all confined to the subarray level. PF-DRAM is compatible with mainstream JEDEC memory standards such as DDRx and HBM, with minimal modifications to the memory controller. Furthermore, almost all previously proposed DRAM power/energy reduction techniques remain applicable to PF-DRAM for further improvement. Our experimental results on an 8 GB memory system running SPEC CPU2017 and PARSEC 2.1 workloads show an average memory power reduction of 35.3% (up to 54.2%) for the system using PF-DRAM with respect to the system using conventional DRAM. Moreover, overall performance improves by 8.6% on average (up to 24.3%). According to our analysis, all these improvements come at less than 9% area overhead.”
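To see where the latency savings come from, consider a back-of-envelope model of a row-miss access: conventional DRAM must close the old row (Precharge, tRP), open the new one (Activate, tRCD), and then read (tCL), whereas PF-DRAM skips the Precharge by starting the next Activation from the charge already on the bitlines. The sketch below uses illustrative DDR4-style timing values, not numbers from the paper:

```python
# Illustrative DDR4-style timings in nanoseconds (assumed, not from the paper).
T_RP = 13.75   # Precharge (row close) -- the phase PF-DRAM eliminates
T_RCD = 13.75  # Activate (row open)
T_CL = 13.75   # Column access (read)

def row_miss_latency(precharge_free: bool) -> float:
    """Latency of an access that misses the currently open row.

    Conventional DRAM pays tRP + tRCD + tCL; PF-DRAM pays only
    tRCD + tCL because no Precharge phase is needed before Activation.
    """
    phases = [T_RCD, T_CL]
    if not precharge_free:
        phases.insert(0, T_RP)  # conventional DRAM closes the row first
    return sum(phases)

conv = row_miss_latency(precharge_free=False)
pf = row_miss_latency(precharge_free=True)
print(f"row-miss latency: conventional {conv:.2f} ns, PF-DRAM {pf:.2f} ns")
print(f"latency saved on a row miss: {100 * (conv - pf) / conv:.1f}%")
```

With these (assumed) symmetric timings, dropping the Precharge phase removes one of three equal phases, i.e. about a third of the row-miss latency; the end-to-end performance gain reported in the paper (8.6% on average) is smaller because row hits and other system components dilute this per-miss saving.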


Technical paper presented at the 2021 ACM/IEEE 48th Annual International Symposium on Computer Architecture (ISCA).
