A new technical paper titled “Optimization and Benchmarking of Monolithically Stackable Gain Cell Memory for Last-Level Cache” was published by researchers at Georgia Institute of Technology and University of Virginia.
Abstract:
“The Last Level Cache (LLC) is the processor’s critical bridge between on-chip and off-chip memory levels – optimized for high density, high bandwidth, and low operation energy. To date, high-density (HD) SRAM has been the conventional device of choice; however, with the slowing of transistor scaling, as reflected in the industry’s almost identical HD SRAM cell size from 5 nm to 3 nm, alternative solutions such as 3D stacking with advanced packaging like hybrid bonding are pursued (as demonstrated in AMD’s V-cache). Escalating data demands necessitate ultra-large on-chip caches to decrease costly off-chip memory movement, pushing the exploration of device technology toward monolithic 3D (M3D) integration where transistors can be stacked in the back-end-of-line (BEOL) at the interconnect level. M3D integration requires fabrication techniques compatible with a low thermal budget (<400 °C). Among promising BEOL device candidates are amorphous oxide semiconductor (AOS) transistors, particularly desirable for their ultra-low leakage (<fA/µm), enabling persistent data retention (>seconds) when used in a gain-cell configuration. This paper examines device, circuit, and system-level tradeoffs when optimizing BEOL-compatible AOS-based 2-transistor gain cell (2T-GC) for LLC. A cache early-exploration tool, NS-Cache, is developed to model caches in advanced 7 and 3 nm nodes and is integrated with the Gem5 simulator to systematically benchmark the impact of the newfound density/performance when compared to HD-SRAM, MRAM, and 1T1C eDRAM alternatives for LLC.”
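To see why sub-fA/µm leakage translates into multi-second retention, a back-of-envelope estimate helps: in a 2T gain cell, the stored charge sits on the write transistor's storage node, and retention ends once leakage has drained enough charge to drop the node below the read margin. The sketch below is purely illustrative; the capacitance, voltage droop, and leakage values are assumptions chosen for plausibility, not figures from the paper.

```python
# Back-of-envelope retention estimate for a 2T gain cell storage node.
# All numeric values are illustrative assumptions, not from the paper.

def retention_time(c_storage_f, delta_v, i_leak_a):
    """Seconds for a constant leakage current (A) to discharge a
    storage capacitance (F) by delta_v volts: t = C * dV / I."""
    return c_storage_f * delta_v / i_leak_a

# Assumed: ~1 fF storage node, 0.3 V tolerable droop, and 0.1 fA
# leakage (consistent with a sub-fA/um AOS write transistor).
t = retention_time(1e-15, 0.3, 1e-16)
print(f"retention ~ {t:.1f} s")
```

With these assumed numbers the node holds its state for roughly three seconds, which is consistent in order of magnitude with the ">seconds" retention the abstract attributes to AOS-based gain cells.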
Find the technical paper here. March 2025.
Waqar, Faaiq, Jungyoun Kwak, Junmo Lee, Minji Shon, Mohammadhosein Gholamrezaei, Kevin Skadron, and Shimeng Yu. “Optimization and Benchmarking of Monolithically Stackable Gain Cell Memory for Last-Level Cache.” arXiv preprint arXiv:2503.06304 (2025).