Review a proposal for a hybrid DIMM architecture that uses a hardware-managed DRAM cache in front of enhanced flash.
Authors:
Fred Ware,(1) Javier Bueno,(2) Liji Gopalakrishnan,(1) Brent Haukness,(1) Chris Haywood,(1) Toni Juan,(2) Eric Linstadt,(1) Sally A. McKee,(3) Steven C. Woo,(1) Kenneth L. Wright,(1) Craig Hampel,(1) Gary Bronner.(1)
(1) Rambus Inc., Sunnyvale, California
(2) Metempsy, Barcelona, Spain
(3) Clemson University, South Carolina
Rapidly evolving workloads and exploding data volumes place great pressure on data-center compute, IO, and memory performance, and especially on memory capacity. Increasing memory capacity requires a commensurate reduction in memory cost per bit. DRAM technology scaling has been steadily delivering affordable capacity increases, but DRAM scaling is rapidly reaching physical limits. Other technologies such as flash, enhanced flash, phase-change memory, and spin-transfer torque magnetic RAM (STT-MRAM) hold promise for creating high-capacity memories at lower cost per bit. However, these technologies have attributes that require careful management.
We propose a hybrid DIMM architecture that uses a hardware-managed DRAM cache in front of enhanced flash, which has much lower read latencies than conventional flash. We explore the design space of such storage-class memory (SCM) devices in the context of different technology parameters, evaluating performance and endurance for data-center workloads. Our hybrid memory architecture is commercially realizable and can use standard DIMM form factors, giving it a low barrier to market entry. We find that for workloads like media streaming, enhanced flash can be combined with DRAM to enable 88% of the performance of a DRAM-only system of the same capacity at 23% of the cost, even when factoring in replacement costs due to wear-out. The bottom line is that cost per performance is a factor of 3.8 better than DRAM.
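The headline claim can be sanity-checked with simple arithmetic: 88% of the performance at 23% of the cost gives 0.88 / 0.23 ≈ 3.8, the quoted cost-per-performance advantage. The sketch below also models the hybrid DIMM's effective read latency with a standard average-memory-access-time (AMAT) formula; the latency values and hit rate are illustrative assumptions for this sketch, not figures from the paper.

```python
# Back-of-envelope check of the abstract's numbers, plus a simple
# AMAT model of a DRAM cache in front of enhanced flash.
# DRAM_LATENCY_NS, EFLASH_LATENCY_NS, and HIT_RATE are assumed values.

DRAM_LATENCY_NS = 50      # assumed DRAM read latency
EFLASH_LATENCY_NS = 1000  # assumed enhanced-flash read latency
HIT_RATE = 0.95           # assumed DRAM-cache hit rate for a streaming workload

def effective_read_latency(hit_rate: float, hit_ns: float, miss_ns: float) -> float:
    """Average read latency for a cache with the given hit rate (AMAT)."""
    return hit_rate * hit_ns + (1.0 - hit_rate) * miss_ns

amat = effective_read_latency(HIT_RATE, DRAM_LATENCY_NS, EFLASH_LATENCY_NS)
print(f"effective read latency: {amat:.1f} ns")

# Cost/performance claim from the abstract: 88% of DRAM-only
# performance at 23% of the cost.
perf_fraction = 0.88
cost_fraction = 0.23
advantage = perf_fraction / cost_fraction
print(f"cost/performance advantage: {advantage:.1f}x")  # prints 3.8x
```

With a high hit rate, most accesses are served at DRAM speed, which is why the DRAM cache can hide most of the flash latency for streaming-style workloads.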