Boosting Data Center Memory Performance In The Zettabyte Era With HBM3

AI and ML are stretching current data center memory infrastructures to their limit.


We are living in the Zettabyte era, a term coined by Cisco. Most of the world’s data has been created in just the past few years, and the pace shows no sign of slowing. Data has become not just big, but enormous. In fact, according to the IDC Global Datasphere 2022-2026 Forecast, the amount of data generated over the next five years will be at least double the amount generated over the past ten, meaning the data created by 2026 could reach 221.2 Zettabytes (ZB). With one Zettabyte equaling one trillion gigabytes (GB), that is an awful lot of zeros…

Artificial intelligence (AI) and machine learning (ML) applications are key drivers of this massive data growth. They not only consume raw data; they create still more data as they process it. Their workloads demand enormous memory bandwidth and are stretching current data center memory infrastructures to their limit. One way to solve this problem, of course, is to add more data centers. In 2021, 100 new hyperscale data centers were brought online, bringing the total to 700 worldwide, and more than 300 in the pipeline will push the worldwide total past 1,000 by 2024 (Source: Synergy Research Group, March 2022). While building more facilities adds capacity, it does not solve the problem of processing massive amounts of data efficiently within those data centers.

For AI/ML applications that require tremendous memory bandwidth, vast memory capacity, and extremely fast processing, High Bandwidth Memory (HBM) is an extremely valuable technology solution. HBM3, released by JEDEC in January 2022, is the latest iteration of the High Bandwidth Memory standard. It raises the per-pin data rate to 6.4 Gigabits per second (Gb/s), double that of HBM2, while maintaining the 1024-bit-wide interface of previous generations, extending the track record of the memory architecture originally dubbed “slow and wide.” Since bandwidth is the product of data rate and interface width, 6.4 Gb/s x 1024 bits gives 6,553.6 Gb/s; dividing by 8 bits per byte yields a total bandwidth of 819.2 Gigabytes per second (GB/s).
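
As a quick sanity check on that arithmetic, here is a minimal Python sketch of the bandwidth calculation. The function name is illustrative, and the 3.2 Gb/s HBM2 figure is simply the half-rate comparison point implied above:

    # Peak bandwidth = per-pin data rate x interface width, converted to bytes.
    def hbm_bandwidth_gbytes(data_rate_gbps, bus_width_bits=1024):
        """Peak bandwidth in GB/s for a given per-pin data rate in Gb/s."""
        return data_rate_gbps * bus_width_bits / 8  # 8 bits per byte

    print(hbm_bandwidth_gbytes(6.4))  # HBM3: 6.4 x 1024 / 8 = 819.2 GB/s
    print(hbm_bandwidth_gbytes(3.2))  # HBM2 at half the data rate: 409.6 GB/s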

HBM3 also supports 3D DRAM devices in stacks of up to 12 dies, with provision for a future extension to 16-high stacks, at die densities of up to 32Gb. In real-world terms, a 12-high stack of 32Gb dies translates to a single HBM3 DRAM device with 48GB of capacity. Moreover, HBM3 doubles the number of memory channels to 16, each split into two pseudo channels for 32 pseudo channels in total. More channels let HBM3 support taller DRAM stacks per device and finer access granularity.
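
Capacity follows the same kind of back-of-the-envelope math: die density times stack height, converted from gigabits to gigabytes. A minimal sketch, with an illustrative function name:

    def hbm_stack_capacity_gbytes(die_density_gbit, stack_height):
        """Capacity in GB of an HBM stack: die density (Gb) x dies / 8 bits per byte."""
        return die_density_gbit * stack_height / 8

    print(hbm_stack_capacity_gbytes(32, 12))  # 12-high stack of 32Gb dies -> 48.0 GB
    print(hbm_stack_capacity_gbytes(32, 16))  # future 16-high extension -> 64.0 GB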

Rambus has an extensive track record of delivering industry-leading high-performance memory IP solutions. Providing headroom for design margin and future scalability, the Rambus HBM3 Memory Subsystem operates at 8.4 Gb/s, delivering over a terabyte per second of bandwidth between the processor and an HBM3 memory device. At this data rate, an accelerator with 8 attached HBM3 memory devices could achieve 8.6 TB/s of memory bandwidth. The Rambus HBM3 Memory Subsystem, comprising an integrated PHY and memory controller, is offered with system-level design support, including interposer and package reference designs, to help customers achieve first-time-right implementations of their HBM3 designs.
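
Plugging the Rambus data rate into the same bandwidth formula confirms those figures; the eight-device accelerator is simply the example configuration from the paragraph above:

    per_device_gbytes = 8.4 * 1024 / 8     # 1075.2 GB/s, just over 1 TB/s per device
    system_gbytes = 8 * per_device_gbytes  # 8601.6 GB/s, roughly 8.6 TB/s for 8 devices
    print(per_device_gbytes, system_gbytes)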
