HBM Roadmap: Next-Gen High-Bandwidth Memory Architectures (KAIST’s TERALAB)


A new technical paper titled "HBM Roadmap Ver 1.7 Workshop" was published by researchers at KAIST’s TERALAB. The 371-page paper provides an overview of next-generation HBM architectures based on current technology trends, as well as many technology insights. Find the technical paper here or here.  Published June 2025. Advising Professor : Prof. Joungho Kim. Fig. 1: Thermal Manag... » read more

The Best DRAMs For Artificial Intelligence


Artificial intelligence (AI) involves intense computing and enormous amounts of data. The computing may be performed by CPUs, GPUs, or dedicated accelerators, and while the data travels through DRAM on its way to the processor, the best DRAM type depends on the kind of system performing the training or inference. The memory challenge facing engineering teams today is how to keep...
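A quick back-of-the-envelope view of why the system type matters: in token-by-token LLM inference, each generated token streams most of the model's weights out of DRAM, so the achievable token rate is roughly bounded by memory bandwidth divided by weight size, whereas large-scale training is dominated by aggregate capacity and bandwidth across many devices. The sketch below illustrates only the inference-side bound; both the model size and the bandwidth figure are assumptions chosen for illustration.

```python
# Illustrative sketch: why inference-oriented systems are so bandwidth-hungry.
# If each generated token must stream the full weight set from DRAM, then
#   tokens/s  <=  DRAM bandwidth / bytes of weights.
# Both figures below are assumptions for illustration, not measurements.

MODEL_PARAMS = 7e9        # assumed 7B-parameter model
BYTES_PER_PARAM = 2       # FP16/BF16 weights
DRAM_BW_GB_S = 100        # assumed single-device DRAM bandwidth, GB/s

weight_bytes = MODEL_PARAMS * BYTES_PER_PARAM
bound = DRAM_BW_GB_S * 1e9 / weight_bytes
print(f"Memory-bound ceiling: ~{bound:.1f} tokens/s per device")
# Doubling bandwidth (or halving weight bytes via quantization) doubles
# this ceiling, which is why the "best" DRAM tracks the workload.
```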

HBM4 Elevates AI Training Performance To New Heights


Generative and agentic AI are driving an extremely rapid evolution of computing technology. With leading-edge LLMs now in excess of a trillion parameters, training takes an enormous amount of computing capacity, and state-of-the-art training clusters can employ more than 100,000 GPUs. High Bandwidth Memory (HBM) provides the vast memory bandwidth and capacity needed for these demanding AI train...
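To put that capacity demand in rough numbers: with mixed-precision training and an Adam-style optimizer, a commonly cited rule of thumb is on the order of 16 bytes of state per parameter (FP16 weights and gradients plus FP32 master weights and two optimizer moments), before counting activations. The sketch below only works through that rule of thumb; the per-stack HBM capacity is an assumed figure, not a specification.

```python
# Rough sketch: memory footprint of training a trillion-parameter model.
# Assumes a common mixed-precision + Adam accounting of ~16 bytes/parameter
# (2 B FP16 weights + 2 B FP16 grads + 4 B FP32 master weights + 4 B + 4 B
#  Adam moments); activations and parallelism overheads are ignored.

PARAMS = 1e12                 # 1 trillion parameters
BYTES_PER_PARAM_STATE = 16    # rule-of-thumb training state per parameter
HBM_STACK_CAPACITY_GB = 24    # assumed capacity of one HBM stack

total_gb = PARAMS * BYTES_PER_PARAM_STATE / 1e9
stacks_needed = total_gb / HBM_STACK_CAPACITY_GB
print(f"~{total_gb/1e3:.0f} TB of training state "
      f"=> on the order of {stacks_needed:,.0f} HBM stacks just for state")
```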

Speeding Down Memory Lane With Custom HBM


With the goal of increasing system performance per watt, the semiconductor industry is always seeking innovative solutions that go beyond the usual approaches of increasing memory capacity and data rates. Over the last decade, the High Bandwidth Memory (HBM) protocol has proven to be a popular choice for data center and high-performance computing (HPC) applications. Even more benefit can be rea...

AI’s Rapid Growth: The Crucial Role Of High Bandwidth Memory


System efficiency is dictated by the performance of its critical components. For AI hardware systems, the memory subsystem is the single most important of those components. In this blog post, we will provide an overview of the AI model landscape and the impact of HBM memory subsystems on effective system performance. AI models have grown from a few million parameters in the early '90s to today's...
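One compact way to reason about the memory subsystem's impact on effective performance is the roofline model: attainable throughput is the lesser of the accelerator's peak compute and the product of a workload's arithmetic intensity and the available memory bandwidth. The peak figures in the sketch below are assumptions chosen only to show the shape of the trade-off, not numbers from the blog.

```python
# Minimal roofline sketch: effective performance is bounded by whichever of
# compute and memory feeds the workload more slowly. Peak figures are assumed.

PEAK_TFLOPS = 1000          # assumed accelerator peak (dense FP16), TFLOP/s
HBM_BW_TBPS = 3.3           # assumed aggregate HBM bandwidth, TB/s

def attainable_tflops(arithmetic_intensity_flop_per_byte: float) -> float:
    """Roofline bound: min(peak compute, intensity * memory bandwidth)."""
    memory_bound = arithmetic_intensity_flop_per_byte * HBM_BW_TBPS  # TFLOP/s
    return min(PEAK_TFLOPS, memory_bound)

for intensity in (1, 10, 100, 1000):   # FLOPs per byte moved from HBM
    print(f"intensity {intensity:4d} FLOP/B -> "
          f"~{attainable_tflops(intensity):7.1f} TFLOP/s attainable")
```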

Low-Cost TSV Repair Architecture Specialized for Highly Clustered TSV Faults Within HBM


A new technical paper titled "Low Cost TSV Repair Architecture Using Switch-Based Matrix for Highly Clustered Faults" was published by researchers at Yonsei University. Abstract "Through-silicon via (TSV), responsible for inter-layer communication in high-bandwidth memory (HBM), plays a critical role in HBM operation. Therefore, faults occur in TSVs can critically impact the entire chips. H... » read more

Choosing The Right Memory Solution For AI Accelerators


To meet the increasing demands of AI workloads, memory solutions must deliver ever-higher bandwidth, capacity, and efficiency. From the training of massive large language models (LLMs) to efficient inference on endpoint devices, choosing the right memory technology is critical for chip designers. This blog explores three leading memory solutions—HBM, LPDDR, and GDDR—and t...
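A first-order way to compare the three candidates is peak bandwidth per device, which is simply interface width times per-pin data rate. The width and rate pairs in the sketch below are representative ballpark assumptions rather than specific product specifications, and the comparison deliberately ignores capacity, power, and cost, which matter just as much in practice.

```python
# First-order comparison sketch: peak bandwidth = interface width x pin rate.
# Width/rate pairs are representative ballpark assumptions, not product specs.

candidates = {
    # name            (bus width in bits, per-pin data rate in Gb/s)
    "HBM3 stack":     (1024, 6.4),
    "GDDR6 device":   (32, 16.0),
    "LPDDR5X (x32)":  (32, 8.5),
}

for name, (width_bits, rate_gbps) in candidates.items():
    bw_gb_s = width_bits * rate_gbps / 8     # GB/s per device or stack
    print(f"{name:14s}: ~{bw_gb_s:6.1f} GB/s peak")
```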

Redefining XPU Memory For AI Data Centers Through Custom HBM4: Part 3


This is the third and final part of a series from Alphawave Semi on HBM4, and it examines custom HBM implementations. Click here for part 1, which gives an overview of the HBM standard, and here for part 2, on HBM implementation challenges. This follows on from our second blog, where we discussed the substantial improvements high bandwidth memory (HBM) provides over traditional memory tec...

Redefining XPU Memory For AI Data Centers Through Custom HBM4: Part 2


This is the second part of a three-part series from Alphawave Semi on HBM4 and gives insights into HBM implementation challenges. Click here for part 1, which provides an overview of HBM; in part 3, we will introduce details of a custom HBM implementation. Implementing a 2.5D System-in-Package (SiP) with High Bandwidth Memory (HBM) is a complex process that spans architecture definition, designi...

Redefining XPU Memory For AI Data Centers Through Custom HBM4: Part 1


This is the first of a three-part series on HBM4 and gives an overview of the HBM standard. Part 2 will provide insights into HBM implementation challenges, and part 3 will introduce the concept of a custom HBM implementation.

Relentless growth in data consumption

Recent advances in deep learning have had a transformative effect on artificial intelligence (AI) and the ever-increasing volume of ...
