Physics Limits Interposer Line Lengths


Electrical interposers provide a convenient surface for mounting multiple chips within a single package, but even though interposer lines theoretically can be routed anywhere, insertion losses limit their practical length. Lines on interposers — and on silicon interposers in particular — can be exceedingly narrow. Having a small cross-section makes such lines resistive, degrading signals... » read more
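
As a rough illustration of the physics involved, the sketch below estimates the series resistance and distributed-RC delay of a narrow interposer line in Python. The line width, thickness, copper resistivity, and capacitance per length are illustrative assumptions, not figures from the article.

```python
# Rough estimate of why long, narrow interposer lines degrade signals.
# All dimensions and per-length values are illustrative assumptions.

RHO_CU = 1.7e-8        # copper resistivity, ohm*m (bulk value; thin lines are worse)
WIDTH = 2e-6           # assumed line width: 2 um
THICKNESS = 2e-6       # assumed line thickness: 2 um
CAP_PER_M = 2e-10      # assumed line capacitance: ~0.2 pF per mm

def rc_delay(length_mm: float) -> float:
    """Approximate distributed-RC delay (seconds) for a line of the given length."""
    length_m = length_mm * 1e-3
    r_total = RHO_CU * length_m / (WIDTH * THICKNESS)  # total series resistance, ohms
    c_total = CAP_PER_M * length_m                     # total capacitance, farads
    return 0.38 * r_total * c_total                    # Elmore-style distributed-RC estimate

for mm in (1, 5, 10):
    r_ohm = RHO_CU * (mm * 1e-3) / (WIDTH * THICKNESS)
    print(f"{mm:2d} mm line: R ~ {r_ohm:5.1f} ohm, RC delay ~ {rc_delay(mm) * 1e12:6.1f} ps")
```

Because both the resistance and the capacitance scale with length, the RC delay grows roughly with the square of line length, which is one reason long interposer routes quickly become impractical.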

HBM Roadmap: Next-Gen High-Bandwidth Memory Architectures (KAIST’s TERALAB)


A new technical paper titled "HBM Roadmap Ver 1.7 Workshop" was published by researchers at KAIST’s TERALAB. The 371-page paper provides an overview of next-generation HBM architectures based on current technology trends, as well as many technology insights. Find the technical paper here or here.  Published June 2025. Advising Professor : Prof. Joungho Kim. Fig. 1: Thermal Manag... » read more

The Best DRAMs For Artificial Intelligence


Artificial intelligence (AI) involves intense computing and tons of data. The computing may be performed by CPUs, GPUs, or dedicated accelerators, and the data travels through DRAM on its way to the processor, but the best DRAM type for this purpose depends on the type of system that is performing the training or inference. The memory challenge facing engineering teams today is how to keep... » read more
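
As a rough point of comparison, the snippet below computes peak theoretical bandwidth (pin data rate times interface width) for a few representative DRAM configurations. The specific data rates and bus widths are illustrative examples, not recommendations from the article.

```python
# Rough peak-bandwidth comparison across common DRAM families.
# Interface widths and data rates below are representative examples only;
# actual products and configurations vary widely.

examples = {
    # name: (data rate per pin in Gb/s, interface width in bits)
    "DDR5-6400 (one 64-bit channel)":     (6.4,   64),
    "LPDDR5X-8533 (one 64-bit bus)":      (8.533, 64),
    "GDDR6 16 Gb/s (one x32 device)":     (16.0,  32),
    "HBM3 6.4 Gb/s (one 1024-bit stack)": (6.4,   1024),
}

for name, (gbps_per_pin, width_bits) in examples.items():
    gb_per_s = gbps_per_pin * width_bits / 8   # peak theoretical GB/s
    print(f"{name:38s} ~{gb_per_s:7.1f} GB/s peak")
```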

Arithmetic Intensity In Decoding: A Hardware-Efficient Perspective (Princeton University)


A new technical paper titled "Hardware-Efficient Attention for Fast Decoding" was published by researchers at Princeton University. Abstract "LLM decoding is bottlenecked for large batches and long contexts by loading the key-value (KV) cache from high-bandwidth memory, which inflates per-token latency, while the sequential nature of decoding limits parallelism. We analyze the interplay amo... » read more

HBM4 Elevates AI Training Performance To New Heights


Generative and Agentic AI are pushing an extremely rapid evolution of computing technology. With leading-edge LLMs now in excess of a trillion parameters, training takes an enormous amount of computing capacity, and state-of-the-art training clusters can employ more than 100,000 GPUs. High Bandwidth Memory (HBM) provides the vast memory bandwidth and capacity needed for these demanding AI train... » read more
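
For a sense of scale, the sketch below applies a commonly used rule of thumb (roughly 16 bytes of model state per parameter for mixed-precision training with Adam) to a trillion-parameter model. The per-GPU HBM capacity and the rule of thumb itself are assumptions for illustration, not figures from this article.

```python
# Why trillion-parameter training needs so much HBM: a common rule of thumb
# for mixed-precision training with Adam is roughly 16 bytes of model state
# per parameter (fp16 weights + fp16 gradients + fp32 master weights,
# momentum, and variance). Activations, KV caches, and communication
# buffers come on top of this. All numbers here are illustrative.

PARAMS = 1.0e12                 # one trillion parameters
BYTES_PER_PARAM_STATE = 16      # rule-of-thumb model-state footprint
HBM_PER_GPU_GB = 144            # assumed HBM capacity per accelerator

state_tb = PARAMS * BYTES_PER_PARAM_STATE / 1e12
gpus_for_state = state_tb * 1e12 / (HBM_PER_GPU_GB * 1e9)

print(f"Model state alone: ~{state_tb:.0f} TB")
print(f"Minimum GPUs just to hold that state: ~{gpus_for_state:.0f}")
```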

Three-Way Race To 3D-ICs


Intel Foundry, TSMC, and Samsung Foundry are scrambling to deliver all the foundational components of full 3D-ICs, which collectively will deliver orders-of-magnitude improvements in performance with minimal increases in power sometime within the next few years. Much attention has been focused on process node advances, but a successful 3D-IC implementation is much more complex and comprehensive than just... » read more

2030 Data Center AI Chip Winners: The Trillion Dollar Club


At the start of 2025, I believed AI was overhyped, ASICs were a niche, and a market pullback was inevitable. My long-term view has changed dramatically. AI technology and adoption are accelerating at an astonishing pace. One of the GenAI/LLM leaders, or Nvidia, will be the first $10 trillion market cap company by 2030. Large language models (LLMs) are rapidly improving in both capability and ... » read more

The Rise Of Thin Wafer Processing


The shift from planar SoCs to 3D-ICs and advanced packages requires much thinner wafers in order to improve performance and reduce power, shrinking the distance that signals need to travel and the amount of energy needed to drive them. Markets calling for ultrathin wafers are growing. The combined thickness of an HBM module with 12 DRAM dies and a base logic chip is still less than that of a ... » read more
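
As a hedged illustration of how aggressive the thinning is, the arithmetic below stacks up assumed die and bond-layer thicknesses for a 12-high HBM module and compares the total against a standard unthinned 300 mm wafer; all thickness values are assumptions, not measurements from the article.

```python
# Illustrative arithmetic for a thinned 12-high HBM stack.
# Die and bond-layer thicknesses below are assumptions for illustration.

DRAM_DIE_UM = 30        # assumed thinned DRAM core die thickness
BOND_LAYER_UM = 10      # assumed bond/microbump layer per interface
BASE_DIE_UM = 60        # assumed thicker base logic die
N_DRAM = 12

stack_um = N_DRAM * DRAM_DIE_UM + N_DRAM * BOND_LAYER_UM + BASE_DIE_UM
full_wafer_um = 775     # typical as-manufactured 300 mm wafer thickness

print(f"12-high stack + base die: ~{stack_um} um")
print(f"Unthinned 300 mm wafer:    {full_wafer_um} um")
```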

3D-IC For The Masses


The concepts of 3D-IC and chiplets have the whole industry excited. They potentially mark the next stage in the evolution of the IP industry, but so far, technical difficulties and cost have limited their usage to just a handful of companies. Even those companies do not appear to be seeing the benefits of heterogeneous integration or reuse. Attempts to make this happen are not new. "A decade... » read more

Speeding Down Memory Lane With Custom HBM


With the goal of increasing system performance per watt, the semiconductor industry is always seeking innovative solutions that go beyond the usual approaches of increasing memory capacity and data rates. Over the last decade, the High Bandwidth Memory (HBM) protocol has proven to be a popular choice for data center and high-performance computing (HPC) applications. Even more benefit can be rea... » read more
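
To put performance per watt in concrete terms, the sketch below estimates the power needed to sustain a fixed amount of memory traffic at different energy-per-bit costs; the pJ/bit values are rough ballpark assumptions for illustration, not vendor specifications.

```python
# Rough illustration of memory "performance per watt": the power needed to
# sustain 1 TB/s of memory traffic at different energy-per-bit costs.
# The pJ/bit figures are ballpark assumptions for illustration only.

TARGET_BW_TBPS = 1.0   # terabytes per second of sustained traffic

energy_pj_per_bit = {
    "HBM-class (on-package)":  4.0,
    "GDDR-class (on-board)":   7.5,
    "DDR-class (DIMM)":       20.0,
}

bits_per_second = TARGET_BW_TBPS * 1e12 * 8
for name, pj in energy_pj_per_bit.items():
    watts = bits_per_second * pj * 1e-12
    print(f"{name:24s} ~{watts:5.0f} W for {TARGET_BW_TBPS} TB/s")
```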
