Redefining XPU Memory For AI Data Centers Through Custom HBM4: Part 1


This is the first of a three-part series on HBM4 and gives an overview of the HBM standard. Part 2 will provide insights into HBM implementation challenges, and Part 3 will introduce the concept of a custom HBM implementation. Relentless growth in data consumption: Recent advances in deep learning have had a transformative effect on artificial intelligence (AI) and the ever-increasing volume of ... » read more

HBM Options Increase As AI Demand Soars


High-bandwidth memory (HBM) sales are spiking as the amount of data that needs to be processed quickly by state-of-the-art AI accelerators, graphics processing units, and high-performance computing applications continues to explode. HBM inventories are sold out, driven by massive efforts and investments in developing and improving large language models such as ChatGPT. HBM is the memory of ch... » read more

Extending The DDR5 Roadmap With MRDIMM


Given the voracious memory bandwidth and capacity demands of Gen AI and other advanced workloads, we’ve seen a rapid progression through the generations of DDR5 memory. Multiplexed Registered DIMMs (MRDIMMs) offer a new memory module architecture capable of extending the DDR5 roadmap and expanding the capabilities of server main memory. MRDIMM reuses the lion’s share of existing DDR5 infras... » read more

GDDR7 Memory Supercharges AI Inference


GDDR7 is the state-of-the-art graphics memory solution with a performance roadmap of up to 48 Gigatransfers per second (GT/s) and memory throughput of 192 GB/s per GDDR7 memory device. The next generation of GPUs and accelerators for AI inference will use GDDR7 memory to provide the memory bandwidth needed for these demanding workloads. AI comprises two applications: training and inference. With tr... » read more
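
As a quick sanity check on those headline numbers, per-device throughput follows directly from the data rate and the device interface width. The sketch below assumes a x32 GDDR7 device interface; the helper function and its defaults are purely illustrative and not taken from the article.

```python
# Back-of-the-envelope check of the per-device GDDR7 bandwidth quoted above.
# Assumption: a x32 device interface (the standard GDDR7 device configuration);
# the function name and defaults here are illustrative, not from the article.

def gddr7_device_bandwidth_gb_s(data_rate_gt_s: float, interface_width_bits: int = 32) -> float:
    """Peak per-device bandwidth in GB/s = data rate (GT/s) * width (bits) / 8 bits per byte."""
    return data_rate_gt_s * interface_width_bits / 8


if __name__ == "__main__":
    # 48 GT/s * 32 bits / 8 = 192 GB/s per device, matching the figure above.
    print(gddr7_device_bandwidth_gb_s(48.0))  # -> 192.0
```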

Research Bits: Sept. 24


Modeling negative capacitance: Researchers from Lawrence Berkeley National Laboratory developed an open-source 3D simulation framework capable of modeling the atomistic origins of negative capacitance in ferroelectric thin films at the device level. When a material has negative capacitance, it can store a greater amount of electrical charge at lower voltages. The team believes the FerroX fra... » read more

HBM4 Feeds Generative AI’s Hunger For More Memory Bandwidth


Generative AI (Gen AI), built on the exponential growth of Large Language Models (LLMs) and their kin, is one of today’s biggest drivers of computing technology. Leading-edge LLMs now exceed a trillion parameters and offer multimodal capabilities so they can take a broad range of inputs, including text, speech, images, video, and code, and generate an equally broa... » read more

DDR5 12.8Gbps MRDIMM IP: Powering The Future Of AI, HPC, And Data Centers


The demand for higher-performance computing is greater than ever. Cutting-edge applications in artificial intelligence (AI), big data analytics, and databases require high-speed memory systems to handle the ever-increasing volumes and complexities of data. Advancements in cloud computing and machine virtualization are stretching the limits of current capabilities. AI applications hosted in the ... » read more

GDDR7: The Ideal Memory Solution In AI Inference


The generative AI market is experiencing rapid growth, driven by the increasing parameter size of Large Language Models (LLMs). This growth is pushing the boundaries of performance requirements for training hardware within data centers. For an in-depth look at this, consider the insights provided in "HBM3E: All About Bandwidth". Once trained, these models are deployed across a diverse range of... » read more

Memory Implications Of Gen AI In Gaming


The global gaming market across hardware, software, and services is on track to exceed annual revenues of $500B in 2025.¹ That’s bigger by an order of magnitude than movies and music combined. On the cutting edge of that enormous market is open world gaming, where the driving goal is to give players the freedom to do anything they can imagine in a coherent and immersive environment. ... » read more

Are You Ready For HBM4? A Silicon Lifecycle Management (SLM) Perspective


Many factors are driving system-on-chip (SoC) developers to adopt multi-die technology, in which multiple dies are stacked in a three-dimensional (3D) configuration. Multi-die systems may make power and thermal issues more complex, and they have required major innovations in electronic design automation (EDA) implementation and test tools. These challenges are more than offset by the advantages... » read more
