HW-based Heterogeneous Memory Management for LLM Inferencing (KAIST, Stanford University)

A new technical paper titled “Hardware-based Heterogeneous Memory Management for Large Language Model Inference” was published by researchers at KAIST and Stanford University.

Abstract
“A large language model (LLM) is one of the most important emerging machine learning applications nowadays. However, due to its huge model size and runtime increase of the memory footprint, LLM inferences suffer from the lack of memory capacity in conventional systems consisting of multiple GPUs with a modest amount of high bandwidth memory. Moreover, since LLM contains many bandwidth-intensive kernels, only focusing on the memory capacity without considering the bandwidth incurs a serious performance degradation. To handle such conflicting memory capacity and bandwidth demands in a cost-effective way, this study investigates the potential of heterogeneous memory systems, proposing H2M2. It uses an asymmetric memory architecture consisting of capacity-centric and bandwidth-centric memory with computation units attached to each memory device. With the asymmetric memory, we first analyze the effect of kernel-memory mapping for the asymmetric memory. Second, we propose a dynamic runtime algorithm that finds a mapping solution considering the characteristics of LLM operations and the change of footprint during LLM inference. Third, we advocate the need for memory abstraction for the efficient management of the asymmetric memory. H2M2 outperforms the conventional homogeneous memory system with LPDDR by 1.46x, 1.55x, and 2.94x speedup in GPT3-175B, Chinchilla-70B, and Llama2-70B, respectively.”
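
To make the kernel-to-memory mapping idea concrete, here is a rough Python toy, not H2M2's actual algorithm: it greedily places bandwidth-bound kernels in a bandwidth-centric tier while that tier has room and spills everything else to the capacity-centric tier. All class names, thresholds, and numbers below are illustrative assumptions, not values from the paper.

```python
# Toy sketch (not from the paper): greedy kernel-to-memory mapping for an
# asymmetric memory system with a bandwidth-centric tier (HBM-like) and a
# capacity-centric tier (LPDDR-like). The scoring heuristic and all numbers
# are hypothetical.
from dataclasses import dataclass

@dataclass
class Kernel:
    name: str
    footprint_gb: float    # memory the kernel's operands occupy
    bytes_per_flop: float  # higher => more bandwidth-bound

@dataclass
class Tier:
    name: str
    capacity_gb: float
    used_gb: float = 0.0

    def fits(self, k: Kernel) -> bool:
        return self.used_gb + k.footprint_gb <= self.capacity_gb

    def place(self, k: Kernel) -> None:
        self.used_gb += k.footprint_gb

def map_kernels(kernels, bw_tier, cap_tier, bw_threshold=0.5):
    """Greedy mapping: bandwidth-bound kernels prefer the bandwidth-centric
    tier while it has room; everything else (and any spill) goes to the
    capacity-centric tier. A real runtime would re-evaluate this mapping as
    the footprint (e.g., the KV cache) grows during inference."""
    mapping = {}
    # Place the most bandwidth-hungry kernels first.
    for k in sorted(kernels, key=lambda k: k.bytes_per_flop, reverse=True):
        if k.bytes_per_flop >= bw_threshold and bw_tier.fits(k):
            bw_tier.place(k)
            mapping[k.name] = bw_tier.name
        else:
            cap_tier.place(k)
            mapping[k.name] = cap_tier.name
    return mapping

if __name__ == "__main__":
    kernels = [
        Kernel("attention_decode", footprint_gb=40.0, bytes_per_flop=2.0),
        Kernel("ffn_gemm", footprint_gb=80.0, bytes_per_flop=0.1),
        Kernel("kv_cache_append", footprint_gb=120.0, bytes_per_flop=4.0),
    ]
    bw = Tier("bandwidth-centric", capacity_gb=80.0)
    cap = Tier("capacity-centric", capacity_gb=512.0)
    print(map_kernels(kernels, bw, cap))
```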

Find the technical paper here: https://arxiv.org/abs/2504.14893. April 2025.

arXiv:2504.14893v1
Authors: Soojin Hwang, Jungwoo Kim, Sanghyeon Lee, Hongbeen Kim, Jaehyuk Huh


