DL Compiler for Efficiently Utilizing Inter-Core Connected AI Chips (UIUC, Microsoft)

A new technical paper titled “Scaling Deep Learning Computation over the Inter-Core Connected Intelligence Processor” was published by researchers at UIUC and Microsoft Research.

Abstract
“As AI chips incorporate numerous parallelized cores to scale deep learning (DL) computing, inter-core communication is enabled recently by employing high-bandwidth and low-latency interconnect links on the chip (e.g., Graphcore IPU). It allows each core to directly access the fast scratchpad memory in other cores, which enables new parallel computing paradigms. However, without proper support for the scalable inter-core connections in current DL compilers, it is hard for developers to exploit the benefits of this new architecture.
We present T10, the first DL compiler to exploit the inter-core communication bandwidth and distributed on-chip memory on AI chips. To formulate the computation and communication patterns of tensor operators in this new architecture, T10 introduces a distributed tensor abstraction rTensor. T10 maps a DNN model to execution plans with a generalized compute-shift pattern, by partitioning DNN computation into sub-operators and mapping them to cores, so that the cores can exchange data following predictable patterns. T10 makes globally optimized trade-offs between on-chip memory consumption and inter-core communication overhead, selects the best execution plan from a vast optimization space, and alleviates unnecessary inter-core communications. Our evaluation with a real inter-core connected AI chip, the Graphcore IPU, shows up to 3.3× performance improvement, and scalability support for larger models, compared to state-of-the-art DL compilers and vendor libraries.”
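The "compute-shift" pattern the abstract describes, in which each core computes on its local shard and then passes data to a neighbor in a predictable pattern, can be illustrated with a minimal sketch. The following NumPy simulation of a ring-based compute-shift matrix multiply is a hypothetical illustration only, not T10's actual abstraction or API: the function and variable names (compute_shift_matmul, num_cores, etc.) are invented here, and the inter-core shard transfer is modeled by rotating an index rather than performing a real on-chip DMA.

```python
import numpy as np

def compute_shift_matmul(A, B, num_cores):
    """Simulate a ring-based compute-shift matrix multiply C = A @ B.

    Each "core" owns one row block of A and, initially, one row shard
    of B. In each step, every core multiplies its A block against the
    B shard it currently holds and accumulates the partial product;
    the shards then shift one position around the ring. After
    num_cores steps, each core holds its finished row block of C
    without B ever being replicated on every core.
    """
    m, k = A.shape
    k2, n = B.shape
    assert k == k2 and m % num_cores == 0 and k % num_cores == 0

    A_blocks = np.split(A, num_cores, axis=0)   # core i owns A block i, (m/p, k)
    B_shards = np.split(B, num_cores, axis=0)   # core i starts with B shard i, (k/p, n)
    C_blocks = [np.zeros((m // num_cores, n)) for _ in range(num_cores)]
    kc = k // num_cores  # width of the A column slice matching one B shard

    for step in range(num_cores):
        for i in range(num_cores):
            # After `step` ring shifts, core i holds B shard j = (i + step) mod p.
            # (The shift is modeled by advancing the index instead of moving data.)
            j = (i + step) % num_cores
            C_blocks[i] += A_blocks[i][:, j * kc:(j + 1) * kc] @ B_shards[j]

    return np.vstack(C_blocks)

# Quick check against a plain matmul.
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 8))
B = rng.standard_normal((8, 4))
assert np.allclose(compute_shift_matmul(A, B, num_cores=4), A @ B)
```

This sketch fixes one arbitrary partitioning; per the abstract, T10's contribution is searching a vast space of such execution plans and making globally optimized trade-offs between on-chip memory consumption and inter-core communication overhead.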

Read the technical paper here (preprint, August 2024).

Liu, Yiqi, Yuqi Xue, Yu Cheng, Lingxiao Ma, Ziming Miao, Jilong Xue, and Jian Huang. “Scaling Deep Learning Computation over the Inter-Core Connected Intelligence Processor.” arXiv preprint arXiv:2408.04808 (2024).


