
Using Sparseloop in Hardware Accelerator Design Flows (MIT)


A technical paper titled “Sparseloop: An Analytical Approach To Sparse Tensor Accelerator Modeling” was published by researchers at MIT and NVIDIA. The paper won the Distinguished Artifact Award at the MICRO 2022 conference.

Find the technical paper here. Published October 2022. The project website is here and the GitHub repository is here.

Abstract: “In recent years, many accelerators have been proposed to efficiently process sparse tensor algebra applications (e.g., sparse neural networks). However, these proposals are single points in a large and diverse design space. The lack of systematic description and modeling support for these sparse tensor accelerators impedes hardware designers from efficient and effective design space exploration. This paper first presents a unified taxonomy to systematically describe the diverse sparse tensor accelerator design space. Based on the proposed taxonomy, it then introduces Sparseloop, the first fast, accurate, and flexible analytical modeling framework to enable early-stage evaluation and exploration of sparse tensor accelerators. Sparseloop comprehends a large set of architecture specifications, including various dataflows and sparse acceleration features (e.g., elimination of zero-based compute). Using these specifications, Sparseloop evaluates a design’s processing speed and energy efficiency while accounting for data movement and compute incurred by the employed dataflow, including the savings and overhead introduced by the sparse acceleration features using stochastic density models. Across representative accelerator designs and workloads, Sparseloop achieves over 2000× faster modeling speed than cycle-level simulations, maintains relative performance trends, and achieves 0.1% to 8% average error. The paper also presents example use cases of Sparseloop in different accelerator design flows to reveal important design insights.”

Citation: Y. N. Wu, P. Tsai, A. Parashar, V. Sze, J. Emer, “Sparseloop: An Analytical Approach to Sparse Tensor Accelerator Modeling,” ACM/IEEE International Symposium on Microarchitecture (MICRO), October 2022.
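
As a rough illustration of the kind of analytical estimate the abstract describes, the sketch below applies a simple stochastic density model to a matrix multiply: assuming independently and uniformly distributed nonzeros, it estimates how many multiply-accumulates remain after zero-operand skipping. This is a hypothetical back-of-the-envelope example, not Sparseloop’s actual model or API; the function names and the Bernoulli-style independence assumption are illustrative only.

```python
# Illustrative sketch only -- not Sparseloop's actual API or density model.
# It shows the general idea described in the abstract: an analytical
# (non-simulation) estimate of compute, with a simple stochastic density
# model accounting for the savings from skipping zero-operand multiplies.

def dense_mac_count(M, K, N):
    """MACs for a dense (M x K) by (K x N) matrix multiply."""
    return M * K * N

def expected_effectual_macs(M, K, N, density_a, density_b):
    """Expected MACs that still execute when zero operands are skipped,
    under a (hypothetical) Bernoulli-style model: nonzeros are assumed
    independent and uniformly distributed at the given densities."""
    return dense_mac_count(M, K, N) * density_a * density_b

if __name__ == "__main__":
    M, K, N = 256, 256, 256
    dense = dense_mac_count(M, K, N)
    # e.g., a 30%-dense activation tensor and a 50%-dense weight tensor
    effectual = expected_effectual_macs(M, K, N, density_a=0.3, density_b=0.5)
    print(f"dense MACs:    {dense}")
    print(f"expected MACs: {effectual:.0f} ({effectual / dense:.0%} of dense)")
```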

Related Reading
How To Optimize A Processor
There are at least three architectural layers to processor design, each of which plays a significant role.
Chip Design Shifts As Fundamental Laws Run Out Of Steam
How prepared the EDA community is to address upcoming challenges isn’t clear.
IC Architectures Shift As OEMs Narrow Their Focus
As chip companies customize designs, the number of possible pitfalls is growing. Tighter partnerships and acquisitions may help.
Complex Tradeoffs In Inferencing Chips
One size does not fit all, and each processor type has its own benefits and drawbacks.


