SOT-MRAM-based CIM architecture for a CNN model

A new technical paper titled “In-Memory Computing Architecture for a Convolutional Neural Network Based on Spin Orbit Torque MRAM” was published by researchers at National Taiwan University, Feng Chia University, and Chung Yuan Christian University.

Abstract
“Recently, numerous studies have investigated computing in-memory (CIM) architectures for neural networks to overcome memory bottlenecks. Because of its low delay, high energy efficiency, and low volatility, spin-orbit torque magnetic random access memory (SOT-MRAM) has received substantial attention. However, previous studies used calculation circuits to support complex calculations, leading to substantial energy consumption. Therefore, our research proposes a new CIM architecture with small peripheral circuits; this architecture achieved higher performance relative to other CIM architectures when processing convolution neural networks (CNNs). We included a distributed arithmetic (DA) algorithm to improve the efficiency of the CIM calculation method by reducing the excessive read/write times and execution steps of CIM-based CNN calculation circuits. Furthermore, our method also uses SOT-MRAM to increase the calculation speed and reduce power consumption. Compared with CIM-based CNN arithmetic circuits in previous studies, our method can achieve shorter clock periods and reduce read times by up to 43.3% without the need for additional circuits.”
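
The key idea in the abstract, distributed arithmetic (DA), replaces the per-element multiplications in a convolution's dot products with bit-serial lookups of precomputed weight sums, which is what allows one memory read per input bit-plane instead of many multiply-accumulate steps. Below is a minimal software sketch of classic DA, assuming unsigned 8-bit inputs and a plain lookup table; the function and variable names are illustrative, and the paper instead maps the table and accumulation onto the SOT-MRAM array and its peripheral circuits, details of which are not reproduced here.

```python
# Distributed-arithmetic dot product: an illustrative sketch of the general
# DA technique, not the paper's circuit. Bit width and names are assumptions.

def da_dot_product(weights, inputs, n_bits=8):
    """Compute sum(w*x) bit-serially: for each input bit-plane, look up a
    precomputed partial sum of the weights, then shift-and-add."""
    k = len(weights)
    # Precompute all 2^k partial sums of the weights; in a CIM design this
    # table would be held in the memory array rather than in logic.
    lut = [
        sum(w for i, w in enumerate(weights) if (addr >> i) & 1)
        for addr in range(2 ** k)
    ]
    acc = 0
    for b in range(n_bits):                  # one lookup (read) per bit-plane
        addr = 0
        for i, x in enumerate(inputs):
            addr |= ((x >> b) & 1) << i      # gather bit b of every input
        acc += lut[addr] << b                # shift-and-add the partial sum
    return acc

# Sanity check against a direct multiply-accumulate.
w, x = [3, -1, 2, 5], [10, 7, 0, 255]
assert da_dot_product(w, x) == sum(wi * xi for wi, xi in zip(w, x))
```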

Find the open-access technical paper here. Published April 2022.

Huang, J.-Y.; Syu, J.-L.; Tsou, Y.-T.; Kuo, S.-Y.; Chang, C.-R. In-Memory Computing Architecture for a Convolutional Neural Network Based on Spin Orbit Torque MRAM. Electronics 2022, 11, 1245. https://doi.org/10.3390/electronics11081245

Visit Semiconductor Engineering’s Technical Paper library here and discover many more chip industry academic papers.


