
An Energy-Efficient 10T SRAM In-Memory Computing Macro Architecture For AI Edge Processor


A technical paper titled “An energy-efficient 10T SRAM in-memory computing macro for artificial intelligence edge processor” was published by researchers at Atal Bihari Vajpayee-Indian Institute of Information Technology and Management (ABV-IIITM).

Abstract:

“In-Memory Computing (IMC) is emerging as a new paradigm to address the von-Neumann bottleneck (VNB) in data-intensive applications. In this paper, an energy-efficient 10T SRAM-based IMC macro architecture is proposed to perform logic, arithmetic, and In-memory Dot Product (IMDP) operations. The write and read margins of the proposed 10T SRAM are improved by 40% and 2.5%, respectively, compared to the 9T SRAM. The write energy and leakage power of the proposed 10T SRAM are reduced by 89% and 96.6%, respectively, with similar read energy compared to 9T SRAM. Additionally, a 4 Kb SRAM array based on 10T SRAM is implemented in 180-nm SCL technology to analyze the operation and performance of the proposed IMC macro architecture. The proposed IMC architecture achieves an energy efficiency of 5.3 TOPS/W for 1-bit logic, 4.1 TOPS/W for 1-bit addition, and 3.1 TOPS/W for IMDP operations at 1.8 V and 60 MHz. The area efficiency of 65.2% is achieved for a 136 × 32 array of proposed IMC macro architecture. Further, the proposed IMC macro is also tested for accelerating the IMDP operation of neural networks by importing linearity variation analysis in Tensorflow for image classification on MNIST and CIFAR datasets. According to Monte-Carlo simulations, the IMDP operation has a standard deviation of 0.07 percent in accumulation, equating to a classification accuracy of 97.02% on the MNIST dataset and 88.39% on the CIFAR dataset.”
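
To make the in-memory dot product (IMDP) idea concrete, the short Python sketch below models a bit-serial dot product in which each activation bit plane drives the word lines for one cycle and the analog bit-line accumulation is perturbed by Gaussian variation with the 0.07% standard deviation quoted in the abstract. The bit-serial scheme, the noise model, the 4-bit activations, and the 136-cell column length (borrowed from the 136 × 32 array figure) are illustrative assumptions for this sketch, not details taken from the paper itself.

# Hypothetical sketch (not the authors' model): a bit-serial in-memory
# dot product where each activation bit plane is applied for one cycle
# and the bit-line accumulation carries a small Gaussian variation.
import numpy as np

rng = np.random.default_rng(0)

def imdp(weights, inputs, bits=4, sigma_rel=0.0007):
    """Bit-serial dot product with noisy analog accumulation.

    weights   : 1-D array of signed integer weights along one SRAM column
    inputs    : 1-D array of unsigned integer activations (0 .. 2**bits - 1)
    bits      : activation bit width, one bit plane per cycle
    sigma_rel : relative std. deviation of the accumulation (0.07% here)
    """
    acc = 0.0
    for b in range(bits):
        # One cycle: word lines driven by bit plane b of the activations,
        # bit line sums weight * bit (the analog charge/current summation).
        bit_plane = (inputs >> b) & 1
        partial = float(np.dot(weights, bit_plane))
        # Model the accumulation non-ideality as Gaussian noise relative
        # to the full-scale column sum (a simplifying assumption).
        full_scale = np.abs(weights).sum()
        partial += rng.normal(0.0, sigma_rel * full_scale)
        acc += partial * (1 << b)     # shift-and-add across bit planes
    return acc

# Usage: compare the noisy IMDP result with the exact digital dot product.
w = rng.integers(-8, 8, size=136)     # one 136-cell column (illustrative)
x = rng.integers(0, 16, size=136)     # 4-bit activations
print("ideal :", int(np.dot(w, x)))
print("imdp  :", round(imdp(w, x), 2))

Under these assumptions the perturbed result typically lands within a few counts of the exact digital dot product, which is the kind of small accumulation error the abstract associates with only a modest drop in classification accuracy on MNIST and CIFAR.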

Find the technical paper here. Published August 2023 (preprint).

Rajput, Anil Kumar, Manisha Pattanaik, and Gaurav Kaushal. “An energy-efficient 10T SRAM in-memory computing macro for artificial intelligence edge processor.” Memories-Materials, Devices, Circuits and Systems (2023): 100076.

Related Reading
Processor Tradeoffs For AI Workloads
Gaps are widening between technology advances and demands, and closing them is becoming more difficult.
Static Random Access Memory (SRAM) Knowledge Center


