Architectural Considerations For Compute-In-Memory In AI Inference


Can compute-in-memory (CIM) bring new benefits to artificial intelligence (AI) inference? CIM is not an AI solution; rather, it is a memory management solution. CIM could bring advantages to AI processing by speeding up the multiplication operation at the heart of AI model execution. » read more

Can Compute-In-Memory Bring New Benefits To Artificial Intelligence Inference?


Compute-in-memory (CIM) is not necessarily an artificial intelligence (AI) solution; rather, it is a memory management solution. CIM could bring advantages to AI processing by speeding up the multiplication operation at the heart of AI model execution. However, for that to be successful, an AI processing system would need to be explicitly architected to use CIM. The change would entail a shift ... » read more
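As a rough illustration of the multiplication workload CIM targets (the layer sizes below are arbitrary and not taken from the article), even a single fully connected layer amounts to millions of multiply-accumulate operations:

    import numpy as np

    # Hypothetical layer sizes, chosen only for illustration.
    in_features, out_features = 1024, 4096
    x = np.random.rand(in_features).astype(np.float32)                # input activations
    W = np.random.rand(out_features, in_features).astype(np.float32)  # weight matrix

    # Each output element is a dot product, so one layer costs
    # out_features * in_features multiply-accumulate (MAC) operations.
    y = W @ x
    print("MACs for this single layer:", out_features * in_features)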

A Hierarchical And Tractable Mixed-Signal Verification Methodology For First-Generation Analog AI Processors


Artificial intelligence (AI) is now the key driving force behind advances in information technology, big data and the internet of things (IoT). It is a technology that is developing at a rapid pace, particularly when it comes to the field of deep learning. Researchers are continually creating new variants of deep learning that expand the capabilities of machine learning. But building systems th... » read more

Co-Design View of Cross-Bar Based Compute-In-Memory


A new review paper titled "Compute in-Memory with Non-Volatile Elements for Neural Networks: A Review from a Co-Design Perspective" was published by researchers at Argonne National Lab, Purdue University, and Indian Institute of Technology Madras. "With an over-arching co-design viewpoint, this review assesses the use of cross-bar based CIM for neural networks, connecting the material proper... » read more
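For context, a cross-bar evaluates a matrix-vector product in the analog domain: stored conductances multiply applied voltages, and each bit line sums the resulting currents. The minimal idealized sketch below (device values are arbitrary assumptions, not taken from the review) shows that mapping:

    import numpy as np

    # Idealized crossbar model: weights are stored as cell conductances,
    # inputs are applied as word-line voltages, and each bit line sums the
    # products (Ohm's law per cell, Kirchhoff's current law per column).
    rng = np.random.default_rng(0)
    rows, cols = 64, 128                             # word lines (inputs) x bit lines (outputs)
    G = rng.uniform(1e-6, 1e-4, size=(rows, cols))   # cell conductances (siemens), arbitrary range
    V = rng.uniform(0.0, 0.2, size=rows)             # word-line voltages (volts)

    I = G.T @ V                                      # current collected on each bit line (amperes)
    print(I.shape)                                   # (128,) -- one analog MAC result per bit line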

Research Bits: Jan. 24


Transistor-free compute-in-memory
Researchers from the University of Pennsylvania, Sandia National Laboratories, and Brookhaven National Laboratory propose a transistor-free compute-in-memory (CIM) architecture to overcome memory bottlenecks and reduce power consumption in AI workloads. "Even when used in a compute-in-memory architecture, transistors compromise the access time of data," sai... » read more

Transistor-Free Compute-In-Memory Architecture


A new technical paper titled "Reconfigurable Compute-In-Memory on Field-Programmable Ferroelectric Diodes" was recently published by researchers at the University of Pennsylvania, Sandia National Labs, and Brookhaven National Lab. This compute-in-memory design stands out because it is completely transistor-free. “Even when used in a compute-in-memory architecture, transistors compromise the access... » read more

Simulation Framework to Evaluate the Feasibility of Large-scale DNNs based on CIM Architecture & Analog NVM


A technical paper titled "Accuracy and Resiliency of Analog Compute-in-Memory Inference Engines" from researchers at UCLA. Abstract "Recently, analog compute-in-memory (CIM) architectures based on emerging analog non-volatile memory (NVM) technologies have been explored for deep neural networks (DNNs) to improve scalability, speed, and energy efficiency. Such architectures, however, leverage ... » read more
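A toy version of the accuracy question such frameworks study, assuming simple multiplicative conductance variation rather than the paper's actual device models, might look like this:

    import numpy as np

    # Toy resiliency check (not the paper's framework): perturb an ideal weight
    # matrix with multiplicative conductance variation and measure the resulting
    # matrix-vector error, the basic quantity such simulators evaluate at scale.
    rng = np.random.default_rng(1)
    W = rng.standard_normal((256, 256)).astype(np.float32)
    x = rng.standard_normal(256).astype(np.float32)

    sigma = 0.05                                   # assumed 5% device-to-device variation
    W_noisy = W * (1.0 + sigma * rng.standard_normal(W.shape).astype(np.float32))

    y_ideal, y_noisy = W @ x, W_noisy @ x
    rel_err = np.linalg.norm(y_noisy - y_ideal) / np.linalg.norm(y_ideal)
    print(f"relative MVM error at {sigma:.0%} variation: {rel_err:.3f}")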

SOT-MRAM-based CIM architecture for a CNN model


New research paper "In-Memory Computing Architecture for a Convolutional Neural Network Based on Spin Orbit Torque MRAM" from researchers at National Taiwan University, Feng Chia University, and Chung Yuan Christian University. Abstract "Recently, numerous studies have investigated computing in-memory (CIM) architectures for neural networks to overcome memory bottlenecks. Because of its low delay, high energ... » read more
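As background on how a CNN maps onto a CIM array (a standard lowering, not specific to this paper), convolutions are typically unrolled into matrix-vector products, e.g. via im2col, so each output position becomes one MAC pass through the array; the sizes below are arbitrary:

    import numpy as np

    # Lower a small convolution to matrix-vector products (im2col style),
    # the form a CIM macro evaluates in one pass per output position.
    rng = np.random.default_rng(3)
    c_in, k, c_out, h, w = 3, 3, 8, 8, 8
    image = rng.standard_normal((c_in, h, w))
    kernels = rng.standard_normal((c_out, c_in * k * k))    # one flattened filter per output channel

    out = np.zeros((c_out, h - k + 1, w - k + 1))
    for i in range(h - k + 1):
        for j in range(w - k + 1):
            patch = image[:, i:i + k, j:j + k].reshape(-1)  # one input vector per output pixel
            out[:, i, j] = kernels @ patch                  # one matrix-vector MAC pass
    print(out.shape)                                        # (8, 6, 6)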

Nonvolatile Capacitive Crossbar Array for In-Memory Computing


Abstract "Conventional resistive crossbar array for in-memory computing suffers from high static current/power, serious IR drop, and sneak paths. In contrast, the “capacitive” crossbar array that harnesses transient current and charge transfer is gaining attention as it 1) only consumes dynamic power, 2) has no DC sneak paths and avoids severe IR drop (thus, selector-free), and 3) can be f... » read more

NeuroSim Simulator for Compute-in-Memory Hardware Accelerator: Validation and Benchmark


Abstract: "Compute-in-memory (CIM) is an attractive solution to process the extensive workloads of multiply-and-accumulate (MAC) operations in deep neural network (DNN) hardware accelerators. A simulator with options of various mainstream and emerging memory technologies, architectures, and networks can be a great convenience for fast early-stage design space exploration of CIM hardw... » read more
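In the same spirit of early design-space exploration, though far simpler than NeuroSim itself, a quick estimate might just count MACs per layer and scale by an assumed energy-per-MAC figure (all numbers below are placeholders):

    # Back-of-the-envelope sketch (not NeuroSim): count MACs per layer and
    # scale by a placeholder energy-per-MAC figure for a hypothetical CIM macro.
    layers = [
        ("conv1", 3 * 3 * 3 * 64 * 32 * 32),       # hypothetical MAC counts per layer
        ("conv2", 3 * 3 * 64 * 128 * 16 * 16),
        ("fc",    4096 * 1000),
    ]
    energy_per_mac_pj = 0.1                        # placeholder value, not a measured number

    total_macs = sum(macs for _, macs in layers)
    print(f"total MACs: {total_macs:,}")
    print(f"estimated MAC energy: {total_macs * energy_per_mac_pj / 1e6:.2f} uJ")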
