SOT-MRAM-based CIM architecture for a CNN model


New research paper "In-Memory Computing Architecture for a Convolutional Neural Network Based on Spin Orbit Torque MRAM", from National Taiwan University, Feng Chia University, Chung Yuan Christian University. Abstract "Recently, numerous studies have investigated computing in-memory (CIM) architectures for neural networks to overcome memory bottlenecks. Because of its low delay, high energ... » read more

Nonvolatile Capacitive Crossbar Array for In-Memory Computing


Abstract "Conventional resistive crossbar array for in-memory computing suffers from high static current/power, serious IR drop, and sneak paths. In contrast, the “capacitive” crossbar array that harnesses transient current and charge transfer is gaining attention as it 1) only consumes dynamic power, 2) has no DC sneak paths and avoids severe IR drop (thus, selector-free), and 3) can be f... » read more

NeuroSim Simulator for Compute-in-Memory Hardware Accelerator: Validation and Benchmark


Abstract: "Compute-in-memory (CIM) is an attractive solution to process the extensive workloads of multiply-and-accumulate (MAC) operations in deep neural network (DNN) hardware accelerators. A simulator with options of various mainstream and emerging memory technologies, architectures, and networks can be a great convenience for fast early-stage design space exploration of CIM hardw...
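To make the "early-stage design space exploration" concrete, here is a back-of-envelope sketch of the kind of estimate such a simulator automates. This is not NeuroSim's API; the class name CimArray and every parameter value are hypothetical placeholders, not calibrated numbers.

```python
# Hedged sketch of an array-level energy/latency estimate for a CIM accelerator.
# NOT NeuroSim's API; all parameters below are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class CimArray:
    rows: int = 128                 # wordlines (inputs summed per column)
    cols: int = 128                 # bitlines (parallel MACs per access)
    e_cell_read_fj: float = 1.0     # energy per cell read, fJ (assumed)
    e_adc_fj: float = 100.0         # energy per column ADC conversion, fJ (assumed)
    t_access_ns: float = 50.0       # latency of one full-array access, ns (assumed)

    def energy_per_access_fj(self) -> float:
        # One access activates every cell once and converts every column once.
        return self.rows * self.cols * self.e_cell_read_fj + self.cols * self.e_adc_fj

    def macs_per_access(self) -> int:
        return self.rows * self.cols

    def estimate(self, total_macs: int) -> dict:
        accesses = -(-total_macs // self.macs_per_access())  # ceiling division
        return {
            "accesses": accesses,
            "energy_uJ": accesses * self.energy_per_access_fj() * 1e-9,
            "latency_ms": accesses * self.t_access_ns * 1e-6,
        }

# Example: one 3x3x64 -> 64 conv layer on a 32x32 feature map.
layer_macs = 32 * 32 * 64 * (3 * 3 * 64)
print(CimArray().estimate(layer_macs))
```

Varying the assumed rows, cols, and ADC energy in this kind of model is what lets a simulator compare memory technologies and array organizations before any circuit design exists.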

Power/Performance Bits: Dec. 7


Logic-in-memory with MoS2
Engineers at École Polytechnique Fédérale de Lausanne (EPFL) built a logic-in-memory device using molybdenum disulfide (MoS2) as the channel material. MoS2 is a three-atom-thick 2D material and an excellent semiconductor. The new chip is based on floating-gate field-effect transistors (FGFETs) that can hold electric charges for long periods. MoS2 is particularly se...
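As a hedged aside (not EPFL's design), the basic floating-gate relationship that lets one transistor both store a bit and take part in logic is that charge trapped on the floating gate shifts the threshold voltage seen from the control gate, roughly ΔVth ≈ -Q_fg / C_cg. The values in the sketch below are illustrative placeholders, not measured MoS2 device parameters.

```python
# Hedged sketch: floating-gate threshold shift as the link between storage
# and logic. All numbers are assumed, not MoS2 FGFET measurements.

VTH0 = 0.5        # intrinsic threshold voltage with no stored charge (V), assumed
C_CG = 1e-15      # control-gate to floating-gate capacitance (F), assumed

def effective_vth(q_fg: float) -> float:
    """Threshold voltage after programming; q_fg < 0 for trapped electrons."""
    return VTH0 - q_fg / C_CG

def conducts(v_gate: float, q_fg: float) -> bool:
    """1-bit 'read': does the channel turn on at this gate voltage?"""
    return v_gate > effective_vth(q_fg)

# Erased cell vs. programmed cell (electrons stored), read at the same bias.
erased, programmed = 0.0, -1.2e-15   # stored charge (C), assumed
v_read = 1.0
print(conducts(v_read, erased))      # True  -> reads as one logic state
print(conducts(v_read, programmed))  # False -> Vth shifted above the read bias
```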

Spiking Neural Networks Place Data In Time


Artificial neural networks have found a variety of commercial applications, from facial recognition to recommendation engines. Compute-in-memory accelerators seek to improve the computational efficiency of these networks by helping to overcome the von Neumann bottleneck. But the success of artificial neural networks also highlights their inadequacies. They replicate only a small subset of th...

Compute-In Memory Accelerators Up-End Network Design Tradeoffs


An explosion in the amount of data, coupled with the negative impact on performance and power for moving that data, is rekindling interest around in-memory processing as an alternative to moving data back and forth between the memory and the processor. Compute-in-memory (CIM) arrays based on conventional memory elements like DRAM and NAND flash, as well as emerging non-volatile memori...
