A Hierarchical and Tractable Mixed-Signal Verification Methodology for First-Generation Analog AI Processors


Artificial intelligence (AI) is now the key driving force behind advances in information technology, big data, and the internet of things (IoT). The technology is developing at a rapid pace, particularly in the field of deep learning, where researchers continually create new variants that expand the capabilities of machine learning. But building systems th...

Co-Design View of Cross-Bar Based Compute-In-Memory


A new review paper titled "Compute in-Memory with Non-Volatile Elements for Neural Networks: A Review from a Co-Design Perspective" was published by researchers at Argonne National Laboratory, Purdue University, and the Indian Institute of Technology Madras. "With an over-arching co-design viewpoint, this review assesses the use of cross-bar based CIM for neural networks, connecting the material proper...
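For readers new to the idea, the following minimal sketch (not from the review; the conductance and voltage values are illustrative) shows why a crossbar is attractive for neural networks: with weights stored as cell conductances, Ohm's law and Kirchhoff's current law perform an entire matrix-vector multiply in a single analog step.

```python
import numpy as np

# Minimal sketch of an ideal crossbar MAC (illustrative values only):
# weights are stored as device conductances G (siemens); inputs arrive
# as row voltages V (volts). Each cell passes I = G*V by Ohm's law, and
# Kirchhoff's current law sums the currents on every column bit line.

rng = np.random.default_rng(0)

n_rows, n_cols = 4, 3                            # 4 inputs, 3 outputs
G = rng.uniform(1e-6, 1e-4, (n_rows, n_cols))    # cell conductances (S)
V = rng.uniform(0.0, 0.2, n_rows)                # read voltages (V)

I_columns = V @ G                                # column currents = analog MAC
print(I_columns)                                 # one current per output column
```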

Research Bits: Jan. 24


Transistor-free compute-in-memory: Researchers from the University of Pennsylvania, Sandia National Laboratories, and Brookhaven National Laboratory propose a transistor-free compute-in-memory (CIM) architecture to overcome memory bottlenecks and reduce power consumption in AI workloads. "Even when used in a compute-in-memory architecture, transistors compromise the access time of data," sai...

Transistor-Free Compute-In-Memory Architecture


A new technical paper titled "Reconfigurable Compute-In-Memory on Field-Programmable Ferroelectric Diodes" was recently published by researchers at the University of Pennsylvania, Sandia National Laboratories, and Brookhaven National Laboratory. What sets this compute-in-memory design apart is that it is completely transistor-free. “Even when used in a compute-in-memory architecture, transistors compromise the access...
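As a rough intuition for how a two-terminal memory cell can stand in for a transistor-accessed one, here is a toy model (my illustration, not the paper's device physics; the coercive voltage and resistance values are assumed): write pulses above the coercive voltage flip the polarization state, while low-bias reads leave the stored state intact.

```python
# Toy model of a ferroelectric diode bit cell (assumed parameters, not
# the paper's measured device): polarization programs the effective
# resistance; reads use a small bias and need no access transistor.

V_COERCIVE = 3.0             # assumed coercive voltage (V)
R_LOW, R_HIGH = 1e4, 1e7     # assumed on/off resistances (ohms)

class FeDiode:
    def __init__(self):
        self.polarization = -1            # -1 -> high-resistance state

    def write(self, v_pulse):
        # Only pulses beyond the coercive voltage switch polarization.
        if abs(v_pulse) >= V_COERCIVE:
            self.polarization = 1 if v_pulse > 0 else -1

    def read_current(self, v_read=0.1):
        # Low-bias, non-destructive read of the stored state.
        r = R_LOW if self.polarization > 0 else R_HIGH
        return v_read / r

d = FeDiode()
print(d.read_current())    # high-resistance state
d.write(4.0)               # program to low-resistance state
print(d.read_current())
```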

Simulation Framework to Evaluate the Feasibility of Large-Scale DNNs Based on CIM Architecture and Analog NVM


A technical paper titled "Accuracy and Resiliency of Analog Compute-in-Memory Inference Engines" was published by researchers at UCLA. Abstract: "Recently, analog compute-in-memory (CIM) architectures based on emerging analog non-volatile memory (NVM) technologies have been explored for deep neural networks (DNNs) to improve scalability, speed, and energy efficiency. Such architectures, however, leverage ...
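The basic evaluation loop such frameworks automate can be sketched in a few lines (a generic illustration, not the UCLA framework; the 5% variation figure is assumed): perturb trained weights with a device-variation model and measure how far the analog MAC results drift from the ideal ones.

```python
import numpy as np

# Generic accuracy-degradation probe (not the paper's framework):
# analog CIM maps weights onto imperfect NVM devices, so each stored
# weight deviates from its ideal value. Injecting that deviation into a
# trained layer estimates the inference error before committing to silicon.

rng = np.random.default_rng(1)

W = rng.standard_normal((256, 128))     # ideal trained weights
x = rng.standard_normal(128)

sigma = 0.05                            # assumed device-variation std dev
W_analog = W * (1 + sigma * rng.standard_normal(W.shape))

y_ideal = W @ x
y_analog = W_analog @ x
rel_err = np.linalg.norm(y_analog - y_ideal) / np.linalg.norm(y_ideal)
print(f"relative MAC error from device variation: {rel_err:.3%}")
```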

SOT-MRAM-Based CIM Architecture for a CNN Model


A new research paper, "In-Memory Computing Architecture for a Convolutional Neural Network Based on Spin Orbit Torque MRAM," comes from researchers at National Taiwan University, Feng Chia University, and Chung Yuan Christian University. Abstract: "Recently, numerous studies have investigated computing in-memory (CIM) architectures for neural networks to overcome memory bottlenecks. Because of its low delay, high energ...
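Because a magnetic tunnel junction naturally stores one of two resistance states, a common way to map a CNN onto MRAM-based CIM is to binarize weights and activations and evaluate dot products as XNOR-plus-popcount. The sketch below is a generic illustration of that identity, not this paper's circuit.

```python
import numpy as np

# Generic XNOR-popcount mapping for binary CIM (not this paper's design):
# SOT-MRAM cells are binary (parallel / antiparallel MTJ states), so
# weights and activations are binarized to {-1, +1} and a dot product
# reduces to counting matching bits.

rng = np.random.default_rng(2)

w = np.sign(rng.standard_normal((3, 3)))   # binarized 3x3 kernel
x = np.sign(rng.standard_normal((3, 3)))   # binarized input patch

# For +/-1 values: dot = (#matches) - (#mismatches) = 2*matches - total
matches = np.sum(w == x)
dot = 2 * matches - w.size
assert dot == int(np.sum(w * x))           # equals the +/-1 dot product
print(dot)
```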

Nonvolatile Capacitive Crossbar Array for In-Memory Computing


Abstract: "Conventional resistive crossbar array for in-memory computing suffers from high static current/power, serious IR drop, and sneak paths. In contrast, the “capacitive” crossbar array that harnesses transient current and charge transfer is gaining attention as it 1) only consumes dynamic power, 2) has no DC sneak paths and avoids severe IR drop (thus, selector-free), and 3) can be f...
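The charge-domain arithmetic the abstract alludes to can be sketched as follows (a generic model, not the paper's cell design; capacitance and voltage ranges are assumed): each cell contributes Q = C·V to a shared output line, so accumulation happens as charge transfer rather than as static current.

```python
import numpy as np

# Generic charge-domain crossbar MAC (assumed values, not the paper's
# cells): each cell stores a weight as a capacitance C; pulsing an input
# voltage V transfers Q = C * V onto the shared column line. The summed
# charge is the MAC result, drawn as dynamic power with no DC sneak paths.

rng = np.random.default_rng(3)

C = rng.uniform(1e-15, 1e-14, (4, 3))   # cell capacitances (farads)
V = rng.uniform(0.0, 1.0, 4)            # input voltage pulses (volts)

Q_columns = V @ C                       # accumulated charge per column
print(Q_columns)                        # charge-domain MAC outputs
```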

NeuroSim Simulator for Compute-in-Memory Hardware Accelerator: Validation and Benchmark


Abstract: "Compute-in-memory (CIM) is an attractive solution to process the extensive workloads of multiply-and-accumulate (MAC) operations in deep neural network (DNN) hardware accelerators. A simulator with options of various mainstream and emerging memory technologies, architectures, and networks can be a great convenience for fast early-stage design space exploration of CIM hardw...
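As a flavor of the early design-space exploration such a simulator enables, the toy sweep below (my illustration, not NeuroSim's actual API) varies one knob, ADC resolution, and reports how quantizing the analog partial sums perturbs the MAC results.

```python
import numpy as np

# Toy design-space sweep (not NeuroSim's API): analog partial sums must
# pass through an ADC, so its resolution bounds accuracy. Sweep the bit
# width and measure the quantization error on a random MAC workload.

rng = np.random.default_rng(4)
W = rng.standard_normal((128, 64))
x = rng.standard_normal(64)
y = W @ x                                        # ideal partial sums

for adc_bits in (4, 6, 8):
    levels = 2 ** adc_bits
    lo, hi = y.min(), y.max()
    step = (hi - lo) / (levels - 1)
    y_q = np.round((y - lo) / step) * step + lo  # uniform ADC quantization
    err = np.linalg.norm(y_q - y) / np.linalg.norm(y)
    print(f"{adc_bits}-bit ADC -> relative error {err:.4%}")
```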

Power/Performance Bits: Dec. 7


Logic-in-memory with MoS2: Engineers at École Polytechnique Fédérale de Lausanne (EPFL) built a logic-in-memory device using molybdenum disulfide (MoS2) as the channel material. MoS2 is a three-atom-thick 2D material and an excellent semiconductor. The new chip is based on floating-gate field-effect transistors (FGFETs) that can hold electric charges for long periods. MoS2 is particularly se...
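A floating-gate device can serve as both memory and logic because the trapped charge shifts its threshold voltage. The toy model below (my illustration, not EPFL's device; the threshold values are assumed) shows the same gate input producing different outputs depending on the stored bit.

```python
# Toy FGFET model (assumed thresholds, not EPFL's measured device):
# charge trapped on the floating gate shifts the threshold voltage, so
# one device both stores a bit and participates in a logic function.

VT_ERASED, VT_PROGRAMMED = 0.5, 1.5   # assumed threshold voltages (V)

class FGFET:
    def __init__(self):
        self.vt = VT_ERASED           # no stored charge

    def program(self):
        # Trap charge on the floating gate, raising the threshold.
        self.vt = VT_PROGRAMMED

    def conducts(self, v_gate):
        # Simple on/off switch model of the transistor.
        return v_gate > self.vt

fet = FGFET()
v_in = 1.0
print(fet.conducts(v_in))   # True: erased device turns on
fet.program()
print(fet.conducts(v_in))   # False: the stored bit changes the logic result
```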

Spiking Neural Networks Place Data In Time


Artificial neural networks have found a variety of commercial applications, from facial recognition to recommendation engines. Compute-in-memory accelerators seek to improve the computational efficiency of these networks by helping to overcome the von Neumann bottleneck. But the success of artificial neural networks also highlights their inadequacies. They replicate only a small subset of th...
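One of the capabilities the article points to is temporal coding: a spiking neuron can encode a value in when it fires rather than in an activation level. The sketch below (generic, not from the article; all parameters are illustrative) shows a leaky integrate-and-fire neuron firing earlier for stronger input, so the first-spike time itself carries the data.

```python
# Generic leaky integrate-and-fire (LIF) neuron with time-to-first-spike
# coding (illustrative parameters): membrane potential v integrates the
# input current with a leak; the spike time, not a rate, encodes the value.

def first_spike_time(i_in, threshold=1.0, leak=0.1, dt=1e-3, t_max=1.0):
    v, t = 0.0, 0.0
    while t < t_max:
        v += dt * (i_in - leak * v)   # leaky integration of input current
        if v >= threshold:
            return t                  # earlier spike <-> stronger input
        t += dt
    return None                       # sub-threshold input: no spike

for current in (1.5, 3.0, 6.0):
    print(current, "->", first_spike_time(current))
```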
