Efficient Neuromorphic AI Chip: “NeuRRAM”


A new technical paper titled “A compute-in-memory chip based on resistive random-access memory” was published by an international team of researchers from Stanford, UC San Diego, the University of Pittsburgh, the University of Notre Dame and Tsinghua University.

The paper’s abstract states: “by co-optimizing across all hierarchies of the design from algorithms and architecture to circuits and devices, we present NeuRRAM—an RRAM-based CIM chip that simultaneously delivers versatility in reconfiguring CIM cores for diverse model architectures, energy efficiency that is two times better than previous state-of-the-art RRAM-CIM chips across various computational bit-precisions, and inference accuracy comparable to software models quantized to four-bit weights across various AI tasks, including accuracy of 99.0 percent on MNIST and 85.7 percent on CIFAR-10 image classification, 84.7-percent accuracy on Google speech command recognition, and a 70-percent reduction in image-reconstruction error on a Bayesian image-recovery task.”
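To give a flavor of the four-bit weight quantization the abstract compares against, here is a minimal, illustrative sketch (not the NeuRRAM implementation, and all function names are hypothetical): float weights are mapped to signed 4-bit integers, and a multiply-accumulate (MAC) is carried out on the quantized values, the core operation a compute-in-memory array performs in place.

```python
# Illustrative sketch only -- not the paper's actual method.
# Symmetric per-tensor quantization of weights to signed 4-bit
# integers in [-7, 7], followed by an integer multiply-accumulate
# that is rescaled back to floating point.

def quantize_4bit(weights):
    """Map float weights to signed 4-bit integers plus a scale factor."""
    max_abs = max(abs(w) for w in weights) or 1.0
    scale = max_abs / 7.0  # 7 = largest magnitude representable in 4 signed bits
    q = [max(-7, min(7, round(w / scale))) for w in weights]
    return q, scale

def mac(q_weights, inputs, scale):
    """Integer multiply-accumulate, rescaled to approximate the float result."""
    return scale * sum(q * x for q, x in zip(q_weights, inputs))

weights = [0.31, -0.82, 0.05, 0.44]
inputs = [1.0, 0.5, -1.0, 2.0]
q, s = quantize_4bit(weights)
approx = mac(q, inputs, s)
exact = sum(w * x for w, x in zip(weights, inputs))
```

The quantized dot product tracks the float result to within the rounding error introduced by the 4-bit grid, which is why accuracy on tasks like MNIST and CIFAR-10 can remain close to the software baseline.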

Find the technical paper here and the UCSD news article here. Published August 2022.

Wan, W., Kubendran, R., Schaefer, C. et al. A compute-in-memory chip based on resistive random-access memory. Nature 608, 504–512 (2022). https://doi.org/10.1038/s41586-022-04992-8.

