A new technical paper titled “Reconfigurable Compute-In-Memory on Field-Programmable Ferroelectric Diodes” was recently published by researchers at the University of Pennsylvania, Sandia National Laboratories, and Brookhaven National Laboratory.
What sets this compute-in-memory design apart is that it is completely transistor-free. “Even when used in a compute-in-memory architecture, transistors compromise the access time of data,” says researcher Deep Jariwala in this University of Pennsylvania news writeup. “They require a lot of wiring in the overall circuitry of a chip and thus use time, space and energy in excess of what we would want for AI applications. The beauty of our transistor-free design is that it is simple, small and quick and it requires very little energy.”
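To illustrate the general idea behind compute-in-memory (not the paper's specific ferroelectric-diode device), a memory crossbar can perform a matrix-vector multiply in place: stored conductances act as weights, applied voltages act as inputs, and the currents summed on each output line are the multiply-accumulate results, so no data is shuttled between memory and a separate processor. The minimal sketch below models that behavior numerically; the array values and function name are hypothetical, for illustration only.

```python
def crossbar_mvm(conductances, voltages):
    """Model of an analog crossbar multiply-accumulate: each output-line
    current is the sum of (conductance * voltage) contributions along that
    line (Ohm's law per cell, Kirchhoff's current law per line)."""
    return [sum(g * v for g, v in zip(row, voltages)) for row in conductances]

# Hypothetical 2x3 weight array (conductances) and input voltage vector.
G = [[0.50, 1.00, 0.00],
     [0.25, 0.50, 0.75]]
V = [1.0, 2.0, 3.0]

print(crossbar_mvm(G, V))  # → [2.5, 3.5]
```

The point of the sketch is the data flow: the weights never leave the array, which is why transistor-gated access paths between memory and compute logic can be avoided.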
Find the technical paper at the DOI below. Published September 2022.
Citation:
Nano Lett. 2022, 22, 18, 7690–7698
Publication Date: September 19, 2022
https://doi.org/10.1021/acs.nanolett.2c03169
Related Reading
How To Optimize A Processor
There are at least three architectural layers to processor design, each of which plays a significant role.
How Memory Design Optimizes System Performance
Changes are steady in the memory hierarchy, but how and where that memory is accessed is having a big impact.
Efficient Neuromorphic AI Chip: “NeuroRRAM”