A technical paper titled “Low-Power Charge Trap Flash Memory with MoS2 Channel for High-Density In-Memory Computing” was published by researchers at Kyungpook National University, Sungkyunkwan University, Dankook University, and Kwangwoon University.
“With the rise of on-device artificial intelligence (AI) technology, the demand for in-memory computing has surged for data-intensive tasks on edge devices. However, on-device AI requires high-density, low-power memory-based computing to efficiently handle large data volumes. Here, this study proposes a reliable multilevel, high gate-coupling ratio memory device with MoS2 channel tailored for high-density 3D NAND Flash-based in-memory computing. The MoS2 channel, featured by its small bandgap and high-mobility, facilitates reliable memory window of approximately 8 V thanks to erase operation through hole injection. This not only suppresses vertical charge loss but also alleviates the burden on voltage generator circuits, indicating the suitability of MoS2 as channel material for 3D NAND Flash architecture. Additionally, a low-k (≈2.2) tunneling layer deposited via initiated chemical vapor deposition increases the gate-coupling ratio, thereby reducing the operating voltage. Utilizing Au nanoparticles as the charge storage layer, MoS2 memory devices show synaptic plasticity with 6-bit, endurance (10⁴ cycles), read disturbance (10⁵ cycles), and retention times (10⁵ s). Furthermore, device-to-system simulations for neural networks based on MoS2-memory devices have successfully achieved a fingerprint recognition of 95.8%. These results provide the foundation to develop multi-bit MoS2-memory devices for AI accelerators and 3D NAND Flash memory.”
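For readers who want a feel for the gate-coupling-ratio argument: in a charge-trap stack, the blocking, trap, and tunneling layers form a capacitive voltage divider under the gate, so lowering the tunneling layer's dielectric constant shifts a larger share of the applied gate voltage onto the tunneling layer, where it drives charge injection. The sketch below is a back-of-the-envelope series-capacitance estimate, not the paper's model; the layer thicknesses and the blocking-layer k are placeholder assumptions (with the trap layer lumped into the blocking layer), and only the tunneling-layer k ≈ 2.2 comes from the abstract.

```python
# Illustrative parallel-plate estimate of how a low-k tunneling layer
# raises the share of gate voltage dropped across the tunneling layer.
# Thicknesses (4 nm / 15 nm) and blocking-layer k = 7.5 are placeholder
# assumptions; only the tunneling-layer k ~= 2.2 is from the abstract.

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def areal_capacitance(k, thickness_nm):
    """Capacitance per unit area (F/m^2) of a parallel-plate dielectric."""
    return k * EPS0 / (thickness_nm * 1e-9)

def tunnel_voltage_fraction(k_tunnel, t_tunnel_nm, k_block, t_block_nm):
    """Fraction of the gate voltage across the tunneling layer for two
    dielectrics in series (trap layer lumped into the blocking layer)."""
    c_tun = areal_capacitance(k_tunnel, t_tunnel_nm)
    c_blk = areal_capacitance(k_block, t_block_nm)
    # Series capacitors carry equal charge, so voltage divides as 1/C.
    return (1 / c_tun) / (1 / c_tun + 1 / c_blk)

# Conventional SiO2-like tunneling layer (k = 3.9) vs. the low-k (2.2) iCVD layer.
for k_tun in (3.9, 2.2):
    frac = tunnel_voltage_fraction(k_tun, t_tunnel_nm=4, k_block=7.5, t_block_nm=15)
    print(f"k_tunnel = {k_tun}: {frac:.2f} of the gate voltage drops across the tunneling layer")
```

With these assumed numbers, the tunneling layer's share of the gate voltage rises from roughly 0.34 to 0.48, which is the qualitative effect the authors exploit to lower the operating voltage; the actual magnitudes depend on the real stack.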
Find the technical paper here. Published June 2024.
Kim, Yeong Kwon, Sangyong Park, Junhwan Choi, Hamin Park, and Byung Chul Jang. “Low-Power Charge Trap Flash Memory with MoS2 Channel for High-Density In-Memory Computing.” Advanced Functional Materials (2024): 2405670.
Related Reading
3D Integration Supports CIM Versatility And Accuracy
Solving compute-in-memory’s limitations requires new approaches and dimensions.
Modeling Compute In Memory With Biological Efficiency
Generative AI forces chipmakers to use compute resources more intelligently.
The Uncertain Future Of In-Memory Compute
The answer may depend on whether SRAM can shrink further.