Efficient neural net training; molybdenum ditelluride RRAM; fast STT-MRAM.
Efficient neural net training
Researchers from the University of California San Diego and Adesto Technologies teamed up to improve neural network training efficiency with new hardware and algorithms that allow computation to be performed in memory.
The team used an energy-efficient spiking neural network to implement unsupervised learning in hardware. Spiking neural networks more closely mirror biological processes. Rather than updating every weight continuously, the network updates only the weights tied to highly active, or spiking, neurons, reducing the amount of computation needed.
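For illustration only, here is a minimal NumPy sketch of the general idea of event-driven updates, in which only the weights feeding neurons that spike on a given timestep are touched. The layer sizes, threshold, and Hebbian-style rule are placeholder assumptions, not the team's actual algorithm.

```python
import numpy as np

# Illustrative sizes and threshold; not taken from the paper.
n_inputs, n_neurons = 784, 100
rng = np.random.default_rng(0)
weights = rng.random((n_inputs, n_neurons))

def sparse_spike_update(weights, input_spikes, membrane, threshold=1.0, lr=0.01):
    """Update only the weights feeding neurons that spike this timestep."""
    membrane = membrane + input_spikes @ weights          # integrate input
    spiking = membrane >= threshold                       # which neurons fire
    membrane[spiking] = 0.0                               # reset fired neurons
    if spiking.any():
        # Hebbian-style update restricted to the columns of spiking neurons:
        # weights from active inputs grow, the rest decay slightly.
        cols = np.where(spiking)[0]
        weights[:, cols] += lr * (input_spikes[:, None] - weights[:, cols])
    return weights, membrane

membrane = np.zeros(n_neurons)
input_spikes = (rng.random(n_inputs) < 0.05).astype(float)  # sparse input spikes
weights, membrane = sparse_spike_update(weights, input_spikes, membrane)
```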
The hardware component is a 512-kilobit subquantum Conductive Bridging RAM (CBRAM) array, based on Adesto’s technology. The non-volatile memory consumes 10 to 100 times less energy than today’s leading memory technologies. While CBRAM is typically used as a digital storage device, here it was programmed with multiple analog states to emulate biological synapses in the human brain.
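As a rough sketch of what programming a cell to multiple analog states might look like at the software level, the snippet below maps a normalized weight onto one of a handful of discrete conductance levels. The level count and conductance range are invented placeholders, not Adesto’s device parameters.

```python
import numpy as np

def to_conductance_level(weight, g_min=1e-6, g_max=1e-4, n_levels=8):
    """Map a normalized weight in [0, 1] to one of n_levels discrete
    conductance states, mimicking a multi-level analog memory cell.
    The level count and conductance range here are placeholders."""
    levels = np.linspace(g_min, g_max, n_levels)
    idx = int(round(float(np.clip(weight, 0.0, 1.0)) * (n_levels - 1)))
    return levels[idx]

print(to_conductance_level(0.7))  # conductance assigned to a weight of 0.7
```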
“On-chip memory in conventional processors is very limited, so they don’t have enough capacity to perform both computing and storage on the same chip. But in this approach, we have a high capacity memory array that can do computation related to neural network training in the memory without data transfer to an external processor. This will enable a lot of performance gains and reduce energy consumption during training,” said Duygu Kuzum, a professor of electrical and computer engineering at UC San Diego.
Additionally, the team developed a “soft-pruning” algorithm that identifies weights that have already matured during training and sets them to a constant non-zero value. Those weights are then no longer updated for the remainder of training, minimizing the computation required.
Unlike conventional pruning, which completely removes redundant or unimportant weights, this soft-pruning method retains them in a low energy setting, which helps to improve the network’s accuracy.
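A minimal sketch of the soft-pruning idea, assuming a hypothetical maturity test (a weight whose recent update magnitude falls below a tolerance) and an arbitrary freeze value; the paper’s actual criterion and constant may differ.

```python
import numpy as np

def soft_prune(weights, delta, frozen, change_tol=1e-4, freeze_value=0.05):
    """Freeze 'matured' weights at a constant non-zero value.

    A weight is treated as matured when its recent update magnitude falls
    below change_tol (a stand-in criterion). Frozen weights keep a small
    constant value instead of being removed, unlike hard pruning, which
    would zero them out.
    """
    newly_matured = (~frozen) & (np.abs(delta) < change_tol)
    frozen = frozen | newly_matured
    weights = np.where(frozen, freeze_value, weights)
    return weights, frozen

def apply_update(weights, grad, frozen, lr=0.01):
    """Skip updates for frozen weights, saving computation on them."""
    return np.where(frozen, weights, weights - lr * grad)
```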
To test the network and hardware, the team trained it to classify handwritten digits from the MNIST database. The network classified digits with 93% accuracy even when up to 75% of the weights were soft-pruned. In comparison, the network performed with less than 90% accuracy when only 40% of the weights were pruned using conventional pruning methods.
The team also expects a significant reduction in power consumption.
“If we benchmark the new hardware to other similar memory technologies, we estimate our device can cut energy consumption 10 to 100 times, then our algorithm co-design cuts that by another 10. Overall, we can expect a gain of a hundred to a thousand fold in terms of energy consumption following our approach,” said Kuzum.
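The quoted estimate simply compounds the two factors; here is a quick check of the arithmetic using the article’s figures:

```python
device_gain_low, device_gain_high = 10, 100   # from the subquantum CBRAM device
algorithm_gain = 10                           # from the soft-pruning co-design

total_low = device_gain_low * algorithm_gain    # 100x
total_high = device_gain_high * algorithm_gain  # 1000x
print(f"Estimated overall energy reduction: {total_low}x to {total_high}x")
```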
Next, the team plans to work with memory companies to develop a complete system in which neural networks can be trained in memory to do more complex tasks with very low power and time budgets.
Molybdenum ditelluride RRAM
Researchers at Purdue University, the National Institute of Standards and Technology (NIST), and Theiss Research Inc. built a resistive RAM cell out of the 2D material molybdenum ditelluride, which they believe has properties that could make it an ideal candidate for next-generation memory.
In RRAM, an electrical current is typically driven through a memory cell made up of stacked materials, creating a change in resistance that records data as 0s and 1s in memory. However, the materials currently used have proven too unreliable to store and retrieve data over trillions of cycles. The team argues that molybdenum ditelluride could change that.
“We haven’t yet explored system fatigue using this new material, but our hope is that it is both faster and more reliable than other approaches due to the unique switching mechanism we’ve observed,” said Joerg Appenzeller, Purdue University’s professor of electrical and computer engineering and the scientific director of nanoelectronics at the Birck Nanotechnology Center.
Molybdenum ditelluride allows a system to switch more quickly between 0 and 1, potentially increasing the rate of storing and retrieving information. When an electric field is applied to the cell, atoms are displaced by a tiny distance, resulting in a state of high resistance, noted as 0, or a state of low resistance, noted as 1. According to the researchers, this switching can occur much faster than in conventional RRAM devices. “Because less power is needed for these resistive states to change, a battery could last longer,” Appenzeller said.
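As a generic illustration of how such a cell stores a bit, the toy model below maps the high-resistance state to 0 and the low-resistance state to 1 and reads back by comparing against a threshold. The resistance values and threshold are placeholders, not measurements from the Purdue/NIST device.

```python
# Toy two-state model of an RRAM bit: a set/reset pulse toggles the cell
# between a high-resistance state (logical 0) and a low-resistance state
# (logical 1); a read compares measured resistance against a threshold.
# Resistance values and the threshold are illustrative placeholders.
HRS_OHMS = 1e6        # high-resistance state -> stores 0
LRS_OHMS = 1e3        # low-resistance state  -> stores 1
READ_THRESHOLD = 1e4

class RramCell:
    def __init__(self):
        self.resistance = HRS_OHMS      # start in the high-resistance state

    def write(self, bit: int) -> None:
        """A set pulse lowers resistance (1); a reset pulse raises it (0)."""
        self.resistance = LRS_OHMS if bit else HRS_OHMS

    def read(self) -> int:
        """Return 1 if resistance is below the read threshold, else 0."""
        return 1 if self.resistance < READ_THRESHOLD else 0

cell = RramCell()
cell.write(1)
assert cell.read() == 1
```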
Next, the researchers want to build a stacked memory cell that also incorporates interconnects and logic. “Logic and interconnects drain battery too, so the advantage of an entirely two-dimensional architecture is more functionality within a small space and better communication between memory and logic,” Appenzeller said. The team has applied for two patents on the technology.
Fast STT-MRAM
Researchers at Tohoku University developed a 128Mb-density STT-MRAM with a write speed of 14 nanoseconds for use in embedded memory applications, currently the fastest write speed for STT-MRAM with a density over 100Mb.
The current capacity of STT-MRAM ranges between 8Mb and 40Mb, according to the researchers. To increase the memory density, necessary for commercialization, they turned to developing STT-MRAMs in which miniaturized magnetic tunnel junctions (MTJs) are integrated with CMOS. This approach has the added benefit of reducing power consumption.
MTJs were miniaturized through a series of process developments. To reduce the memory cell size needed for higher-density STT-MRAM, the MTJs were formed directly on via holes. With the reduced-size memory cell, the team designed a 128Mb-density STT-MRAM and fabricated a chip.
Using the fabricated chip, the researchers measured the write speed of a subarray, demonstrating high-speed operation of 14ns at a low supply voltage of 1.2V.
The team hopes the research will pave the way for the mass-production of large capacity STT-MRAM.