Power/Performance Bits: Jan. 26

Neural networks on MCUs; lighting up AI; tiny memristor.

Neural networks on MCUs
Researchers at MIT are working to bring neural networks to Internet of Things devices. The team’s MCUNet is a system that designs compact neural networks for deep learning on microcontrollers with limited memory and processing power.

MCUNet is made up of two components. One is TinyEngine, an inference engine that directs resource management. TinyEngine is optimized to run a particular neural network structure, which is selected by MCUNet’s other component, TinyNAS, a neural architecture search algorithm.

Existing neural architecture search techniques start with a big pool of possible network structures based on a predefined template, then gradually narrow it down to one with high accuracy and low cost. While the method works, it’s not the most efficient. “It can work pretty well for GPUs or smartphones,” said Ji Lin, a PhD student in MIT’s Department of Electrical Engineering and Computer Science. “But it’s been difficult to directly apply these techniques to tiny microcontrollers, because they are too small.”
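
As a rough sketch of that conventional approach, consider random search over a fixed template: sample candidate structures, discard any that blow the cost budget, and keep the most accurate survivor. Everything below (the template knobs, the estimator functions) is illustrative, not MCUNet’s actual code.

    import random

    # Hypothetical template of tunable knobs; real NAS templates are far larger.
    TEMPLATE = {"depth": [2, 4, 8], "width": [16, 32, 64], "kernel": [3, 5, 7]}

    def sample_candidate():
        return {knob: random.choice(opts) for knob, opts in TEMPLATE.items()}

    def naive_nas(n_trials, cost_budget, estimate_cost, estimate_accuracy):
        best, best_acc = None, 0.0
        for _ in range(n_trials):
            cand = sample_candidate()
            if estimate_cost(cand) > cost_budget:
                continue  # too expensive for the target hardware
            acc = estimate_accuracy(cand)  # e.g., a short proxy training run
            if acc > best_acc:
                best, best_acc = cand, acc
        return best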

TinyNAS can create custom-sized networks. “We have a lot of microcontrollers that come with different power capacities and different memory sizes,” said Lin. “So we developed the algorithm [TinyNAS] to optimize the search space for different microcontrollers. Then we deliver the final, efficient model to the microcontroller.”
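
A minimal sketch of that two-stage idea, assuming stand-in functions for model size and peak activation memory (both hypothetical here): first prune the candidate pool to what a given microcontroller can physically hold, then run the accuracy search only within that reduced space.

    def fits_mcu(cand, flash_kb, sram_kb, model_size_kb, peak_activation_kb):
        # Weights must fit in flash; peak activations must fit in SRAM.
        return (model_size_kb(cand) <= flash_kb and
                peak_activation_kb(cand) <= sram_kb)

    def specialize_search_space(candidates, flash_kb, sram_kb,
                                model_size_kb, peak_activation_kb):
        # Keep only architectures this particular MCU can run, so the
        # later accuracy search wastes no trials on infeasible models.
        return [c for c in candidates
                if fits_mcu(c, flash_kb, sram_kb,
                            model_size_kb, peak_activation_kb)]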

TinyEngine generates the essential code necessary to run TinyNAS’ customized neural network. Any deadweight code is discarded, which cuts down on compile time. “We keep only what we need,” said Song Han, an assistant professor in MIT’s Department of Electrical Engineering and Computer Science. “And since we designed the neural network, we know exactly what we need. That’s the advantage of system-algorithm codesign.”
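
One way to picture that idea, with hypothetical operator and file names: map each operator type to its kernel source, then emit only the kernels the selected network actually uses, so unused operators never reach the compiled binary.

    # Hypothetical operator-to-source mapping; names are illustrative only.
    KERNEL_SOURCES = {
        "conv2d": "conv2d.c",
        "depthwise_conv2d": "dw_conv2d.c",
        "avg_pool": "avg_pool.c",
        "softmax": "softmax.c",
        "lstm": "lstm.c",  # never compiled in if the model has no LSTM
    }

    def emit_engine(model_ops):
        # model_ops: the set of operator types in the selected network
        return [KERNEL_SOURCES[op] for op in sorted(model_ops)]

    print(emit_engine({"softmax", "conv2d", "depthwise_conv2d"}))
    # ['conv2d.c', 'dw_conv2d.c', 'softmax.c'] -- nothing else is compiled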

In the group’s tests of TinyEngine, the size of the compiled binary code was between 1.9 and 5 times smaller than comparable microcontroller inference engines from Google and Arm. TinyEngine also contains innovations that reduce runtime, including in-place depth-wise convolution, which they said cuts peak memory usage nearly in half.
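
The memory saving follows from a property of depth-wise convolution: each output channel depends only on the matching input channel, so the result can overwrite the input buffer channel by channel instead of requiring a second full-size activation buffer. A toy NumPy sketch of that idea (not the paper’s optimized kernel):

    import numpy as np

    def depthwise_conv_inplace(x, kernels):
        # x: (channels, height, width) float activations, modified in place
        # kernels: (channels, k, k), one filter per channel, odd k, 'same' pad
        c, h, w = x.shape
        k = kernels.shape[1]
        pad = k // 2
        for ch in range(c):
            padded = np.pad(x[ch], pad)            # per-channel scratch copy
            out = np.empty((h, w), dtype=x.dtype)  # only 1/c of a full buffer
            for i in range(h):
                for j in range(w):
                    out[i, j] = np.sum(padded[i:i+k, j:j+k] * kernels[ch])
            x[ch] = out                            # overwrite input channel
        return x

The peak extra memory here is one channel’s scratch space rather than an entire second activation tensor, which is where a near-halving of peak usage can come from.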

In tests, the researchers used labeled images from the ImageNet database to train the system, then tested its ability to classify novel images. On a commercial microcontroller, MCUNet successfully classified 70.7% of the novel images, a rate that stayed steady across the different microcontrollers tested.

Lighting up AI
Researchers from RMIT University, Colorado State University, Northeast Normal University, and the University of California, Berkeley developed a light-modulated neuromorphic device that combines memory and signal-processing capabilities.

“Our new technology radically boosts efficiency and accuracy by bringing multiple components and functionalities into a single platform,” said Sumeet Walia, an associate professor at RMIT who also co-leads the Functional Materials and Microsystems Research Group. “It’s getting us closer to an all-in-one AI device inspired by nature’s greatest computing innovation – the human brain. Our aim is to replicate a core feature of how the brain learns, through imprinting vision as memory. The prototype we’ve developed is a major leap forward towards neurorobotics, better technologies for human-machine interaction and scalable bionic systems.”

Inspired by optogenetics, the chip is based on layered black phosphorus, which changes electrical resistance in response to different wavelengths of light. Functionalities such as imaging or memory storage are achieved by shining different colors of light on the chip.

The team said the new prototype aims to integrate electronic hardware and intelligence in a single chip, enabling fast on-site decisions.

“Imagine a dash cam in a car that’s integrated with such neuro-inspired hardware – it can recognize lights, signs, objects and make instant decisions, without having to connect to the internet,” Walia said. “By bringing it all together into one chip, we can deliver unprecedented levels of efficiency and speed in autonomous and AI-driven decision-making.”

The chip can capture and automatically enhance images, classify numbers, and be trained to recognize patterns and images with an accuracy rate of over 90%. The researchers say it is also readily compatible with existing electronics and silicon technologies.

Tiny memristor
Researchers at the University of Texas at Austin, National Cheng Kung University, Oak Ridge National Laboratory, and Queen’s University Belfast created a tiny memristor memory device with a cross-sectional area of a single square nanometer.

The team used molybdenum disulfide (MoS2) as the primary nanomaterial. Defects, or holes, in the material were key to enabling high-density storage.

“When a single additional metal atom goes into that nanoscale hole and fills it, it confers some of its conductivity into the material, and this leads to a change or memory effect,” said Deji Akinwande, professor in the Department of Electrical and Computer Engineering at UT Austin.

Previously, the researchers developed a memristor device just an atom thick. This ‘atomristor’ was the basis for the new work, which focused on shrinking the cross-sectional area. “The scientific holy grail for scaling is going down to a level where a single atom controls the memory function, and this is what we accomplished in the new study,” Akinwande said.

“The results obtained in this work pave the way for developing future generation applications that are of interest to the Department of Defense, such as ultra-dense storage, neuromorphic computing systems, radio-frequency communication systems and more,” said Pani Varanasi, program manager for the U.S. Army Research Office, which funded the research.

The team said the new memristor could provide a capacity of about 25 terabits per square centimeter, and that the technique isn’t limited to MoS2 but could be applied to hundreds of related atomically thin materials.
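
As a quick sanity check on that figure (illustrative arithmetic only; real layouts depend on cell pitch and peripheral circuitry, which the article does not specify):

    area_nm2 = (1e7) ** 2   # 1 cm = 1e7 nm, so 1 cm^2 = 1e14 nm^2
    bits = 25e12            # 25 terabits
    print(area_nm2 / bits)  # 4.0 nm^2 per bit, i.e. roughly a 2 nm cell pitch

At 4 nm^2 per bit, the stated density is consistent with a 1 nm^2 device cross section plus spacing between cells.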


