
Training An ML Model On An Intelligent Edge Device Using Less Than 256KB Of Memory


A new technical paper titled “On-Device Training Under 256KB Memory” was published by researchers at MIT and MIT-IBM Watson AI Lab.

“Our study enables IoT devices to not only perform inference but also continuously update the AI models to newly collected data, paving the way for lifelong on-device learning. The low resource utilization makes deep learning more accessible and can have a broader reach, especially for low-power edge devices,” said senior author Song Han in an MIT News article.

Abstract:
“On-device training enables the model to adapt to new data collected from the sensors by fine-tuning a pre-trained model. However, the training memory consumption is prohibitive for IoT devices that have tiny memory resources. We propose an algorithm-system co-design framework to make on-device training possible with only 256KB of memory. On-device training faces two unique challenges: (1) the quantized graphs of neural networks are hard to optimize due to mixed bit-precision and the lack of normalization; (2) the limited hardware resource (memory and computation) does not allow full backward computation. To cope with the optimization difficulty, we propose Quantization-Aware Scaling to calibrate the gradient scales and stabilize quantized training. To reduce the memory footprint, we propose Sparse Update to skip the gradient computation of less important layers and sub-tensors. The algorithm innovation is implemented by a lightweight training system, Tiny Training Engine, which prunes the backward computation graph to support sparse updates and offloads the runtime auto-differentiation to compile time. Our framework is the first practical solution for on-device transfer learning of visual recognition on tiny IoT devices (e.g., a microcontroller with only 256KB SRAM), using less than 1/100 of the memory of existing frameworks while matching the accuracy of cloud training+edge deployment for the tinyML application VWW. Our study enables IoT devices to not only perform inference but also continuously adapt to new data for on-device lifelong learning.”
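The Sparse Update idea in the abstract — skipping the gradient computation of less important layers so their activations and gradients never need to be stored — can be sketched in a few lines. The toy two-layer network below is purely illustrative (the layer sizes, importance choice, and loss are assumptions, not the paper's actual policy or the Tiny Training Engine implementation): only the last layer's gradient is computed and applied, while the early layer stays frozen.

```python
import numpy as np

# Hypothetical two-layer network; which layer counts as "important"
# is assumed here for illustration, not taken from the paper.
rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.1, size=(8, 4))   # frozen early layer: no gradient computed
W2 = rng.normal(scale=0.1, size=(4, 2))   # layer selected for sparse update

def forward(x):
    h = np.maximum(x @ W1, 0.0)           # ReLU hidden activations
    return h, h @ W2

def sparse_update_step(x, y, lr=0.1):
    """One training step that only backpropagates into W2.

    Because dW1 is never computed, the buffers needed for W1's
    backward pass (and dW1 itself) are never allocated — the kind of
    memory saving Sparse Update targets on microcontrollers.
    Returns the 0.5 * MSE loss before the update.
    """
    global W2
    h, out = forward(x)
    err = out - y                          # dL/d(out) for 0.5 * MSE loss
    dW2 = h.T @ err / len(x)               # the only gradient we keep
    W2 -= lr * dW2                         # W1 is left untouched
    return float(0.5 * np.mean(np.sum(err ** 2, axis=1)))
```

Running `sparse_update_step` in a loop drives the loss down using the last layer alone, while `W1` never changes; the actual framework generalizes this by also selecting important sub-tensors within layers and pruning the backward graph at compile time.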

The technical paper is available at arXiv:2206.15472. Published July 2022.

This research was funded by the National Science Foundation, the MIT-IBM Watson AI Lab, the MIT AI Hardware Program, Amazon, Intel, Qualcomm, Ford Motor Company, and Google.

Authors: Ji Lin, Ligeng Zhu, Wei-Ming Chen, Wei-Chen Wang, Chuang Gan, Song Han. arXiv:2206.15472v2

Related Reading:
Using AI To Speed Up Edge Computing
Optimizing a system’s behavior can improve PPA and extend its useful lifetime.
Why TinyML Is Such A Big Deal
Surprisingly, not everything requires lots of compute power to make important decisions.
Edge Computing Knowledge Center


