Hardware-Based Methodology To Protect AI Accelerators


A technical paper titled “A Unified Hardware-based Threat Detector for AI Accelerators” was published by researchers at Nanyang Technological University and Tsinghua University.


“The proliferation of AI technology gives rise to a variety of security threats, which significantly compromise the confidentiality and integrity of AI models and applications. Existing software-based solutions mainly target one specific attack, and require the implementation into the models, rendering them less practical. We design UniGuard, a novel unified and non-intrusive detection methodology to safeguard FPGA-based AI accelerators. The core idea of UniGuard is to harness power side-channel information generated during model inference to spot any anomaly. We employ a Time-to-Digital Converter to capture power fluctuations and train a supervised machine learning model to identify various types of threats. Evaluations demonstrate that UniGuard can achieve 94.0% attack detection accuracy, with high generalization over unknown or adaptive attacks and robustness against varied configurations (e.g., sensor frequency and location).”
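The detection pipeline the abstract describes — capture power traces during inference, extract features, and train a supervised model to flag anomalies — can be sketched in simplified form. The example below is illustrative only: it uses synthetic traces and a nearest-centroid detector as a stand-in for UniGuard's TDC sensor and its actual machine learning model, and all names are hypothetical.

```python
# Illustrative sketch (not the authors' implementation): classify
# synthetic "power traces" as benign vs. attack with a simple
# nearest-centroid detector trained on labeled examples.
import random
import statistics

random.seed(0)

def make_trace(anomalous: bool, n: int = 64) -> list[float]:
    # Benign inference yields a baseline power profile; an attack
    # (e.g., a malicious co-tenant circuit) shifts the power draw.
    base = 1.3 if anomalous else 1.0
    return [base + random.gauss(0, 0.05) for _ in range(n)]

def features(trace: list[float]) -> tuple[float, float]:
    # Compress each trace into (mean, stdev) features.
    return statistics.mean(trace), statistics.stdev(trace)

# "Training": compute per-class feature centroids from labeled traces.
benign = [features(make_trace(False)) for _ in range(50)]
attack = [features(make_trace(True)) for _ in range(50)]
centroids = {
    "benign": tuple(statistics.mean(f[i] for f in benign) for i in (0, 1)),
    "attack": tuple(statistics.mean(f[i] for f in attack) for i in (0, 1)),
}

def classify(trace: list[float]) -> str:
    # Assign the label of the nearest centroid in feature space.
    f = features(trace)
    return min(centroids,
               key=lambda c: sum((a - b) ** 2 for a, b in zip(f, centroids[c])))

print(classify(make_trace(False)))  # benign
print(classify(make_trace(True)))   # attack
```

In the paper's actual setup, the traces come from an on-chip Time-to-Digital Converter rather than a simulator, and the classifier is trained to distinguish multiple threat types rather than a single binary label.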

Find the technical paper here. Published November 2023 (preprint).

Yan, Xiaobei, Han Qiu, and Tianwei Zhang. “A Unified Hardware-based Threat Detector for AI Accelerators.” arXiv preprint arXiv:2311.16684 (2023).

Further Reading
Security Becoming Core Part Of Chip Design — Finally
Dealing with cyberthreats is becoming an integral part of chip and system design, and far more expensive and complex.
Security Becomes Much Bigger Issue For AI/ML Chips, Tools
Lack of standards, changing algorithms and architectures, and ill-defined metrics open the door for foul play.
