A new technical paper titled “SystemC Model of Power Side-Channel Attacks Against AI Accelerators: Superstition or not?” was published by researchers at Germany’s University of Lübeck.
Abstract
“As training artificial intelligence (AI) models is a lengthy and hence costly process, leakage of such a model’s internal parameters is highly undesirable. In the case of AI accelerators, side-channel information leakage opens up the threat scenario of extracting the internal secrets of pre-trained models. Therefore, sufficiently elaborate methods for design verification as well as fault and security evaluation at the electronic system level are in demand. In this paper, we propose estimating information leakage from the early design steps of AI accelerators to aid in a more robust architectural design. We first introduce the threat scenario before diving into SystemC as a standard method for early design evaluation and how this can be applied to threat modeling. We present two successful side-channel attack methods executed via SystemC-based power modeling: correlation power analysis and template attack, both leading to total information leakage. The presented models are verified against an industry-standard netlist-level power estimation to prove general feasibility and determine accuracy. Consequently, we explore the impact of additive noise in our simulation to establish indicators for early threat evaluation. The presented approach is again validated via a model-vs-netlist comparison, showing high accuracy of the achieved results. This work hence is a solid step towards fast attack deployment and, subsequently, the design of attack-resilient AI accelerators.”
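The abstract names correlation power analysis (CPA) as one of the two demonstrated attacks. As a generic illustration of the CPA principle only, and not the authors' SystemC-based setup, the following minimal Python sketch recovers a single secret byte from synthetic Hamming-weight power traces; the secret value, trace count, and noise level are all hypothetical parameters chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def hamming_weight(values):
    """Number of set bits per value (a common power-leakage model)."""
    return np.array([bin(int(v)).count("1") for v in values])

# Hypothetical device secret and simulated measurement campaign.
secret = 0x3C
n_traces = 2000
inputs = rng.integers(0, 256, n_traces)

# Simulated power traces: the device "leaks" the Hamming weight of
# (input XOR secret), plus Gaussian measurement noise.
traces = hamming_weight(inputs ^ secret) + rng.normal(0.0, 1.0, n_traces)

# CPA: for every key guess, build hypothetical leakage values and
# correlate them against the measured traces; the correct guess
# yields the highest Pearson correlation.
correlations = np.array([
    np.corrcoef(hamming_weight(inputs ^ guess), traces)[0, 1]
    for guess in range(256)
])
recovered = int(np.argmax(correlations))
print(hex(recovered))  # prints 0x3c
```

With enough traces the correct guess stands out sharply (here the correlation approaches 0.8 while wrong guesses decay with the Hamming distance to the secret), which is why additive noise, as explored in the paper, is a natural knob for early threat evaluation.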
Find the technical paper on arXiv. Published November 2023.
Nešković, Andrija, Saleh Mulhem, Alexander Treff, Rainer Buchty, Thomas Eisenbarth, and Mladen Berekovic. “SystemC Model of Power Side-Channel Attacks Against AI Accelerators: Superstition or not?” arXiv preprint arXiv:2311.13387 (2023).