Bolstering Security For AI Applications

Why AI accelerators need a programmable hardware root of trust.

Hardware accelerators that run sophisticated artificial intelligence (AI) and machine learning (ML) algorithms have become increasingly prevalent in data centers and endpoint devices. Consequently, protecting the sensitive and valuable data running on AI hardware from a range of threats is now a priority for many companies. Indeed, a determined attacker can manipulate or steal training data, inference models and classification results.

The Security Building Blocks of AI
From our perspective, companies can protect sensitive data running and residing on AI hardware by following the fundamental security ‘building blocks’ (principles) detailed below:

  • Confidentiality: Ensure confidentiality by using secret keys to encrypt sensitive data.
  • Integrity: Protect the integrity of training data, inference models and firmware with hash functions that provide a unique fingerprint. Hash values should be signed with a private key to prevent alteration, as sketched in the example after this list.
  • Availability: Maintain availability by employing a secure boot process and validating firmware integrity. AI hardware should also be capable of detecting anomalies and thwarting attacks.
  • Authentication: Confirm the authenticity of devices (servers, accelerators and edge devices) by provisioning keys and identities to them. These keys can then be used to securely identify and authenticate components.
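
To make these building blocks concrete, the sketch below shows them in software using Python and the open-source cryptography package: a hash fingerprints the data, a private key signs the fingerprint, and a secret key encrypts the payload. This is an illustrative sketch only; in a real deployment an HRT performs these operations in dedicated hardware, with keys that never leave the chip.

    import hashlib
    from cryptography.exceptions import InvalidSignature
    from cryptography.fernet import Fernet
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    training_data = b"example training batch"  # hypothetical payload

    # Integrity: a hash function provides a unique fingerprint of the data.
    fingerprint = hashlib.sha256(training_data).digest()

    # The fingerprint is signed with a private key so it cannot be altered.
    signing_key = Ed25519PrivateKey.generate()  # held inside the HRT in practice
    signature = signing_key.sign(fingerprint)

    # Authentication: any party holding the public key can verify the signature.
    try:
        signing_key.public_key().verify(signature, fingerprint)
    except InvalidSignature:
        raise SystemExit("fingerprint was altered")

    # Confidentiality: a secret key encrypts the data at rest or in transit.
    secret_key = Fernet.generate_key()  # provisioned per device in practice
    ciphertext = Fernet(secret_key).encrypt(training_data)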

Hardware Requirements for AI Security
To meet the challenges of an evolving threat landscape, AI hardware should support the above-mentioned security building blocks with dedicated features such as secure boot, protected firmware updates, key provisioning, encryption, hashing and signing engines, and tamper and fault-injection detection.

To ensure an effective and comprehensive approach to security, these disparate features can be unified in a programmable hardware root of trust (HRT) and embedded in AI chipsets running in the cloud (servers) or at the edge. Let’s take a closer look at how HRTs can help protect AI hardware.

Maintaining AI/ML Accelerator Integrity
First and foremost, the AI/ML accelerator should be protected from tampering. Indeed, the accelerator should remain in a secure state, with robust security capabilities preventing attackers from disrupting the boot flow or loading malicious firmware. Attackers must also be prevented from hijacking the firmware update process or modifying code in memory to introduce unauthorized functionality. In addition, the accelerator should be protected against fault injection attacks and the exploitation of test or debug logic.

To ensure the integrity of an AI/ML accelerator, an HRT should have robust secure boot functionality. Once the HRT boots securely, it can confirm that all other CPUs in the system have done the same. The HRT can also secure firmware updates and protect the system from tampering by monitoring updates and providing rollback protection. In addition, an HRT can monitor system status and memory content, as well as detect tampering activity and side-channel/fault-injection attack attempts.
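
As an illustration of the update checks just described, the sketch below verifies a firmware image's signature and enforces a minimum-version counter to block rollback to older, vulnerable firmware. The FirmwareImage structure and field layout are assumptions for the example, not a real HRT interface.

    from dataclasses import dataclass
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

    @dataclass
    class FirmwareImage:          # illustrative container, not a real HRT format
        version: int
        payload: bytes
        signature: bytes          # vendor signature over version || payload

    def verify_update(image: FirmwareImage, vendor_key: Ed25519PublicKey,
                      min_version: int) -> bool:
        """Accept an update only if it is authentic and not a rollback."""
        message = image.version.to_bytes(4, "big") + image.payload
        try:
            vendor_key.verify(image.signature, message)  # reject modified firmware
        except InvalidSignature:
            return False
        return image.version >= min_version              # reject version rollback

In hardware, the minimum-version counter would be kept in one-time-programmable or otherwise monotonic storage so an attacker cannot wind it back.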

Protecting Inference Models
A skilled attacker can modify or replace an inference model, leading to potentially dangerous scenarios across multiple applications. For example, modifying or replacing an inference model running on an autonomous advanced driver-assistance system (ADAS) could cause serious collisions and accidents. An inference model can also be targeted by an attacker attempting to clone or reverse engineer an AI system. To prevent unauthorized modification or replacement, inference models should be signed with a designated private key and authenticated with an HRT. Moreover, an HRT should be used to encrypt inference models while at rest.
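
A minimal sketch of that model-protection flow is shown below, again using the Python cryptography package: the model is signed with a designated private key, encrypted at rest with an authenticated cipher, and then decrypted and verified before execution. Generating keys in software here is purely illustrative; an HRT would keep both keys in hardware.

    import os
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    model_bytes = b"serialized inference model"  # hypothetical model blob

    # Sign the model with the designated private key.
    model_key = Ed25519PrivateKey.generate()     # kept inside the HRT in practice
    model_signature = model_key.sign(model_bytes)

    # Encrypt the model at rest with an authenticated cipher (AES-256-GCM).
    storage_key = AESGCM.generate_key(bit_length=256)
    nonce = os.urandom(12)
    encrypted_model = AESGCM(storage_key).encrypt(nonce, model_bytes, None)

    # At load time: decrypt, then authenticate the model before execution.
    decrypted = AESGCM(storage_key).decrypt(nonce, encrypted_model, None)
    model_key.public_key().verify(model_signature, decrypted)  # raises if tampered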

Ensuring Input Data Integrity
A determined attacker can tamper with input data to deliberately cause misclassifications. To prevent this from happening, an HRT should be deployed in both the endpoint device and the cloud (servers). This enables secure communication between endpoint and cloud, protecting the integrity of input data end to end.
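
The sketch below illustrates one way such end-to-end integrity protection can work, assuming a shared secret has been provisioned into the HRTs on both sides: the endpoint attaches a message authentication code (MAC) to each input sample, and the cloud rejects any sample that fails verification. The key and payload are hypothetical.

    import hashlib
    import hmac

    shared_key = b"key-provisioned-into-both-HRTs"  # hypothetical shared secret

    # Endpoint side: attach a MAC to each input sample before sending it.
    input_data = b'{"sensor": "camera0", "frame": 1}'
    tag = hmac.new(shared_key, input_data, hashlib.sha256).digest()

    # Cloud side: reject any sample whose tag fails verification.
    expected = hmac.new(shared_key, input_data, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise SystemExit("input data was tampered with in transit")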

Conclusion
Hardware accelerators that run sophisticated AI and ML algorithms are increasingly deployed in both data centers and endpoint devices. Unsurprisingly, protecting the sensitive and valuable data running on AI hardware from a wide range of threats has become a priority for many companies. From our perspective, AI accelerators should include a programmable hardware root of trust (HRT) to thwart the manipulation and theft of training data, inference models and classification results.


