From edge to data center, AI devices require specific security features to protect valuable training sets and data from attackers.
In recent years, a number of critical security vulnerabilities affecting high-performance CPUs have rocked the semiconductor industry. These high-profile flaws allowed malicious programs to access sensitive data such as passwords, secret keys and other secure assets.
The real-world risks of silicon complexity
These vulnerabilities are primarily the result of increased silicon complexity. Security flaws frequently arise when multiple components interact in unexpected ways, and as silicon complexity grows, the number of possible interactions, and with it the number of potential vulnerabilities, increases exponentially. Like high-performance general-purpose CPUs, artificial intelligence (AI) and machine learning (ML) accelerators are inherently complex and require specific security features to protect valuable training sets and data from attackers.
AI/ML threat vectors: From the edge to the data center
The threat vectors span the edge and the data center. At the edge, an attacker could physically disassemble devices with AI/ML inference accelerators deployed in the field. If the device is unprotected, the attacker could run malicious firmware, access and alter data, intercept network traffic and employ various side-channel techniques to extract secret keys and other sensitive information. In the data center, a remote attacker could target servers with AI/ML accelerators running training and inference applications. For example, a remote attacker could subvert the host CPU hypervisor and gain access to any process or memory region. Additional server attack vectors include reading the flash memory of both the host and the accelerator, as well as the contents of the SSD. An attacker could also run malicious software on the host and accelerator CPUs and read the contents of their SRAM and DRAM. Finally, even network and bus traffic could be monitored and altered via unprotected AI/ML accelerators.
It is also important to note that AI/ML inference models, along with their input data and results, are increasingly valuable and must be protected from criminal elements intent on financial gain; this data can be stolen to design cloned or competitive devices. Likewise, the integrity of AI/ML systems must be protected from tampering that alters training models, input data or results. Tampering could be catastrophic in applications such as autonomous vehicles, where manipulated or false data could cause accidents, injury and potentially loss of life. In another scenario, altered facial recognition data could enable an attacker to physically breach the sophisticated security systems protecting sensitive facilities, while spoofed data could trick an airport baggage scanner into ignoring specific contraband material.
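To make the integrity requirement concrete, the sketch below shows one simple way an edge device might verify that a deployed inference model has not been tampered with before handing it to the accelerator. It is a minimal illustration using an HMAC-SHA256 tag computed at provisioning time; the file names, key handling and tag format are hypothetical, and in a real design the key material and comparison would live inside protected hardware rather than host software.

```python
import hashlib
import hmac
from pathlib import Path

# Hypothetical paths for illustration only.
MODEL_PATH = Path("resnet50_int8.bin")       # deployed inference model blob
TAG_PATH = Path("resnet50_int8.bin.hmac")    # tag created when the model was provisioned


def load_device_key() -> bytes:
    """Placeholder: in a real design the key never leaves the secure hardware;
    it is read from a file here purely to keep the sketch self-contained."""
    return Path("device_root_key.bin").read_bytes()


def model_is_authentic(model_path: Path, tag_path: Path, key: bytes) -> bool:
    """Recompute the HMAC-SHA256 tag over the model blob and compare it,
    in constant time, against the tag stored at provisioning."""
    computed = hmac.new(key, model_path.read_bytes(), hashlib.sha256).digest()
    expected = tag_path.read_bytes()
    return hmac.compare_digest(computed, expected)


if __name__ == "__main__":
    key = load_device_key()
    if not model_is_authentic(MODEL_PATH, TAG_PATH, key):
        raise SystemExit("Model integrity check failed: refusing to load tampered weights")
    print("Model verified; handing off to the inference accelerator")
```

Refusing to load on a failed check is the simplest possible policy; a fielded device might instead fall back to a known-good model or report the event upstream, but the principle of verifying integrity before use is the same.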
Protecting AI/ML systems with a programmable security co-processor
To protect both AI/ML silicon and data, accelerators should be built on a secure, tamper-proof foundation that ensures confidentiality, integrity, authentication and availability (uptime). This can be achieved with a programmable security co-processor that is purpose-built to provide a wide range of comprehensive security functions. These include hardware-based encryption, hashing and signing, key management, provisioning and authentication, as well as proactive monitoring to detect anomalous activity.
A security co-processor can protect AI/ML silicon and applications from malicious attacks in a number of ways, from authenticating firmware before it is allowed to execute to monitoring the running system for anomalous activity.
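As a rough illustration of the boot-time piece of this, the sketch below mimics how a security co-processor might measure accelerator firmware images against golden digests before releasing the accelerator from reset. It is a host-side Python stand-in with assumed file names and placeholder digest values; in real silicon the golden measurements would be anchored in immutable ROM or fuses and the checks performed by dedicated hardware.

```python
import hashlib
from pathlib import Path

# Placeholder golden measurements for illustration; real values would be
# derived from the signed firmware images and stored in protected storage.
GOLDEN_MEASUREMENTS = {
    "accelerator_boot1.img": "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
    "accelerator_runtime.img": "60303ae22b998861bce3b28f33eec1be758a213c86c93c076dbe9f558c11c752",
}


def measure(image_path: Path) -> str:
    """Return the SHA-256 digest of a firmware image (its 'measurement')."""
    return hashlib.sha256(image_path.read_bytes()).hexdigest()


def release_from_reset(firmware_dir: Path) -> bool:
    """Measure every firmware stage and only release the accelerator
    from reset if all measurements match the golden values."""
    for name, expected in GOLDEN_MEASUREMENTS.items():
        actual = measure(firmware_dir / name)
        if actual != expected:
            print(f"Measurement mismatch for {name}: holding accelerator in reset")
            return False
    print("All firmware measurements verified: releasing accelerator from reset")
    return True
```

The same measure-then-compare pattern underpins remote attestation: the recorded measurements can also be signed and reported to a verifier so a data center operator can confirm what firmware an accelerator is actually running.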
Conclusion
AI/ML systems will only grow in importance across every industry. The inherent complexity of AI/ML accelerators requires specific security features to protect valuable training sets and data from attackers. This is true both for edge devices with embedded AI/ML inference accelerators and for servers with AI/ML accelerator cards used for training and inference. As such, AI/ML accelerators should be built on a secure, tamper-proof hardware foundation that ensures confidentiality, integrity, authentication and availability.