Securing AI Silicon

Where and how it can be secured, and how that will change in the post-quantum world.


Awareness of the importance of security in AI training and inference silicon is growing. Over the past several months, I’ve noticed the same questions coming up from many parts of the microelectronics industry. In this blog post, I’ll share my thoughts on some of these frequently asked questions.

First, people often ask whether securing AI silicon is different from securing other types of microelectronics. While I don’t think the security challenges of protecting AI silicon are unique, I do think AI silicon has an expanded threat surface compared to other systems. That said, most of the challenges of protecting AI closely resemble those of other systems, which is helpful, since we are all more familiar with legacy systems than with newer AI systems.

The most instructive legacy example is the FPGA. An FPGA is configured by a piece of firmware called a “bitfile”; once programmed into the device, it causes the programmable logic inside to perform the very specific circuit operations the designers intended. To a large degree, this file can represent the entire value of the years a company has invested in engineering, testing, and prototyping. If you’re an adversary or a competitor of the company that did all that work, it is of great interest to you to see exactly what is in that bitfile.

This analogy applies directly to AI. In AI systems, a training system produces an inference model, which is then loaded into an AI chip that executes it. As with FPGA bitfiles, these inference models embody years of value for the companies that created the training system and the associated training data. So, just as with an FPGA, it is of great interest to an adversary or competitor to obtain the plaintext of an inference model.

The reason for this interest is the “expanded threat surface” mentioned above. Inference models are, as it turns out, surprisingly generic, in the sense that they essentially consist of the trained weights for the various artificial neural network (“ANN”) elements in the design. If your adversary obtained your entire inference model, they could use those weights to program a generic ANN and thereby obtain their own functioning model embodying everything your model was trained on. In other words, if you’re an adversary or a competitor that wants to see a particular company’s “secret sauce,” you can go after its AI models. So, whether an AI model is sitting in memory (data at rest), being pulled into a chip (data in use), or being used in an ANN calculation, the attack vectors someone could use to gain access to those weights are essentially the same as for an FPGA system.
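To make that concrete, here is a toy sketch (in Python with NumPy, using made-up layer shapes and names) of why the weights alone are so valuable: plugged into a generic forward pass, they reproduce the trained behavior without any access to the original training data or pipeline.

```python
import numpy as np

# Toy illustration: the weight matrices alone are enough to reconstruct a working
# model in any generic framework. Shapes and values here are placeholders.
def relu(x):
    return np.maximum(x, 0)

def forward(x, layers):
    """Run a generic multi-layer perceptron given only its extracted weights."""
    for W, b in layers[:-1]:
        x = relu(x @ W + b)
    W, b = layers[-1]
    return x @ W + b  # final layer left linear

# 'layers' stands in for weights lifted from a stolen (plaintext) inference model.
layers = [(np.random.randn(8, 16), np.zeros(16)),
          (np.random.randn(16, 4), np.zeros(4))]
print(forward(np.random.randn(1, 8), layers))
```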

Given the enormous value created by AI, another common question is how and where it can be secured. The answer is that security needs to be implemented at the hardware level, and it is up to the chip manufacturers to provide a secure solution. Again, this has been common practice in the FPGA space for the past few decades, and it is now a very important requirement for emerging AI silicon.

Not only does the silicon have to protect the AI model as it sits in memory, it must also ensure the privacy and authenticity of that data. You need to protect against adversarial modifications that could cause the chip to malfunction, create side-channel leakage of embedded security keys, or insert malware into the chip that grants subsequent access to an authentic model.
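As a concrete, deliberately simplified sketch of what “privacy plus authenticity” means for a model image at rest, here is authenticated encryption with AES-256-GCM using Python’s cryptography package. Key management, anti-rollback counters, and the hardware root of trust that would actually hold the key are out of scope; the header contents and names are illustrative only.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)    # 256-bit symmetric key (see post-quantum note below)
nonce = os.urandom(12)                       # must be unique per encryption under a given key
model_blob = b"...weights and metadata..."   # placeholder for the serialized inference model
header = b"model-id=demo;version=1"          # authenticated but not encrypted (AAD)

ciphertext = AESGCM(key).encrypt(nonce, model_blob, header)

# On the device: decryption fails loudly if either the ciphertext or the header was tampered with.
plaintext = AESGCM(key).decrypt(nonce, ciphertext, header)
assert plaintext == model_blob
```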

Once privacy and authenticity are taken care of, there is also performance to think of. Edge AI systems going into sensor fusion applications in a car, for example, can’t wait 10 seconds for firmware to be securely decrypted, authenticated, and loaded before inference can begin. Security performance, data privacy, and data authenticity are all key to securing AI, and they are things that chip vendors themselves need to implement.
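One way designs keep that latency down is to authenticate and decrypt the image in chunks, so the accelerator can begin loading verified weights before the whole file has been processed. Below is a rough sketch of the idea, again with AES-GCM; the chunk size and framing are hypothetical, not a description of any particular product.

```python
import os
import struct
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

CHUNK = 64 * 1024  # hypothetical chunk size; real values depend on the memory subsystem

def encrypt_chunks(key: bytes, image: bytes):
    """Encrypt a model image as independently verifiable chunks."""
    base = os.urandom(8)
    chunks = []
    for offset in range(0, len(image), CHUNK):
        index = offset // CHUNK
        nonce = base + struct.pack(">I", index)          # unique 12-byte nonce per chunk
        aad = struct.pack(">IQ", index, len(image))      # bind chunk index and total length
        chunks.append(AESGCM(key).encrypt(nonce, image[offset:offset + CHUNK], aad))
    return base, chunks

def load_chunks(key: bytes, base: bytes, chunks, total_len: int):
    """Decrypt and verify chunk by chunk; early chunks can be consumed immediately."""
    for index, ct in enumerate(chunks):
        nonce = base + struct.pack(">I", index)
        aad = struct.pack(">IQ", index, total_len)
        yield AESGCM(key).decrypt(nonce, ct, aad)        # raises InvalidTag on any modification
```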

Finally, the advances and growth we see in AI systems are happening as the world also faces the emerging threat of quantum computers. All public key cryptography is under threat from so-called “cryptographically relevant quantum computers” (CRQCs), that is, quantum computers powerful enough to break public key encryption. What does this mean for AI specifically? When an AI model is data at rest, its privacy must be enforced with 256-bit key strength for the symmetric encryption, because 128-bit key strength will no longer be considered secure once CRQCs exist in the world.
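The arithmetic behind that requirement is a widely used rule of thumb: Grover’s algorithm gives a quantum attacker roughly a square-root speedup on key search, so the effective strength of a symmetric key is about half its length.

```python
# Effective symmetric key strength under Grover's square-root speedup (rule of thumb).
for key_bits in (128, 256):
    print(f"AES-{key_bits}: ~2^{key_bits} classical search, ~2^{key_bits // 2} quantum search")
# AES-128 drops to roughly 2^64 of work for a CRQC-equipped attacker; AES-256 keeps ~2^128.
```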

Similarly, authentication mechanisms are going to have to rely on emerging stateful or stateless hash-based signature schemes and/or other new post-quantum signature algorithms. In the post-quantum world, traditional RSA and elliptic curve technologies are expected to be broken by CRQCs. This is something to factor in when systems being built today are expected to remain secure in the field for 10, 15 years or even longer. These systems must ensure that the protocols securing data privacy and authenticity are implemented with quantum-safe mechanisms.
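To illustrate what “hash-based” means here, below is a toy Lamport one-time signature in pure Python: its security rests only on the hash function, which is why this family (standardized as LMS/XMSS for stateful and SPHINCS+/SLH-DSA for stateless schemes) is expected to withstand CRQCs. This is a conceptual sketch, not a production scheme; each key pair may sign only one message.

```python
import os
import hashlib

H = lambda data: hashlib.sha256(data).digest()

def keygen():
    """One-time key pair: 256 pairs of secrets, public key is their hashes."""
    sk = [(os.urandom(32), os.urandom(32)) for _ in range(256)]
    pk = [(H(a), H(b)) for a, b in sk]
    return sk, pk

def _bits(message: bytes):
    digest = H(message)
    return [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]

def sign(sk, message: bytes):
    return [sk[i][bit] for i, bit in enumerate(_bits(message))]   # reveal one preimage per bit

def verify(pk, message: bytes, sig) -> bool:
    return all(H(sig[i]) == pk[i][bit] for i, bit in enumerate(_bits(message)))

sk, pk = keygen()
firmware = b"inference model image"
sig = sign(sk, firmware)            # this key pair must never sign a second message
assert verify(pk, firmware, sig)
```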

Rambus offers a broad portfolio of security IP that enables hardware-based security for AI silicon, including Root of Trust IP for data-at-rest protection, Inline Memory Encryption IP for data-in-use protection, and Quantum Safe Cryptography solutions to protect devices and data in the quantum era.


