Algorithms And Security

AI is a moving target, and that’s unlikely to change.


From a security standpoint, the best thing AI has going for it is that it’s in a state of perpetual change. That also may be the worst thing. The problem, at least for now, is that no one knows for sure.

What’s clear is that security is not a primary concern in designing and building AI systems. In many cases it’s not even an action item, because architectures are constantly being modified to accommodate changes in algorithms. Unlike in the past, when product cycles lasted years and performance and power improvements were made to systems over time with new manufacturing processes, new materials and different memories, AI developers are racing to cram the equivalent of years of improvements and learnings into ever-shorter cycles.

But building security into these devices requires a stable platform, and at this point those don’t exist. Training algorithms are being written, pruned and adjusted, and chip designs are being constantly tweaked to eliminate bottlenecks in moving data through these devices. All of that is great for AI systems on one level, but the impact on security could be significant.

The challenge is building in enough security that these chips still behave in predictable ways, and from an AI perspective within acceptable distributions, without impacting performance. In effect, security in AI needs to have extremely limited overhead, because the whole purpose of these devices is blazing processing speed at very low power. In many cases these devices will run on a battery, and their target application is accelerating multiply-accumulate operations, which are highly compute-intensive.

So how do these worlds fit together? There are several possible approaches. First, data traffic needs to be monitored constantly from inside these devices, and a detailed baseline of activity needs to be developed and standardized, at least for users of a particular architecture. That can be done with temperature sensors, which can detect minor fluctuations in traffic, and by monitoring the movement of data through a chip. If there is unusual activity, those operations can be shut down, a determination can be made about what caused the spike in traffic, or both.
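The idea of flagging deviations from a baseline can be sketched in a few lines. This is a minimal illustration, not any vendor's actual monitoring logic: it assumes traffic can be reduced to a scalar reading per interval, builds a rolling baseline, and flags any reading more than a few standard deviations away.

```python
from collections import deque

class TrafficMonitor:
    """Hypothetical sketch: flag on-chip traffic readings that deviate
    sharply from a rolling baseline of recent activity."""

    def __init__(self, window=100, threshold=3.0):
        self.samples = deque(maxlen=window)  # recent traffic readings
        self.threshold = threshold           # allowed standard deviations

    def observe(self, reading):
        """Return True if the reading looks anomalous against the baseline."""
        if len(self.samples) < self.samples.maxlen:
            self.samples.append(reading)     # still learning the baseline
            return False
        mean = sum(self.samples) / len(self.samples)
        var = sum((s - mean) ** 2 for s in self.samples) / len(self.samples)
        std = var ** 0.5 or 1e-9             # guard against zero variance
        anomalous = abs(reading - mean) / std > self.threshold
        if not anomalous:
            self.samples.append(reading)     # fold only normal traffic back in
        return anomalous
```

A real implementation would live in hardware counters rather than Python, but the design choice carries over: only non-anomalous readings update the baseline, so an attacker cannot slowly "train" the monitor to accept a spike.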

The great thing about this approach is that it requires very little performance overhead, and it is effective at finding problems that may not show up when circuits are not being utilized. But whether it will catch all aberrant traffic, or whether that traffic can be blended in with other legitimate traffic, isn’t clear.

A second approach is to physically track the supply chain for the algorithms and the various hardware accelerators in those devices. Supply chain management is well understood for components on the hardware side, but it has never been applied to algorithms or to the people who work on critical pieces of those algorithms. Demand for data scientists and programmers is so high at the moment that many companies will hire anyone with the proper skill set. That human supply chain needs to be traced and tracked as closely as the physical components.
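One way to make such tracking tamper-evident is to record each algorithm artifact, together with who touched it and at what stage, in a hash-chained ledger. The sketch below is an assumption about how this could be done, not a description of any existing tool; the function names and fields are illustrative.

```python
import hashlib
import json

def record_provenance(ledger, artifact_bytes, author, stage):
    """Append a tamper-evident entry tying an artifact (e.g. model weights)
    to an author and a pipeline stage, chained to the previous entry."""
    prev_hash = ledger[-1]["entry_hash"] if ledger else "0" * 64
    entry = {
        "artifact_hash": hashlib.sha256(artifact_bytes).hexdigest(),
        "author": author,
        "stage": stage,
        "prev_hash": prev_hash,
    }
    entry["entry_hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    ledger.append(entry)
    return entry

def verify_ledger(ledger):
    """Re-derive every entry hash and check that the back-links are intact."""
    prev = "0" * 64
    for entry in ledger:
        if entry["prev_hash"] != prev:
            return False
        body = {k: v for k, v in entry.items() if k != "entry_hash"}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if digest != entry["entry_hash"]:
            return False
        prev = entry["entry_hash"]
    return True
```

Because each entry hashes the one before it, silently rewriting who trained or pruned an algorithm breaks verification for every later entry.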

