The Critical But Less Obvious Risks In AI

Just because a system starts out working correctly doesn’t mean it will continue to do so.


AI has been the subject of intense debate since it was first introduced back in the mid-1950s, but the real threat is a lot more mundane and potentially even more serious than the fear-inducing picture painted by its critics.

Replacing jobs with technology has been a controversial subject for more than a century. AI is a relative newcomer in that debate. While the term “artificial intelligence” was coined by John McCarthy at a summer conference at Dartmouth College in 1956, it has been more science fiction than reality for most of the years since then due to limitations in processing power, storage and network throughput. The idea of machines that truly “think” and take on all of the attributes of humans with none of the weaknesses is still a long way off, and there are questions about whether AI will ever really amount to more than a really good tool for experts in a particular field.

Nevertheless, there is plenty that AI can do. It can steer cars better and more consistently than people. It can make life-or-death decisions faster, and it can corral data from more sources than humans in far less time. AI systems don’t text, fall asleep at the wheel, or misjudge the distance to the car in front of them. On an assembly line, they can work incessantly without the fatigue of repetition.

But technology ages in ways that aren’t always predictable. AI systems rely on hundreds or thousands of sensors, complex algorithms that are regularly updated, and training data that may not be completely accurate. Typically, all of that is calibrated when an AI system is developed and put online. Over time, however, there are sometimes indiscernible shifts in how the technology behaves. A sensor may not record everything. It may miss a few frames of a video stream or not pick up a certain frequency. So rather than fitting neatly into a distribution of acceptable behavior, AI systems begin to stray.
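The kind of straying described above can be caught only if it is watched for. As a minimal sketch of one common approach, assuming a hypothetical sensor calibrated to a known mean and standard deviation, a monitor can flag the moment the rolling average of recent readings drifts outside the distribution recorded at calibration time (the function name, window size, and threshold here are illustrative, not from any particular system):

```python
import statistics

def drift_alarm(readings, baseline_mean, baseline_stdev,
                window=50, threshold=3.0):
    """Flag drift when the rolling mean of the most recent readings
    strays more than `threshold` standard deviations from the mean
    recorded when the sensor was calibrated."""
    if len(readings) < window:
        return False  # not enough data to judge yet
    recent_mean = statistics.fmean(readings[-window:])
    return abs(recent_mean - baseline_mean) > threshold * baseline_stdev

# A stable sensor stays inside the calibrated distribution.
stable = [1.0] * 200
print(drift_alarm(stable, baseline_mean=1.0, baseline_stdev=0.01))   # False

# A sensor that develops a slow bias (0.1% per sample) trips the alarm.
drifting = [1.0 + 0.001 * i for i in range(200)]
print(drift_alarm(drifting, baseline_mean=1.0, baseline_stdev=0.01)) # True
```

The design choice worth noting is that the check compares against the calibration-time baseline rather than yesterday's data, so gradual drift cannot quietly redefine "normal."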

This isn’t a focus for most AI developers. Their chief concern is getting systems to work at high speed and within a set power budget. This is brand new technology, and as with all new technology the challenge is bringing these devices to market and growing the revenue stream. But these are complex systems, and they need to be updated throughout their lifetimes and monitored to prevent, for lack of a better term, technology drift.

Think about the layers of software on a personal computer. After years of patching, the software slows down to the point where users get frustrated and ultimately replace those devices. AI systems are even more complicated because they rely on input from the physical world to define a digital interpretation of that input. So there is a constant flood of data and always-on circuitry, which can bring latent defects to the surface faster than circuitry that is kept dark most of the time. That can be exacerbated by use in harsh environments, such as inside a car or a factory.

AI systems also are finely tuned machines. They are developed for a specific application. Algorithms are pruned to run as fast as possible with just enough accuracy, and these systems are architected to move data in and out at blazing speeds. But if they slowly begin to generate less-than-perfect results, the compounded effect of all of these minor errors can be significant.
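The compounding effect is easy to underestimate. Under the illustrative assumption that each processing stage introduces a small relative error that multiplies through the pipeline, the worst-case combined error grows geometrically rather than linearly:

```python
# Hypothetical figures: a 0.5% relative error at each of 100 stages.
# If the errors multiply through the pipeline, the worst case is
# (1 + e)**N - 1, not the linear sum N * e.
per_stage_error = 0.005
stages = 100

compounded = (1 + per_stage_error) ** stages - 1
linear_sum = stages * per_stage_error

print(f"compounded: {compounded:.1%}")  # ≈ 64.7%
print(f"linear sum: {linear_sum:.1%}")  # 50.0%
```

So a per-stage error that looks negligible in isolation can translate into a large end-to-end deviation, which is why these systems need monitoring over their full lifetimes.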

AI is a very promising technology for many purposes, but it also poses real risks based upon technological changes that need to be understood and monitored throughout the lifetime of these systems of systems. This may sound like a mundane task, but the implications are much too large to ignore.
