The Other Side Of AI System Reliability

Who wins and loses as more intelligence is added into devices.


Adding intelligence into pervasive electronics will have consequences, but not necessarily what most people expect.

Nearly everything electronic these days has some sort of “smart” functionality built in or added on. This can be as simple as a smoke alarm that alerts you when the batteries are running low, a home assistant that learns your schedule and dials the thermostat up or down, or a robotic vacuum cleaner that maps your home and concentrates its cleaning wherever it finds the most dirt.

To call this intelligence is a misnomer, of course. In some cases, it’s nothing more than bit mapping. Still, building pattern recognition into devices can greatly improve the user experience, and in many cases consumers are willing to pay for the added convenience. They don’t want refrigerators ordering their food, but they’re more than willing to pay not to have to do mundane tasks.

But how will all of these pieces interact over time, particularly as more devices are integrated into other devices? Today, there is no way to test or even fully predict these interactions. In the case of a vacuum cleaner, it probably doesn’t matter if it runs into an object or gets stuck under a chair. But if it’s a robot or a car, interference of any sort can cause injury.

The problem is made worse when that interference comes from inside a system — or worse still, inside another system connected to that system. The more intelligence added piecemeal into devices, the greater the possibility something will go wrong. And the faster these devices move and interact in any way, the greater the potential for trouble.

What’s needed are layers of rules that define how these devices should behave, and they need to be set by a consortium of companies spanning the entire supply chain, from design through manufacturing, test and inspection, all the way into the field. Rather than just one or two companies determining how their devices should interact with the other devices they develop, this provides a way for the entire industry to get involved and for innovation to expand.

In the AI/ML world, this presents a huge challenge. Many of these algorithms are only vaguely understood by the companies or data scientists who develop them. It’s hard enough just to get these devices working properly, as evidenced by the rate of algorithm updates. There are almost no defined sets of rules about how these devices should interact, or what kind of security needs to be built in.

Much of this development work has been done in secret. Many of the algorithms developed by companies are highly proprietary. Tesla’s autonomous driving algorithm is very different from Volkswagen’s. And so far, these systems interact very little. Their main goal is avoidance.

But they will have to interact at some point, with each other and with infrastructure, in order to improve safety and reliability. Standards will be required for simulating a given set of possible interactions, much the way smartphones were developed against 60,000 to 80,000 possible user scenarios. In the case of AI/ML systems, the number of interactions could be orders of magnitude higher. That’s more corner cases than a single company can possibly cover by itself, but it’s certainly within the realm of possibility for an entire supply chain in a multi-trillion-dollar industry.

For this market to grow beyond a handful of very rich companies, and to expand beyond what are at this point extremely crude interactions, communication and interoperability will need to happen. And to make it all work as expected, these devices will have to be monitored and reliable, and the data they collect will need to be precise enough for a particular purpose.

This is an enormous undertaking, and one that will push many companies out of their comfort zone. But done correctly, it could increase the chip industry’s footprint on a mammoth scale.
