How Good Is 95% Accuracy?

Learning to deal with machines making mistakes.


Conventional, deterministic computers don’t make mistakes. They execute a predictable series of computations in response to any given input. The input might be mistaken. The logic behind the operations that are performed might be flawed. But the computer will always do exactly what it has been told to do. When unexpected results occur, they can be attributed to the programmer, the system manufacturer, or some other responsible human or corporate entity. 

“Intelligent” computers are different. Often, the result of a classification task is an estimate: a probability that an image belongs to a particular group, a value predicted with some degree of confidence. The best image and voice recognition algorithms can achieve accuracies above 90%, in some cases approaching the results achieved by humans, but they are not perfect. 
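To illustrate the distinction, here is a minimal sketch in Python. The deterministic function is guaranteed to return the same output for the same input; the classifier scores below are a hypothetical example of the kind of per-class probability estimate the text describes, not output from any particular model.

```python
# A deterministic computation: the same input always yields the same
# output, and any wrong answer traces back to a human mistake.
def total_price(unit_price: float, quantity: int) -> float:
    return unit_price * quantity

# A classifier, by contrast, returns an estimate. These scores are a
# hypothetical example of the per-class probabilities (e.g. from a
# softmax layer) that image recognition systems typically produce.
prediction = {"cat": 0.91, "dog": 0.07, "fox": 0.02}

best_label = max(prediction, key=prediction.get)
confidence = prediction[best_label]

# "91% confident" is not "correct": if the score is well calibrated,
# roughly one such answer in ten is wrong, and nothing flags which one.
print(f"Predicted '{best_label}' at {confidence:.0%} confidence")
```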

Moreover, computers make different kinds of mistakes than humans do. Your phone’s voice recognition system will never make spelling mistakes, but may choose words that make no sense in context. We’ve probably all experienced some form of auto-correct-driven embarrassment. There’s an important difference, though, between an amusing or frustrating messaging application and software used in security-sensitive or life-critical situations.

For example, more than two million people pass through Atlanta’s airport every week. If a 95% accurate facial recognition system compares those passengers against a collection of images of suspected terrorists, there will be more than a hundred thousand mistaken recognitions per week. (Probably more, since the image database itself is unlikely to be completely reliable.) Because terrorists are rare, the overwhelming majority of those mistakes will be false positives, potentially subjecting completely innocent people to travel delays and intrusive searches.
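To make that back-of-the-envelope arithmetic concrete, here is a minimal sketch in Python. The 95% accuracy figure and the two-million-passenger volume come from the text above; the number of actual terrorists in the crowd is a hypothetical placeholder, included only to show how lopsided the odds become when the thing being screened for is rare.

```python
# Base-rate arithmetic for the airport example. Figures from the article:
# ~2,000,000 passengers per week, 95% recognition accuracy. The count of
# actual terrorists is a hypothetical assumption for illustration.

passengers_per_week = 2_000_000
accuracy = 0.95
error_rate = 1 - accuracy        # 5% of comparisons go wrong

actual_terrorists = 10           # hypothetical; the true number is tiny

# Roughly 5% of the innocent passengers get flagged by mistake.
false_positives = (passengers_per_week - actual_terrorists) * error_rate

# Even if the system catches 95% of the real terrorists...
true_positives = actual_terrorists * accuracy

# ...the chance that any given flagged passenger is a real terrorist
# remains vanishingly small.
p_real_given_flagged = true_positives / (true_positives + false_positives)

print(f"False positives per week: {false_positives:,.0f}")      # ~100,000
print(f"P(terrorist | flagged):   {p_real_given_flagged:.4%}")  # ~0.0095%
```

This is the classic base-rate fallacy: even a very accurate test produces mostly false alarms when the condition it screens for is rare.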

For life-critical applications like autonomous vehicles, the consequences of recognition errors can be even more severe. In one well-publicized case, a Tesla in “autopilot” mode failed to recognize a truck crossing its path; the resulting crash killed the driver.

I haven’t driven an autonomous car, but my vehicle does have “smart” cruise control, which automatically slows or accelerates to maintain a safe following distance. It behaves well most of the time, but has an unnerving tendency to treat overhead signs and overpasses as obstacles. In one case, I was following a truck that slowed, then exited. The cruise control, however, appeared to treat the truck and a similarly colored overpass as a single object. It braked hard to protect me from the “truck” that it thought was suddenly stationary in the middle of the road. Fortunately, there weren’t any cars behind me.

Human drivers make mistakes too, of course. They swerve, they get lost, they turn abruptly and unpredictably. One of the arguments for autonomous vehicles is that, even if they aren’t perfect, they will still save lives by being better than humans. That’s probably true, but autonomous vehicles introduce new kinds of mistakes that the human drivers around them may not be prepared to recognize. As “intelligent” machines become more common, both citizens and policy makers will need to grapple with their failures as well as their successes. 


