As more things are connected, there’s more that can go wrong.
There are two distinct camps forming around autonomous vehicles. One group wants to see self-driving cars on the road as quickly as possible because they will save more lives than cars with people behind the wheel. Others are wary, insisting there is no way this can or should happen in the next 10 to 15 years.
Time will tell who’s right. But what is clear is that the technology has far outstripped our ability to assure these devices will behave in the real world the way everyone expects. Even setting aside the liability issues, which are a hot topic in the legal and insurance professions these days, the technology has raced well ahead of the tools needed to verify these systems will work. And standards groups are so far behind the pace of technology introductions that they may never catch up.
The upshot of all of this, like it or not, is that we all have become beta testers. Technology is leading the way, and we have no choice but to interact with it. Even people who never buy an autonomous vehicle are beta testers, because they will encounter autonomous vehicles on the road. And if one of those vehicles messes up, anyone injured along the way is part of a beta test gone awry.
This is all part of a subtle but rather profound shift, which has been underway for some time. With drivers behind the wheel, most accidents can be attributed to the driver. There are recalls in which carmakers and tier-one suppliers are blamed for faulty airbags, cars that unexpectedly kick into gear, or exploding tires, but the vast majority of accidents still come down to driver error.
This shift goes well beyond cars, too. Any device that can extrapolate beyond its explicit programming is effectively in beta all the time. That makes these devices much more adaptable and capable, but it also makes them less predictable in any individual situation. So while they may be much safer than people at certain tasks, and far more consistent on a percentage basis, they also can respond in ways no one expected.
A security robot at the Stanford Shopping Center in Palo Alto, Calif., ran over a 16-month-old child in July. Surgical robots have racked up a hefty list of injuries, as well.
In the overwhelming majority of cases these devices do far more good than harm, and there is at least an electronic trail for figuring out what went wrong after the fact. With autonomous vehicles, the machines are responsible, and the chain of liability becomes much longer and deeper (chipmakers and software vendors could well be included in lawsuits and recalls in the future) and far more traceable.
However, it’s also much harder to predict these aberrations up front. The number of corner cases will explode as more devices reach beyond what they were explicitly programmed to do. And as more things are connected, both to the Internet and to each other, we are increasingly surrounded by these devices. Some of them are in motion, others are fixed, but none of them is guaranteed to work the way anyone expects.
That turns everyone into a beta tester. Even people who want nothing to do with the latest technologies are involved in one way or another. And as more of this technology gets connected, that involvement will only increase.