Artificial, With Questionable Intelligence

Debate begins to stir over the choices machines make.

A common theme is emerging in the race to develop big machines that can navigate a world filled with people, animals, and other assorted objects: if an accident is inevitable, what options are available to those machines, and how should they decide?
 
This question was raised at a number of semiconductor industry conferences over the past few weeks, which is interesting because the idea has been kicked around for at least a couple of years. But in the past it was merely theory. Now, as autonomous vehicles begin to roll onto city streets and highways and real accidents occur, the issue has resurfaced in earnest.

On a business level, the rollout of autonomous vehicles has been generating an increasing number of questions about who is financially responsible for these accidents. AI has fascinating possibilities, and machine learning and deep learning are valuable tools, but these technologies are still deep in the research phase even though they are being applied to real products today. Algorithms are changing almost daily, which is why most of the chips being used in these devices are either inexpensive arrays of GPUs for training or some type of programmable logic for inferencing.

AI/ML/DL collectively represent a potential bonanza for the tech world in general, and the semiconductor industry in particular. But this is an opportunity that also has sharp teeth. There is a very real possibility that the chain of liability could extend far deeper into the supply chain than in the past. The chip industry has been remarkably successful in avoiding liability issues so far, in large part because most designs have not been aimed at safety-critical markets.

As chipmakers, tools vendors and IP developers race into the automotive world, they are being forced to deal with very different kinds of issues. For example, if machines can perform tasks better than people, is that considered good enough? If the number of roadway fatalities decreases with the introduction of autonomous vehicles, is there an acceptable accident rate?

Accidents will continue to occur, with or without autonomous devices, even with more nines added to the right of the decimal point. The odds of failure increase as the number of components and the amount of software content continue to rise and the supply chain continues to expand. This problem extends well beyond cars. It includes drones, robots of all sorts, and even seemingly benign devices such as household appliances. As connectivity increases, so does the potential impact of failure of any sort, whether it’s caused by equipment failure, unexpected interactions and corner cases, or malicious code.
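To make that compounding effect concrete, here is a rough illustration (the component counts and the five-nines per-part figure are hypothetical, chosen only to show the trend, not drawn from the article): if failures are independent, system-level reliability is the product of the per-component reliabilities, so adding parts steadily erodes even very good individual numbers.

# Hypothetical illustration: system reliability as the product of independent
# per-component reliabilities. Figures are illustrative, not measured data.

per_part = 0.99999                    # five nines per component (assumed)
for parts in (100, 1_000, 10_000):
    system = per_part ** parts        # independent-failure model
    print(f"{parts:>6} components at {per_part}: system reliability ~ {system:.4f}")

With those assumed numbers, 100 parts still yield roughly 0.999, but 10,000 parts drop the system to about 0.905, which is the sense in which more nines per part does not settle the question on its own.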

Quality, coupled with best practices, has been a solid defense against liability claims in the past. But two things are changing. First, technology increasingly is crossing into the world of functional safety, a trend that is only going to increase. And second, machines that react to stimuli on their own likely will develop unique behaviors over time, so defining quality and best practices becomes significantly more difficult.

The chip industry is just beginning to grapple with these issues, which is why discussions about rules, standards and ethics are surfacing at semiconductor conferences—and in conference rooms of semiconductor companies around the world. The scenario cited most frequently (in one iteration or another) is this: if an accident is inevitable, does an autonomous device hit a person, or does it risk injuring the occupants of the vehicle by veering off the road?

That leads to several important questions. First, who will make that determination? Second, on what basis will that determination be made? And third, will that determination still apply if the behavior of a machine isn’t entirely predictable?

This all adds up to a lot of questions, and not so many answers.



1 comment

realjj says:

The vehicle has a responsibility toward the passengers, followed by the actors that are not breaking the law or the rules. The ones not playing by the rules are at the highest risk.
The severity of the injuries must be factored in too: death, permanent damage, severe injury and so on. A passenger can break a leg to save 10 children sitting in the middle of the road, but can’t be killed to save those 10 children. The innocents (the passenger and the other actors) must be prioritized over someone who takes the risk of not following the rules.

In practice it is easier, unless someone jumps in front of the car and there is no time to react at all. Speeds must be appropriate for each environment, and low latency plus some work on braking distance should allow for the impact speed to be survivable. The best cars can come to a full stop from 26.8 m/s (60 mph) in about 31 m, and that could be improved greatly at a cost (wider tires, more downforce, maybe active aero for downforce). But you are not going to travel that fast on a narrow one-lane city street with pedestrians on each side. On such a street, maybe 30 km/h (8.33 m/s) or less, and then, just a guess without doing the math, maybe the car can stop in 5-7 m plus reaction time. Of course, avoiding death and severe injuries doesn’t require an impact speed of zero. That’s in practice, and without factoring in the ability to steer or accelerate.
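A quick sanity check of those numbers, as a rough sketch rather than anything from the comment itself: the 11.6 m/s² deceleration below is roughly what a 60 mph to 0 stop in 31 m implies, and the 0.2 s sensing-plus-actuation latency is an assumption.

# Back-of-the-envelope stopping-distance check (constant-deceleration model).
# Assumptions: peak braking deceleration of 11.6 m/s^2 and a hypothetical
# 0.2 s perception + actuation delay for the autonomous system.

DECEL = 11.6    # m/s^2, assumed peak braking deceleration
LATENCY = 0.2   # s, assumed sensing + actuation delay

def stopping_distance(speed_mps, decel=DECEL, latency=LATENCY):
    """Distance covered during the latency plus the braking phase."""
    return speed_mps * latency + speed_mps ** 2 / (2 * decel)

for kmh in (30, 50, 96.6):            # 96.6 km/h is roughly 60 mph
    v = kmh / 3.6                     # convert km/h to m/s
    print(f"{kmh:5.1f} km/h ({v:4.1f} m/s): stops in ~{stopping_distance(v):.1f} m")

Under those assumptions, the braking phase alone at 30 km/h is roughly 3 m, and the total with latency is under 5 m, which is in the same ballpark as the 5-7 m guess once reaction time is counted; at 60 mph the total climbs to roughly 36 m.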

The car can also exercise caution and avoid ending up in a risky situation that might require difficult choices. It does need to think ahead, though; it can’t be just reactive and still be “good enough” for the real world.

Maybe to put it in a simpler way: if the car gets into such a situation, it has already done something wrong, whether inappropriate speed or a failure to anticipate. To me, these examples seem a bit hysterical and out of context.

I do wonder whether investing to reduce braking distance is an absolute must with autonomy. Safety will be crucial, and electricity gets cheaper and cheaper with renewables, so losing some efficiency for a gain here might just be worth it.
