Self-Driving Cars And Kobayashi Maru

How should cars make moral decisions? It might not matter.


Kobayashi Maru. If you know what I am talking about, you are a bona fide Star Trek fan. If not, indulge me.

The Kobayashi Maru is a computer-simulated training exercise in the fictional Star Trek universe, in which Starfleet Academy cadets are presented with a no-win scenario. They still have to make a decision.

The goal of the exercise is to rescue a disabled civilian vessel, the Kobayashi Maru, which has drifted into the neutral zone separating the Klingon Empire and the Federation. The cadet crew must decide whether to cross into the neutral zone and rescue the Kobayashi Maru’s crew, thereby endangering themselves and their own ship, not to mention risking an interstellar war of epic proportions. The other undesirable option: leave the Kobayashi Maru to certain destruction, killing everyone aboard. However, unbeknownst to the cadets, if they cross into the neutral zone and attempt the rescue, the simulation is pre-programmed to guarantee that their ship is destroyed with the loss of all hands. It is truly a no-win situation.

What does this have to do with us? This blog post was supposed to be about self-driving cars.

The topics of autonomous driving and artificial intelligence are all the rage in Silicon Valley and everywhere else on the planet. Every day, we read news stories and blog posts about self-driving cars, with comments questioning whether we can trust machines (or computers) to do critical thinking and make life-and-death decisions. This contrived question about life and death comes up because neural network computing and AI have now entered the sphere of self-driving cars.

Most of us assume that when we eventually get to Level 5 autonomous driving, there will be no human involvement and the car will decide what to do in all situations. After all, we have said that the human driver is the weak link when it comes to vehicular safety: if we can eliminate the human factor, we will all be safer and live longer. This is truly a hands-off paradigm.

So, what is this no-win scenario that self-driving cars will face?

The trolley problem. Yes, people have coined a term for this no-win hypothetical, in which a person witnessing a runaway trolley can either allow it to hit several people or, by pulling a lever, divert it onto another track where it will kill someone else. No, I did not make this up.

In a similar scenario, a self-driving car is moving down the road toward a crowded crosswalk when its brakes malfunction. Yes, this should not happen (but stay with me and allow me to indulge a bit). If the car swerves, its passengers may be injured or killed. If the car stays on its present course, it will hit and kill a number of pedestrians. The self-driving car must be pre-programmed to make this decision. Well, computers do not program themselves; engineers program computers. Or do they? Will AI allow the computer to make these decisions on its own?
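To make the “pre-programmed decision” concrete, here is a minimal sketch of what such a rule could look like. Everything in it is a hypothetical illustration: the names, the casualty estimates, and especially the idea that “minimize expected casualties” is the right rule. No manufacturer has published such a policy, and choosing that rule is exactly the moral question.

```python
from dataclasses import dataclass

# Hypothetical outcome estimates produced upstream by perception and
# prediction. All names and numbers here are illustrative assumptions,
# not any manufacturer's actual policy.
@dataclass
class Outcome:
    action: str                 # e.g., "stay_course" or "swerve"
    expected_casualties: float  # estimated fatalities for this action

def choose_action(outcomes: list[Outcome]) -> Outcome:
    """Pick the action with the fewest expected casualties.

    The moral decision is hidden in this one line: someone decided,
    ahead of time, that "fewest expected casualties" is the rule.
    """
    return min(outcomes, key=lambda o: o.expected_casualties)

if __name__ == "__main__":
    dilemma = [
        Outcome("stay_course", expected_casualties=5.0),  # hits pedestrians
        Outcome("swerve", expected_casualties=1.0),       # endangers passenger
    ]
    print(choose_action(dilemma).action)  # -> "swerve"
```

The engineering here is trivial; the single line choosing the minimum is where the ethics were decided, long before the car ever left the factory.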

This is a moral question, with the ethical decision “determined by the programmer” ahead of time.

Enter the “Moral Machine” from MIT.

It is a platform for gathering human perspectives on moral decisions made by machine intelligence, such as self-driving cars. The simulation shows you a moral dilemma, not unlike the trolley problem or the Kobayashi Maru, in which a self-driving (or driverless) car must choose the lesser of two evils: killing the passengers in the car or killing five pedestrians. As an outside observer, you decide which outcome is more acceptable to you, and you can see how your responses compare with those of other participants. If you don’t like the scenarios presented, the MIT “Moral Machine” even lets you create your own. All this in the name of science.

Who says geeks cannot have fun?!

There is actually a brighter side to this story, especially if you are an engineer. This no-win situation should never happen, so we really do not need to worry about it. When I took my driver’s license test, I don’t remember this being one of the questions, so why should it be posed to a driverless car? That is a double standard. If anything, self-driving cars can all but ensure this scenario never arises in the first place. With all the sensors and surround-view technology already present in cars you can buy today, a self-driving car will be aware of such a situation long before a human driver could foresee it. It will slow down or stop well in advance, eliminating the trolley problem. The enabling technology is already in widespread deployment: adaptive cruise control, automatic emergency braking, lane departure warning and accident avoidance, and blind spot detection, all enabled by sensor fusion.
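For a feel of how “stop well in advance” works, here is a minimal time-to-collision sketch, assuming perception has already fused the sensor data into a distance and a closing speed. The function names, thresholds, and deceleration figure are illustrative assumptions, not production values from any real system.

```python
def time_to_collision(distance_m: float, closing_speed_mps: float) -> float:
    """Seconds until impact if neither object changes speed."""
    if closing_speed_mps <= 0:  # gap is opening: no collision course
        return float("inf")
    return distance_m / closing_speed_mps

def should_brake(distance_m: float, closing_speed_mps: float,
                 reaction_margin_s: float = 2.0,
                 max_decel_mps2: float = 6.0) -> bool:
    """Brake if stopping time plus a safety margin exceeds the TTC.

    Stopping time at constant deceleration is v / a; the margin stands
    in for sensing and actuation latency. Both values are illustrative.
    """
    stopping_time_s = closing_speed_mps / max_decel_mps2
    ttc_s = time_to_collision(distance_m, closing_speed_mps)
    return ttc_s < stopping_time_s + reaction_margin_s

# A crosswalk 40 m ahead, car closing at 14 m/s (~50 km/h):
# TTC ~2.9 s, stopping time ~2.3 s plus 2.0 s margin -> brake now.
print(should_brake(40.0, 14.0))  # True
```

In practice the margin would also account for road conditions and passenger comfort, but the point stands: a car that reasons about time-to-collision at 40 meters never has to answer the trolley problem at 4.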


