
Should a driverless car let a passenger die in order to save pedestrians?

If we trust automated cars to actually drive--which is already the most dangerous thing most of us do in a week--we will also have to trust them to decide who lives and who dies in emergency situations.

Google and the Trolley Problem | Owen abroad

In 1967, the philosopher Philippa Foot posed what became known as “The Trolley Problem”. Suppose you are the driver of a runaway tram (or “trolley car”) and you can only steer from one narrow track on to another; five men are working on the track you are on, and there is one man on the other; anyone on the track that the tram enters is bound to be killed. Should you allow the tram to continue on its current track and plough into the five people, or do you deliberately steer the tram onto the other track, so leading to the certain death of the other man?

Being a utilitarian, I find the trolley problem straightforward. It seems obvious to me that the driver should switch tracks, saving five lives at the cost of one. But many people do not share that intuition: for them, the fact that switching tracks requires an action by the driver makes it more reprehensible than allowing five deaths to happen through inaction.

If it were a robot in the driver’s cab, then Asimov’s Three Laws wouldn’t tell the robot what to do. Either way, humans will be harmed, whether by action (one man) or inaction (five men). So the First Law will inevitably be broken. What should the robot be programmed to do when it can’t obey the First Law?

This is no longer hypothetical: an equivalent situation could easily arise with a driverless car. Suppose a group of five children runs out into the road, and the car calculates that they can be avoided only by mounting the pavement and killing a single pedestrian walking there. How should the car be programmed to respond?
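To make the question concrete, here is a minimal sketch of one possible answer: a purely utilitarian rule that always minimises expected deaths. Everything in it (the Maneuver class, the casualty estimates, choose_maneuver) is invented here for illustration and is not anyone’s actual control software.

```python
# A hypothetical, purely utilitarian decision rule: pick whichever
# maneuver is expected to kill the fewest people, regardless of who
# they are. Nothing here reflects any real driverless-car system.
from dataclasses import dataclass


@dataclass
class Maneuver:
    name: str
    expected_casualties: int  # estimated deaths if this maneuver is chosen


def choose_maneuver(options: list[Maneuver]) -> Maneuver:
    """Return the maneuver with the fewest expected casualties."""
    return min(options, key=lambda m: m.expected_casualties)


# The scenario above: stay on course (five children) or mount the
# pavement (one pedestrian).
options = [
    Maneuver("stay on course", expected_casualties=5),
    Maneuver("mount the pavement", expected_casualties=1),
]
print(choose_maneuver(options).name)  # -> mount the pavement
```

A deontological rule could not work from a casualty count alone: it would also have to distinguish deaths the car causes by acting from deaths it merely fails to prevent.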

There are many variants on the Trolley Problem (analysed by Judith Jarvis Thomson), most of which will have to be reflected in the cars’ algorithms one way or another. For example, suppose a car finds on rounding a corner that it must either drive into an obstacle, leading to the certain death of its single passenger (the car owner), or it must swerve, leading to the death of an unknown pedestrian. Many human drivers would instinctively plough into the pedestrian to save themselves. Should the car mimic the driver and put the interests of its owner first? Or should it always protect the interests of the stranger? Or should it decide who dies at random? (Would you buy a car programmed to put the interests of strangers ahead of the passenger, other things being equal?)

One option is to let the market decide: I can buy a utilitarian car, while you might prefer the deontological model. Is it a matter of religious freedom to let people drive a car whose algorithm reflects their ethical choices?
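If the market really did decide, the difference between models might come down to little more than a swappable policy function. The sketch below is only illustrative; every name in it is invented here rather than taken from any real system.

```python
# A purely illustrative sketch of "letting the market decide": the car's
# ethics module is a swappable policy function. All names are invented.
import random
from typing import Callable

# Each candidate maneuver maps to (passenger deaths, pedestrian deaths).
Outcomes = dict[str, tuple[int, int]]
Policy = Callable[[Outcomes], str]


def utilitarian(outcomes: Outcomes) -> str:
    # Minimise total deaths, no matter whose they are.
    return min(outcomes, key=lambda m: sum(outcomes[m]))


def owner_first(outcomes: Outcomes) -> str:
    # Protect the passenger first, then minimise other deaths
    # (tuples compare passenger deaths before pedestrian deaths).
    return min(outcomes, key=lambda m: outcomes[m])


def coin_toss(outcomes: Outcomes) -> str:
    # Decide who dies at random.
    return random.choice(list(outcomes))


# The corner scenario above: hit the obstacle (the passenger dies) or
# swerve (a pedestrian dies). Both options cost one life, so the
# utilitarian rule is indifferent here and simply returns the first.
scenario: Outcomes = {"hit obstacle": (1, 0), "swerve": (0, 1)}
for policy in (utilitarian, owner_first, coin_toss):
    print(policy.__name__, "->", policy(scenario))
```

The point of the sketch is only that, whichever policy is chosen, the ethical decision ends up encoded in a few lines of code somewhere.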

Perhaps the normal version of the car will be programmed with an algorithm that protects everyone equally and displays advertisements to the passengers, while wealthy people will be able to buy the ‘premium’ version that protects its owner at the expense of other road users. (This is not very different to choosing to drive an SUV, which protects the people inside the car at the expense of the people outside it.)

...