Self-driving cars, accidents and the trolley problem

The trolley problem, a thought experiment, is famous for making us face up to difficult choices. A heavy trolley is coming along a railway track, at speed, towards a set of points. You can leave things as they are and let the trolley carry on, in which case it will kill 6 people who happen to be tied to the line. Alternatively, you can switch the points so that the trolley goes down a side line instead, killing only one person, someone who had the misfortune to be tied to that other line. Most people say that they would send the trolley hurtling down the side line.

But there is a variation on this problem. Instead of changing the direction of the trolley in order to preserve the lives of 6 people at the cost of one, you can stop the trolley completely. All you have to do is push a fat guy, who happens to be standing next to you on a bridge, over the parapet and onto the line beneath. His overweight body will stop the trolley in its tracks, but of course kill him. What to do? The prospect of actually pushing someone onto the track, instead of switching the points – a more remote, less personal action – dissuades the majority of the subjects of the experiment from deciding to cause his death.

It is said that this decision is because of the personal involvement inherent in the action of pushing someone. In my opinion, however, it is because we know that the fat guy would resist the attempt to kill him and so it is just as likely that I would finish up on the track in his place. And there’s another problem: the researchers do not mention it, but to push someone over the parapet in such circumstances would certainly be an act of murder, whereas changing the points would be a more arguable case. For the subject of the experiment, who would be uncertain about all this, would that not be a source of confusion? And how could we be sure that the weight of the poor chap would be sufficient to stop the trolley? Such certainty is improbable in the extreme.

Which all indicates a general problem with such thought experiments: they are often by no means realistic scenarios, so the ability of the subject to give a response which says something real about his attitudes must be in question. He is asked to make a moral decision about a situation that he probably does not accept could exist in real life. That in turn means that his emotions will not really be engaged, and we do not make moral judgements without a very significant emotional contribution. Hence the difficulty sociopaths have in making moral decisions which we would recognise as such. I am not convinced, therefore, that the results of any of these experiments are reliable.

But the problem of difficult choices comes up in respect of self-driving cars and so, potentially, in the real world. If an accident were about to happen and someone or other would inevitably be killed as a result, what programming ought to be incorporated in the car’s computer to decide which of them it should be? Professor A C Grayling (a philosopher) says in an interview in the March 2016 edition of Prospect that he thinks the answer is simple. I’m not so sure. This is my letter to Prospect, which also sets out Grayling’s view:


Dear Sir

It seems to me that A C Grayling is wrong in sweeping aside the difficulty of robotic decision-making by driverless cars. He says in connection with the trolley problem: “I think...the calculation can be a very straightforward, simple utilitarian one: [the car] will always kill the one rather than the six... It will be programmed to make those sorts of choices. You see, the trolley problem is really only a problem for human beings...the one person might be your mum.”

What happens, though, if the alternative to killing 6 people who’ve decided, knowing how the car is programmed, to run across the road with no warning, is for the car 'deliberately' to veer off the road and kill someone innocently walking along the pavement instead? It rather reflects on the value of Professor Grayling's version of utilitarianism if it justifies such a scenario. So sure, it’s only a problem for human beings – but we’re the moral entities, not our cars.

In fact, the idea of utilitarianism as the ultimate decider of what we ‘ought’ to do is not very useful. It is one of those Humpty Dumpty words which can be used to justify whatever we want as our preferred outcome. I think, rather, that morality is the generally accepted opinion of how we ought to act in particular circumstances. Over the millennia that opinion has changed greatly for aspects of our behaviour great and small. I doubt, though, that many people would currently be happy with Professor Grayling’s suggestion that cars should be programmed on our behalf to be such blunt instruments.

Paul Buckingham


March 2016
