Self-driving cars and morality  
 
 
 


Forty million decisions were gathered online about the choices to be made in the event of brake failure by a self-driving car.  The academic responses to the research indicate a somewhat curious view of the whole concept of people deciding on their own moral codes.  This article also calls into question any idea of actually trying to apply a moral code in such circumstances.


As we know, religions require absolute obedience to the rules of their morality, but for the non-religious amongst us one might imagine that a different approach could be taken. It seems that this is not necessarily so. The issues involved are nicely illustrated by the sort of decision it seems we shall have to make in the future, as reported in an article in the New Scientist on 27th October 2018. It is all about a revised version of the well-known trolley problem: we are asked how we should program a self-driving car to react in the event of a brake failure.

The New Scientist article gives the example of an autonomous car travelling along a road when its brakes fail. Should it carry straight on and hit a pregnant woman, a doctor and a criminal on a pedestrian crossing, or crash into a barrier, so avoiding the people on the crossing but instead killing all the occupants of the self-driving car, a family of four?  This, the article tells us, is the kind of scenario included in the 'Moral Machine' experiment, an internet survey of millions of people in 233 countries and territories worldwide, the results of which were published on 24th October in the respected journal Nature. Participants were asked to consider different scenarios in which those saved by the car's decision might be, for example, obese or fit, young or old, pets or criminals or those with important jobs. In all, 40 million decisions in 10 languages were collected. So, an impressive gathering of data. Generally speaking, people preferred to save humans rather than animals, and young people rather than the elderly. Least favoured to be saved were cats, followed by criminals and then dogs. The results also demonstrated variations between different areas of the world, with a less marked preference in the East to save young people rather than the elderly. Decisions to save humans in preference to dogs or cats were less common than the average in Central and South America and in countries with a French influence, where women and fit people were more strongly preferred over others.

The researchers involved clearly thought that these results could form the basis for the decisions necessary to regulate the transport of the future. But the New Scientist article tells us that it's not that simple. Many researchers and ethicists have apparently said to the journalist writing the article that these results should not in fact be used to create policy or regulate the design of autonomous vehicles. "That would perpetuate cultural biases that might not reflect moral decisions. The fact that there are different cultural patterns should not surprise us, but it has nothing to do with whether something is right or wrong," Professor Peter Steeves, an ethicist from DePaul University in Chicago, is quoted as saying. "The instinct to save the lives of women or children, for example, is rooted in the patriarchal view that these groups have less autonomy and are therefore more worthy of being saved." Quite where the evidence for this assertion comes from I am not sure - we'll let that pass - but how extraordinary to dismiss the views of the masses as to the validity of their moral choices!

Yes, they (we) are all biased in our view of the world, but that's nothing new. All of our choices as to how we comport ourselves are ultimately based on our emotions, our desires, and so our biases. We are even very selective (biased) in our application of the golden rule (do unto others, etc.) - we seem to find it much easier to apply it to our close family and friends than to people we don't know so well, or at all. It is assumed, though, that its universal application would make the world a better place. Is the fact that we don't do so down to our fallen nature? Or is it perhaps that, despite living in large groups since prehistoric times, we haven't seen there to be a net benefit to us from its application outside our immediate circle? We have not usually been willing to follow the example of the ants and sacrifice ourselves for the good of the collective. And after all, we have no real idea what the consequence would be, in terms of human progress and so a potentially 'better' world, if we were all as nice as pie to each other. Such dismissal of people's understanding of what is moral seems instead to confirm my impression that, despite the diminution in the number of believers in the traditional religions in the West, there is still perceived to be a need for a set of rules that are somehow 'right' in an absolute sense. So then, it is not only religious people who seem to need an absolute morality imposed from above. Apparently, supposedly liberal thinkers have a similar need.

It is unsurprising that very many non-religious people still think that there are some fundamental, universal principles which ought to form a part of our lives. It has been part of received wisdom for as long as we have had recorded history that what is moral isn't a choice. It is, though, quite surprising that a similar view exists even amongst those most involved with ethics as academics. I have the impression that there is a desire to find a 'natural law' of morality, a bit like the natural laws of science, fixed and invariable. How to find it is obviously not clear, but it seems that we must have, above all, world-wide uniformity, a uniformity that complies with the norms of political correctness and avoids the influence of patriarchy. That these norms change constantly with time and place, however, indicates that there isn't any reason to believe in a universal or fixed morality, the natural law apparently being sought.

Surely it is not difficult to see that our morality is in fact a tacit agreement between members of different societies throughout the world and members of sub-sets of those societies as to what behaviour is acceptable at any given time and in any given circumstances? Our morality over the years has been a changing set of rules resulting from the circumstances in which we have found ourselves. As time has moved on and we have progressed towards a better standard of living, our moral codes have changed, we would say for the better, although I suspect that our descendants will think that our way of looking at things is actually quite primitive.

On the other hand, the fact that there are indeed similarities in the way we behave as societies indicates the probability of an evolutionary benefit from our behaviour that, for purely selfish reasons, we would be wise to take into account in deciding how to act. But as we have seen, acceptable behaviour varies according to the place in the world in which it takes place. So whilst we may not approve of how people in other places act, we have to accept that it is at least possible that the evolutionary benefit of the same action is not the same across the whole world. It can depend on the context, whether of place or indeed of time. So, in the case of self-driving cars, why not accept the democratic verdict of the people in each country of the world regarding the morality of programming that country's self-driving cars? Why does an academic (male) ethicist in Chicago, for example, have the right to impose his morality on the rest of the world? Is this not an example of patriarchy?

Now, having said all of this, it is quite clear that the garnering of people's opinions in this Moral Machine experiment was not exactly open to all. It would have been confined to those with access to the internet who happened both to spot the survey and to decide to respond to it. So then, it is not really very good guidance as to the morality of a particular area. It is a view as to what the more computer-savvy people of the areas with internet access think to be 'right'. Neither is it obvious to me that we need to, or could realistically, program cars to try to make such decisions. Even if we wanted our autonomous vehicles to follow our local moral codes as revealed by the survey, how could a car possibly work out exactly who was on the pedestrian crossing in order to make a calculation as to the relative merits of killing me and my family, or ploughing into the people crossing the road? Surely it must mean that my car would have to know all about not only me, but also the pedestrians in the line of fire and the occupants of all the other cars with which it might choose to collide. It would require us all to be instantly identifiable, whether as pregnant women, doctors, criminals, young or old. We would have to carry transponders or have chips inserted into us in order to reveal who we were and so what we were "worth" in moral terms. A society with no privacy, and all of our data held by our friend Mr Google?
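To make that point concrete, here is a minimal, purely hypothetical sketch (in Python) of the kind of calculation such a car would have to perform. None of the weights or attributes comes from the Moral Machine study or from any real vehicle; they are invented for illustration, and the point is how much personal data even this toy version presupposes.

    # Hypothetical sketch only: the weights and attributes below are invented
    # for illustration, not taken from the Moral Machine study or any real car.
    from dataclasses import dataclass

    @dataclass
    class Person:
        age: int
        is_pregnant: bool = False
        has_criminal_record: bool = False   # the car would need access to records
        occupation: str = "unknown"         # ...and to employment data

    def moral_weight(p: Person) -> float:
        """Assign an arbitrary, illustrative 'worth' to a person."""
        weight = 1.0
        if p.is_pregnant:
            weight += 1.0          # counting the unborn child?
        if p.age < 18:
            weight += 0.5          # preference for the young, as in the survey
        if p.has_criminal_record:
            weight -= 0.5          # penalising criminals, as respondents did
        if p.occupation == "doctor":
            weight += 0.3          # 'important jobs'
        return weight

    def choose_course(pedestrians: list[Person], occupants: list[Person]) -> str:
        """Decide whether to carry straight on or swerve into the barrier."""
        if sum(map(moral_weight, pedestrians)) > sum(map(moral_weight, occupants)):
            return "swerve into barrier"    # sacrifice the occupants
        return "carry straight on"          # sacrifice the pedestrians

    # The uncomfortable premise: the car already knows all of this about everyone.
    pedestrians = [Person(age=30, is_pregnant=True),
                   Person(age=45, occupation="doctor"),
                   Person(age=50, has_criminal_record=True)]
    occupants = [Person(age=40), Person(age=38), Person(age=10), Person(age=8)]
    print(choose_course(pedestrians, occupants))

Every field in that Person record is something the car would somehow have to know, in real time, about strangers on a crossing, which is precisely the loss of privacy described above.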

No, the sort of dystopia which would even begin to allow a self-driving car to make moral decisions on our behalf is not something I would wish to be a part of. Better perhaps to concentrate on having more reliable brakes – secondary systems which cut in when things go wrong to provide a fail-safe back-up. Now that's something which doesn't require a major moral debate or a complete loss of privacy. So then, a thank you to the researchers at the Moral Machine for enlightening us as to our attitudes, but I don't think that we have any obvious application for the results of their research as yet.
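By way of contrast, a fail-safe of that kind needs to know nothing about anyone. The rough sketch below, again in Python and with invented class and method names rather than anything drawn from a real vehicle platform, shows the shape of a supervisor that simply falls back to a secondary braking circuit the moment the primary one reports a fault.

    # Rough illustrative sketch of a redundant braking supervisor; the class and
    # method names are invented, not drawn from any real vehicle platform.
    class BrakeCircuit:
        def __init__(self, name: str):
            self.name = name
            self.healthy = True

        def apply(self, demand: float) -> bool:
            """Try to deliver the requested braking force; report success."""
            return self.healthy  # a real circuit would use sensor feedback

    class BrakeSupervisor:
        def __init__(self, primary: BrakeCircuit, secondary: BrakeCircuit):
            self.primary = primary
            self.secondary = secondary

        def brake(self, demand: float) -> str:
            # Prefer the primary circuit; fall back the instant it fails.
            if self.primary.apply(demand):
                return f"braking via {self.primary.name}"
            if self.secondary.apply(demand):
                return f"primary failed, braking via {self.secondary.name}"
            return "both circuits failed: emergency stop, hazard lights"

    supervisor = BrakeSupervisor(BrakeCircuit("hydraulic"), BrakeCircuit("electric"))
    supervisor.primary.healthy = False   # simulate the brake failure in the scenario
    print(supervisor.brake(demand=1.0))  # primary failed, braking via electric

Redundancy of this sort is an engineering question rather than a moral one, which is rather the point.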


Paul Buckingham

4th November 2018


 
 