A documentary about Artificial Intelligence on TV the other night reminded me that, many years ago, we were invited to a party at a friend’s house in Birmingham. One of the other guests was a researcher in Artificial Intelligence at Birmingham University. He explained to me that they were at an impasse. AI would in principle be of great benefit in the future, but it was difficult to see how to progress. I asked him whether, at that moment, AI was a system capable of learning. He replied: “No”. Indeed, that was inevitably the case with the technology then available and the approach taken up until that point. To illustrate the situation, we need only look at the triumph of that era – the ability of a computer to play chess to a high level. That ability was based on a set of rules which had been developed by Alan Turing many years previously. Obviously, the number of moves possible at the start of the game is limited. If, though, you try to explore all the possibilities for the moves after that, the number of permutations soon becomes completely unmanageable. Experienced players know instinctively which possibilities to explore and which to ignore. For this reason, Alan Turing had proposed 10 ‘heuristics’ (rules of thumb) to give the computer a series of guidelines – e.g. if you can capture an opponent's piece without putting yourself in peril, then go ahead and do it. In the light of experience, the researchers added other heuristics and, with the increase in computing power available, a computer finally achieved Grand Master standard. But it was only a traditional computer, and so without the flexibility of a brain. In fact it wasn’t the computer which had learned from experience, but the researchers.
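To get a feel for the combinatorial explosion described above, here is a small illustrative sketch. The branching factor of roughly 30 legal moves per position is an assumed round figure for the purpose of the illustration, not something from the conversation at the party:

```python
# Rough illustration of why brute-force chess search is unmanageable.
# An average chess position is assumed to offer about 30 legal moves;
# the number of move sequences then grows exponentially with depth.

BRANCHING = 30  # assumed average number of legal moves per position

def sequences_at_depth(depth: int) -> int:
    """Number of move sequences a brute-force search would examine."""
    return BRANCHING ** depth

for depth in range(1, 9):
    print(f"depth {depth}: {sequences_at_depth(depth):,} sequences")
```

By depth 8 (four moves by each side) the count is already in the hundreds of billions, which is why heuristics – rules for discarding lines not worth exploring – were essential.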
Obviously, now, we have emerged from the impasse, because scientists have taken the next step – the neural network. They have simulated our brain’s neural structure in order to allow a computer to learn from first principles how something functions, or the essence of a collection of things. From the information furnished, networks are capable of deriving common factors, just as we and our brains do. They can then apply this knowledge to situations which were not included in the original examples. For example, given thousands of correctly labelled photos of many different varieties of dogs and cats, the network can then distinguish dogs from cats in other, unlabelled photos with a very high success rate. We have seen recently, though, that they can be put to far more useful ends. They can now identify cancer cells, or the changes at cellular level which will result in blindness if not diagnosed very early. Often, it is not obvious how the network has arrived at its conclusion. Thus, these networks give the impression of an actual intelligence, unlike the traditional computer, which we know to be incapable of freeing itself from the bounds of its prescriptive software. Although we are only at the beginning of this new approach, we are even now seeing notable results.
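As a toy analogue of the dog-and-cat example, a single artificial neuron – the smallest possible “network” – can be trained on labelled examples and then classify examples it has never seen. Everything here (the points, the labels, the learning rate) is invented purely for illustration:

```python
# A minimal sketch of "learning from examples": one artificial neuron
# (a perceptron) trained to separate two clusters of 2-D points, which
# stand in for the labelled dog and cat photos in the text.

def train(points, labels, epochs=200, lr=0.1):
    """Adjust the neuron's weights whenever it misclassifies an example."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x, y), target in zip(points, labels):
            pred = 1.0 if w[0] * x + w[1] * y + b > 0 else 0.0
            err = target - pred
            w[0] += lr * err * x
            w[1] += lr * err * y
            b += lr * err
    return w, b

def predict(w, b, p):
    return 1 if w[0] * p[0] + w[1] * p[1] + b > 0 else 0

# Training data: label 0 clusters around (1, 1), label 1 around (4, 4).
train_pts = [(1, 1), (1.5, 0.5), (0.5, 1.2), (4, 4), (4.5, 3.5), (3.8, 4.2)]
train_lbl = [0, 0, 0, 1, 1, 1]
w, b = train(train_pts, train_lbl)

# The trained neuron classifies points it never saw during training.
print(predict(w, b, (0.8, 0.9)))  # → 0 (near the first cluster)
print(predict(w, b, (4.2, 4.1)))  # → 1 (near the second cluster)
```

The neuron derives a “common factor” (a dividing line between the clusters) from the labelled examples and applies it to new cases – the same principle, in miniature, as the image-classifying networks described above.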
But as the network concept is based on the human brain and its network of neurons, I suppose there is the possibility that, one day, there will be an artificial intelligence capable of identifying not only what the dogs of the world have in common, but also our mental processes in the widest sense. We have already received a warning about what this may involve. “2001: A Space Odyssey”, by Arthur C Clarke, is the best-known prophecy. We know that HAL 9000, the computer, was unable to reconcile its duty to present accurate information to its questioners with its duty to assure the success of the mission by keeping a secret from the spacemen in its care. Then we have the Terminator, come from the future to assassinate someone in order to alter that future. It’s not obvious, however, that multiplying the number of ‘neurons’ in an artificial network up to the equivalent of the number of synapses in our brain would produce something which could function like a human brain. As I have pointed out in other essays, the ability to think rationally in itself gives us nothing. Mr Spock, a figure supposedly without emotion, would not be capable of deciding to act – he would have no aim in his life. Having an objective in life is what motivates us to action. To give us the range of objectives which we have, there are our emotions, our impulses. And these have been produced and moulded, when you think about it, by at least 4 million years of evolution.
So then what are the motivations which we ought to give to our super-intelligent neural networks? Ought they to have the entire range of human emotions? The proponents of the alternative to natural selection, ‘Intelligent Design’, have always had a considerable difficulty. It seems that the designer was not, after all, very intelligent – he has provided us with a psychology full of problems. One of these is that we do not perceive truth easily. We see the world around us through the filter of our previous education, beliefs and experiences. Should we give the intelligent computer the same distorted view of the world? What ought we to do with the very strong emotion of hatred? Presumably, for someone under attack, it would be an advantage to hate the aggressor: it would produce a more violent counter-attack. But how would we translate something useful in a tribal society for use by a computer? And then there is sadness. It’s useful for us, in the sense that it encourages us to avoid similar circumstances in the future. It’s a bit difficult, however, to see how it could be useful for a computer. Seeing that sadness affects us mainly after a loss, it wouldn’t be very useful unless there were actually a social environment for HAL to take part in. Of course, it is our social environment which probably explains the vast majority of our emotions, and so our complex and somewhat unreliable psychology. But such an environment allows us to achieve together what we cannot do alone. It is a work in progress, but gives us a greater possibility of feeding ourselves, defending ourselves against the perils which surround us and, in view of the benefit provided by sexual reproduction for evolutionary change, of passing on our genes to a new generation.
A computer has no need to find the means to feed itself (so long as it has a sufficiently stable electricity supply), to defend itself, or to procreate (in any way) – unless we decide to create beings which are entirely independent of us and therefore our likely competitors in a dystopian future. And, of course, emotions presuppose self-consciousness. We cannot explain how this functions in ourselves, or be certain of its existence in other animals. It would be a bit presumptuous, therefore, to imagine that we are ready and able to give it to our creation.
We see lots of progress towards computers with super-human abilities in limited fields, able to recognise key factors in data which we can easily miss. We are, though, very far from a computer which can rival a human being in our sort of wide-ranging intelligence. Every human absorbs a vast quantity of information as a baby and then as a child (and even, sometimes, as an adult), which in turn permits us to exercise this generalised form of intelligence. To give a massive artificial network the same information, to allow it to understand the world from first principles, would be no small thing. Even the choice of such information would be subject to the prejudices of the selector. Historians differ significantly in their interpretations of our past. Which view should be provided? Perhaps all of them? Which scientific theories should be fed into the network? All of them, even the ones which don’t seem to make sense? Giving it information together with an interpretation would be to impose even more of our prejudices upon it, and so not to benefit from what could be a new understanding of the world. Should it have access to all of the data from which conclusions have been drawn, so that it can re-check their validity? To go even further, however, and give it our emotions – or the subset of them which we thought appropriate to an intelligence we didn’t really understand – would be the height of folly. When, if ever, we truly understand ourselves, perhaps then will be the moment to start giving emotions to computers – HAL can wait.
14 September 2018