Artificial Intelligence - une folie de grandeur?

It may not be obvious that there is a direct link between the greenhouse effect and artificial intelligence. Last week, however, ‘Le Monde’ published an analysis of the exponentially increasing amount of power being used to run the world’s computers. This has been talked about in the past in connection with crypto-mining, which was recently estimated to consume as much energy as Norway.

But the article makes clear that crypto-mining is only a subset of total computer power demand, with all that that means for the production of greenhouse gases. The promise has always been that the use of computers would reduce energy use overall: they would provide efficiencies which would more than outweigh the power consumed by the computers themselves. Le Monde’s reporters point to research showing this to be untrue. And so, as part of our attempts to limit global warming, we need to consider how we use our computers just as carefully as we consider what sort of transport to use.

Artificial intelligence is, by its nature, a very big user of computer power. Indeed it’s only now that we’re actually in a position to run computer systems big enough (and power-hungry enough) to do the vast number of computations required for AI. Which means, in turn, that if we want to progress down that route, then we shall have to be very sure that we are not thereby causing ourselves even greater problems.

A webinar organised by Warwick University last week asked the question “AI – who will win the race?” It did not, though, take us very far towards an answer. I’m not sure that we were even given guidance as to where the race was taking place or who the competitors were, and there was certainly no mention of global warming. ChatGPT, however, came up, despite the reluctance of one of the talking heads to mention it.

Actually, I can see why there was such reluctance. As we have all become aware, ChatGPT is a Large Language Model (LLM). It takes the string of words it is given as a prompt and then, from the statistical patterns extracted from its immense store of training text, decides which word would normally come next, then which word should follow that, and so on. It is not designed to produce text which is accurate, but text which is ‘plausible’.
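The word-by-word process described above can be illustrated with a toy sketch. This is emphatically not ChatGPT’s actual mechanism (which uses a neural network trained on vast amounts of text, not a simple word-pair table); it is merely a minimal caricature showing how text can be generated purely from the statistics of which word most often followed which, with no notion of truth at all:

```python
from collections import Counter, defaultdict

# Toy "language model": count, for each word in a tiny corpus, which
# words followed it and how often. Real LLMs are vastly more
# sophisticated, but the generating principle - pick a statistically
# likely next word, then repeat - is the one described above.
corpus = ("the cat sat on the mat the dog sat on the rug "
          "the cat chased the dog").split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def most_likely_next(word):
    """Return the most common successor of `word` in the corpus."""
    return follows[word].most_common(1)[0][0]

def generate(start, length=5):
    """Chain `length` most-likely next words after `start`."""
    words = [start]
    for _ in range(length):
        words.append(most_likely_next(words[-1]))
    return " ".join(words)

print(generate("the"))
```

The output is grammatical-looking but says nothing true or false about the world - which is precisely the point: plausibility without accuracy.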

There is no solving of problems beyond the statistical assessment of which word would most likely come next. I’m not therefore convinced that ChatGPT is what I would think of as AI.

The accuracy of the output is far from reliable. In fact, I would argue that the best description of ChatGPT is GIGO (garbage in, garbage out). As its output is derived probabilistically from what appears to be the internet at large, its input data will range from accurate statements of fact to downright lies; it will reflect all the biases to which we ourselves are subject.

ChatGPT has no means of validating that data. It will therefore necessarily produce statements as reliable as those emitted by the Donald or Boris. Obviously, by chance, ChatGPT may give us the right answer. It may even give us the right answer more often than not, but we will not be able to know (without checking) which of its statements we can actually rely on. So then: GIGO.

How indeed could ‘validation’ be achieved? For certain uncontroversial things this may be possible (pace Popper), but once we enter the realm of competing opinions, all that an LLM such as ChatGPT can do is paraphrase (in a rather banal way) the various views which the protagonists have expressed.

To employ humans to tell it when it’s going wrong, as some developers are doing, seems like a never-ending and very controversial job. And to limit its access to supposedly reliable parts of the internet is a further recipe for never-ending dispute.

The intelligence which underpins science and engineering (and quite a lot else) depends upon seeing previously unmade links and appreciating their importance - Newton’s “standing on the shoulders of giants”. This, however, is no part of ChatGPT’s brief.

Are we then supposed to sift through its output in the hope, ourselves, of spotting something novel – perhaps a new theory of the meaning of life?

So in what way is it intelligent? What is its significance? It can paraphrase what others have said. But it is also a salutary warning of the need to check what you’re told. Which means that ChatGPT is, by virtue of its very shortcomings, capable of being dangerous: simply because of people’s unthinking acceptance of what it tells them.

AI, in its generally accepted form, is rather different from an LLM, typically being a pattern-recognition algorithm operating in a particular niche with a narrowly defined aim. Doctors can diagnose cancer from x-rays. AI can do the same, with equal or greater accuracy. It does not do so by ‘looking’ at the x-ray, but by examining all the data points making up the training images and trying to correlate that data with the diagnoses already arrived at by the human doctors. Sometimes the AI finds links which we do not see, and so can be more accurate than the human expert when subsequently examining x-rays of actual patients. In this sense, such systems have a form of intelligence and the ability to check the validity of their results. Some can even reprogram themselves in order to achieve better, more accurate results in whatever niche they are working.
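The idea of correlating training data with expert-supplied labels can be sketched very simply. The data below is entirely made up and the classifier (a nearest-centroid rule) is far cruder than anything used in real medical imaging; the sketch only illustrates the principle that the machine learns from examples already labelled by human experts, rather than by ‘looking’ as we do:

```python
# Minimal sketch of niche pattern recognition (hypothetical data, not a
# real diagnostic system). Each "image" is reduced to a couple of
# numeric features; the classifier learns centroids of the examples the
# experts labelled, then assigns new cases to the nearest centroid.

def centroid(vectors):
    """Average each feature across a set of feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def train(examples):
    """examples: list of (features, label) pairs labelled by experts."""
    by_label = {}
    for features, label in examples:
        by_label.setdefault(label, []).append(features)
    return {label: centroid(vs) for label, vs in by_label.items()}

def classify(model, features):
    """Assign the label whose centroid is nearest (squared distance)."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(features, c))
    return min(model, key=lambda label: dist(model[label]))

# Hypothetical training set: (feature vector, expert's diagnosis).
training = [
    ([0.9, 0.8], "abnormal"), ([0.8, 0.9], "abnormal"),
    ([0.1, 0.2], "normal"),   ([0.2, 0.1], "normal"),
]
model = train(training)
print(classify(model, [0.85, 0.75]))
```

Note that nothing here involves understanding what an x-ray is: the system simply exploits statistical regularities in the labelled data, which is both the source of its accuracy and the limit of its ‘intelligence’.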

But currently we do not have AI with what the experts would call ‘general intelligence’ - our ability to solve problems in a wide range of contexts from first principles. My question for the webinar (not answered) made the point that our human general intelligence comes from millions of years of evolutionary pressure to find solutions to problems which exist in the world with which we interact. And this is all driven by our emotions and aims, including our wish to survive. For AI even to take its first steps towards general intelligence, it would need a set of aims and drives (‘emotions’): intelligence is completely incapable of doing anything ‘intelligent’ without an actual purpose. It does not exist in a purposeless vacuum.

If we wanted it to be able to use general intelligence to solve problems in a way useful to us, then surely we would have to provide it with a comprehensive knowledge of, and interaction with, the world (somewhat difficult), together with an understanding of our emotions and drives - an understanding which even we don’t actually have, and which in any event varies from person to person. And given that we use our limited knowledge of others’ psychology to manipulate them, might AI (even accidentally) do the same?

So where does this leave us? We have LLMs which, on a limited scale, can usefully engage in simple drafting, as is now done (successfully) by Octopus Energy to help its staff respond to consumer queries. Otherwise, however, they are just as likely to produce lies as truth.

With AI we see something which consumes an increasing amount of our energy in vast warehouses of computers, with the resulting need for even more energy to cool them. Which means that we need to make intelligent choices as to how best to use it.

We could throw resources at the pursuit of general intelligence, but achieving it would require such speed of computation, such access to knowledge and such (very) careful design of computer-based emotions that we are simply not in a position to do it at all adequately.

There have been calls for regulation and I think that is justified. What we don’t want is individuals or their companies, as rich as Croesus, putting resources into competing versions of general intelligence models which would at best be half-baked.

What we need is ‘encouragement’ of niche AI: algorithms designed to solve the individual problems which require their particular skills, but under the watchful eye of well-resourced regulators. What we don’t want is a continuation of the current wild west in computer algorithm development. Is it not better simply to continue with AI as a specialist pattern-recognition tool, rather than try to make it emulate human intelligence - in a type of folie de grandeur?

Paul Buckingham

9 May 2023
