The Chair of Capital Economics dives deeper into the real implications of Artificial Intelligence
It seems to me that the impact of Artificial Intelligence (AI) has been exaggerated in a range of ways. When it comes to the future of mentoring, surely the human aspect is what education is really about. I have benefitted from it myself on many occasions during my education and it is just irreplaceable – that certain spark of inspiration which gives you motivation and gets you to understand something.
In my book The AI Economy, I cite a number of examples of fuzzy logic which human beings cope with and which, to the best of my knowledge, artificial intelligence so far can't. I am thinking of instances where something is either logically ambiguous or logically misleading. We have a way of seeing what the meaning is, but even the most sophisticated computers don't. For instance, in the film Paddington, there's a wonderful bit where the bear goes on the tube and starts to get on the escalator.
He sees a sign that says 'Dogs Must Be Carried', so he races up the escalator in the wrong direction, runs out into the street and steals a dog so that he may comply with the instruction that dogs must be carried. It's absolutely wonderful. Human beings look at a sign like that, and they don't need to wonder for very long about the fuzzy logic. They understand what it means: if you have got a dog, it must be carried. I suspect a computer would react much as Paddington did.
Another one that I like very much is the sign in a lift: "Do not use in case of fire." What it actually means is: if there is a fire, do not use the lift. But that's not what it literally says. There is a whole series of cases where the human mind is not just a computer based on logic, and it's very difficult to replicate that sort of thing.
In The AI Economy, I also discuss areas where this whole subject spills over into certain sorts of philosophical or even theological topics. These are notoriously difficult to get into. I have a chapter called 'Epilogue' at the end where I touch on issues regarding the nature of the human mind. I refer to the great mathematical physicist Roger Penrose, who recently won the Nobel Prize and who is still doing work in this area in his late eighties. His big contention is that there is something very special about the way the human mind works which a computer can never replicate.
Sacredness is a very important word. Penrose says that he has come to think of the universe as being like a three-legged stool. One of the legs is physical reality, the sort of stuff physicists study. The second leg is mathematical and logical truths, which are eternally just there. The third leg is consciousness, and he says that human beings instinctively know this, but science knows very little about this third leg and is loath to recognise its importance.
It's all a big challenge to the AI geeks, as I call them. It's bad enough what they have to say about economics, but what they say about these philosophical questions is just extraordinary. On the one hand, the AI geeks overestimate the bad side of all this; on the other, they underestimate the good side for human beings when it comes to what can actually be done.
For instance, there’s a section in the book about driverless cars. I am a sceptic on this question, but I think there are going to be more and more uses for driverless vehicles: we’ve had driverless shuttles at airports for goodness knows how long. Even so, what I have great difficulty in imagining is driverless cars in city centres without the complete remodelling of the nature of cities, though the real fanatics argue that’s exactly what should happen.
It should be perfectly feasible to have driverless vehicles, whether lorries or cars, working pretty successfully on motorways. There the solution might effectively be a bit like railways: you haven't got rails guiding the vehicles, but you have got something else operating according to the same sort of principle.
The difficulty comes with the unpredictability of what happens in urban centres: a child rushes out in front of the vehicle, a cyclist veers across its path, or some sort of dreadful weather impedes the functioning of the vehicle. I find it very difficult to imagine a driverless vehicle being able to cope with all those things, and, indeed, the tests that have been done so far bear that out. Given that, it's extraordinary to follow the predictions of the AI professionals: for about ten years at least, we have supposedly been on the verge of all being driven around in driverless vehicles.
Of course, it has not happened, and the tests that have taken place have been in places like Arizona, with clear bright days and uncrowded roads, not in London on a winter afternoon in February. I see a sort of middle-of-the-road solution to all of this, whereby there could be quite a lot of driverless vehicles in certain environments. And where that is possible, there will be huge benefits.
Another particular example is agriculture. Where you have got the defined space of huge agricultural fields, there's no reason why you can't have driverless tractors and other agricultural vehicles, and it seems to me that would be brilliant from all sorts of points of view. In addition, having the tube network run completely without drivers would be a marvellous idea: it would mean big savings.
The response of the driverless vehicle enthusiasts to all this is quite interesting. First of all, they say it's all a matter of time until we develop the software that's going to deal with all that. Eventually, after so many failings, the current line is that the vehicles can cope but that cities will need to be remodelled: essentially all city centres would be redesigned so that there aren't entry points for cyclists and children running out. In other words, the roads in cities become the equivalent of the motorways I was talking about earlier. This is sheer madness. The whole point of a city is the interaction between people, vehicles and cyclists.
Besides, the enthusiasts underestimate the spiritual and emotional implications for human beings living in those cities of such a vast restructuring, and they also don't seem to take into account the economic cost. Even if all this is technically feasible, refashioning cities to make these vehicles function would cost well beyond billions. Aviation is another example where the AI geeks overestimate the likely impact of technology. For example, I don't think many passengers or would-be passengers would be prepared to get onto a plane which didn't have a human pilot up front, even though we know most of the flying is done by computer: they will still want to feel that there is a human being there.
Similarly, there are some examples of captainless, or pilotless, boats. Again, one can imagine this working across quite small and narrowly defined stretches of water: a ferry across a fjord in Norway, or something like that. I can also imagine quite a few examples of that in Britain, such as the area around Studland in Dorset. That's had a ferry going across the mouth of Poole Harbour since I don't know when, and to the best of my knowledge it's still driven by a human.
I can imagine that being done by some form of artificial intelligence, but I can't really imagine ocean-going ships without any human beings on board, even though quite a lot of the steering and management of the ship is done by computer on the big cargo vessels. In truth, I think human beings will always have a need for other human beings.