
9th April 2024

Long Read: Me, Myself and AI

Christopher Jackson

 

When I consider the question of AI as it relates to the matter of myself, I find I have to begin with a board game. After playing chess occasionally in childhood, in 2021 I began playing against bots. My return to low-level chess formed a crucial plank in my plan to give up Wordle – that very 2022 fad to which I became pointlessly addicted for a month or so. I decided to swap one addiction for a slightly more meaningful one and became a regular visitor to chess.com.

I play chess to a level which may almost be deemed infantile, full of incomprehensible failures and idiotic myopia (“Oh, I didn’t realise my Queen was there”) where humiliation constantly vies with a curious sense of pleasure. One feels faintly intellectual just by trying to play chess at all: there is a sense of a brain muscle which needs attention getting a minor massage.

As many will know, on chess.com, if you want to play humans, you can set yourself up with an ID and then play against another flesh-and-blood person roughly of your level. Interestingly, when you play a fellow human, you can sense their humanity even when there is a computer screen intervening. It was John Arlott who said in relation to the cricketer Jack Hobbs that what made him great was his ‘infallible sympathy for the bowled ball’. On chess.com something similar seems to happen: when you play a human being, each move feels personal in some way – and may even have something to do with a shared humanity. Sometimes a pause will occur and you’ll think: “They’re probably making a cup of tea.” Or when the moves come rapidly, you might think: “I see you’re going out for the night and you need this game finished in the next five minutes.”

But the problem with playing people is that they’re unreliable: either they’re busy, or else they can be petulant losers. Quite often, you’ll begin a game on chess.com with another human and find that the time between moves can be inexplicably long, and you’re not sure if they’ve left the game without bothering to resign. Sometimes, you begin to win and your opponent capitulates, depriving you of the real pleasure of chess: the unfolding of the logic of victory. In a just chess world, you begin to sense victory long before checkmate is confirmed, but it’s part of the sportsmanship of it all to let the drama unfold: the defeated are meant to fight, to let their opponent pin them to the wall. All this means that playing humans on chess.com is something of a lottery. It can be anything from fun to a mild disappointment to a notable waste of time.

Which is why, one evening, I thought I’d bypass all that nuisance by playing a bot, starting with Coach Danny – named after Danny Rensch, the chief chess officer of chess.com – which has a rating of 400, which is to say it’s programmed to play like a rank beginner. Victory against this bot comes relatively easily, and as I have gone on I’ve eventually been able to compete with intermediate players.

But it feels different. There is never a thought-delay between moves, and never any sense of anything shared: there is a gulf between you and your opponent, the gap between robot and human – between AI and me. One feels infinitely separate from one’s AI opponents: they never make a cup of tea while they consider the way in which you cunningly developed your rook. They never expose their queen accidentally because they’re rushing not to be late for the cinema.

The more I played bots on chess.com the more I came to feel my humanity. I felt how separate the bot was, isolated in its bottishness: a different kind of thing altogether. The division is this: I possess a self – or to use a now antiquated term, a soul – and the bots don’t.

I know now, in 2024, that those chess matches against a computer were harbingers. It’s no longer just my chess ability which is to be matched against AI – but my whole self, my soul: me.

 

The Looming Revolution

 

I have been reading William Hague’s excellent biography of the great campaigner and statesman William Wilberforce. Since it is a faithful reflection of the great man’s life, the book largely concerns Wilberforce’s extraordinary contribution to the abolition of the slave trade. Wilberforce’s campaigning gathered pace through the 1780s – and Hague brilliantly depicts the way in which momentum was building in 1788. In fact, the reader is left with the distinct impression that in any normal year, the slave trade would have been abolished in 1789. But of course 1789 was by no measure destined to be normal: it was to be the year of the French Revolution. War would have to be prioritised. The abolition of the slave trade would be delayed until 1807.

In early 2024, we are talking about any number of things: the return of Trump, interest rates, migration. But are we doing so in ludicrous ignorance that we are on the cusp of a similarly sweeping force?

When I meet the Secretary of State for Levelling Up, Housing and Communities Michael Gove, I mention the French Revolution analogy to him in relation to AI. Gove agrees, and explains the enormity of the situation to me: “We’ve had two Cabinet sessions about AI and the point was made at one of them that this was perhaps the most consequential issue with which we have to wrestle. AI has the potential to transform our world to an even greater extent than nuclear power and the hydrogen bomb.”

So what kind of transformation will this be? Gove isn’t sure. “It is difficult for us to envisage quite the scale of transformation that AI will generate in our societies. At different times, people predict the impact of technology – wiser minds than me will know you overestimate then underestimate what the impact will be. But AI will undoubtedly be impactful. It may further entrench and indeed exaggerate inequalities and of course it has the potential – unless you have proper alignment – for catastrophic outcomes.”

This echoes what we sometimes find in the discourse. All anyone can agree on is the enormity of the oncoming shift. There is a terrible sense of unpreparedness in all of us – because how do you prepare for something when you don’t know what it will be?

Some of this is to do with the structure of our economy and the question of where wealth, and therefore knowledge, happens to reside. It’s possible to intuit that AI’s true nature is hidden behind the veil of the gigantic and essentially impregnable tech companies where all this will unfold – and to a certain extent has already unfolded.

Unless you happen to have a management role at Google or Tesla, you are given scraps. In Walter Isaacson’s superb biography Elon Musk there is an astonishing moment when Musk is sensibly arguing with Google co-founder Larry Page that safeguards need to be built into AI. Page accuses Musk of being a ‘specist’ – someone in favour of his own species – and goes on to argue that humanity needs to be replaced and ought to accept its own demise. “I fucking like humanity, dude,” comes Musk’s reply. When Page seeks to buy DeepMind, Musk tries to put together financing to block the deal. “The future of AI should not be controlled by Larry,” Musk says, putting it rather mildly, and echoing my own thoughts.

What Musk ends up touting towards the end of Isaacson’s book is “maximum truth-seeking AI. It would care about understanding the universe, and that would probably lead it to want to preserve humanity, because we are an interesting part of the universe.”

This is already heady stuff and many people will wonder how, by dint of not being tech billionaires, they have accidentally outsourced their entire future to the likes of Page and Musk.

Whatever one thinks of the tech barons – and some of them are more likeable than others – it probably is the case that a grounding in history, philosophy and theology isn’t their strong suit. They also, I have decided, don’t particularly care about the economic future of journalists.

 

Journalism v the bots

 

Chess turned out to be merely the opening salvo in my experiences with AI. By 2023, with the release of ChatGPT, AI seemed to want to do my job for me – or to help me do my job, depending on which way you looked at it. Writing being so much more central to my life than chess, this in turn seemed to up the stakes between me and AI. It went much nearer the heart of me.

The self and the soul. These are not necessarily wheelhouse topics for an employability magazine, but since their meaning turns out to be decisive for the future of work and therefore of humanity, their discussion can hardly be avoided.

The argument may be lost on Larry Page – and to a certain extent on Musk – but it runs like this. Human beings have always had something rather unique and mysterious about them. There is a certain bestowed strangeness attached to us, on which basis we’re constructed. This is valuable – even infinitely so. We don’t know where we come from, or where we’re going, but we can’t quite escape the nagging suspicion that we matter. This could be called ‘specist’. But it is quite likely to answer to reality too – it is certainly a more respectable opinion than Page realises. I don’t think, for instance, that there is a major figure in history from Shakespeare to Bach and Michelangelo who would seriously dispute it. When one hears Page take the opposite view, one wonders if he seriously thinks the creation of Google entitles him to override the creator of Hamlet.

This in turn opens up onto the notion of sanctity. To put it mildly, our sense of ourselves has never been as straightforward as we would like. One trouble with human beings is that we tend to note our value when it’s absent – for instance, when there is some terrible atrocity in the context of war. When that happens we think: “Hang on, aren’t we meant to be more than that?” By expressing our dismay at such negative developments, we are indicating the value we ought to be ascribing to one another. This sense of ourselves is more likely, paradoxically, to be present in the aftermath of a shooting than it is in an AI lab at Google.

In the ordinary scheme of life, it resides in us quietly. Most of us, when we seek an ideal career, are also seeking to give expression to some sense of destiny burning within us, which we might think of as being an aspect of our self, or soul. We act as though we matter. Page would say we are delusional in this; I would say we are acting on a valid but admittedly mysterious logic which flows out of our existing at all in the first place. Sometimes, in workaholics, it might be that a large part of the self or soul manifests itself through work.

I’m sure a chess grandmaster feels that way about chess – and perhaps the chess world therefore knew early on, when Deep Blue beat Kasparov, what I have begun to see in 2023: namely the possibility that a computer can usurp you. Over the past twelve months, AI has begun to sneak nearer to my career – by turns like a sun about to rise, or a storm about to come.

For writers, writing usually does form a large part of the soul. It tends to manifest as an obsessive condition, and the stories are legion of writers failing to separate work from their personal lives, to the detriment of family.

The increasing popularity of ChatGPT – the acronym at the end stands, with an almost laughable ugliness, for Generative Pre-Trained Transformer – caused me to wonder how skilled the bot would be at writing a Finito World leader.

To that end, I issued the command: write me a leader for an employability magazine about the importance of skills in education. It replied:

 

Welcome to this edition of our Employability Magazine, where we explore the dynamic intersection of education and skills that propels individuals toward success in the ever-evolving landscape of the professional world. In an era marked by rapid technological advancements and global connectivity, the traditional paradigm of education is undergoing a profound transformation.

 

Faced with the ludicrous dullness of this, I experience a rise in self-confidence: the plight of the journalist may not be an entirely hopeless one, at least for the next few years. I might, however, feel differently if I were a copywriter.

The stakes are the same, if not higher, for songwriting, painting or poetry, where the link to the self is greater than in journalism – which is in some way to do with the conveying of factual information, and its organisation.

To take poetry as an example: its creation is much more mysterious than the creation of journalism – it comes from, to use Martin Amis’ notion, the back of the brain. Writing poetry can actually feel like a complete possession of body and soul in a way in which writing journalism does not.

It is not surprising therefore that AI and the arts have already clashed, most notably in the 2023 Writers Guild of America strike. The moment which most caught my eye occurred in Nick Cave’s indispensable Red Hand Files, in August 2023. As readers may know, the singer-songwriter famously takes questions from his fans on any number of matters. ChatGPT has come up a few times.

Cave takes a very anti-AI view. He views true creativity as being fraught with self-murder and challenge – its difficulty is, according to him, what renders it worthwhile. Art is hard work and meant to be so – it is a sort of voyage through suffering. Cave writes: “ChatGPT rejects any notions of creative struggle, that our endeavours animate and nurture our lives giving them depth and meaning. It rejects that there is a collective, essential and unconscious human spirit underpinning our existence, connecting us all through our mutual striving.” So what, for Cave, does ChatGPT ultimately amount to? “ChatGPT’s intent is to eliminate the process of creation and its attendant challenges, viewing it as nothing more than a time-wasting inconvenience that stands in the way of the commodity itself. Why strive?, it contends. Why bother with the artistic process and its accompanying trials? Why shouldn’t we make it faster and easier?”

And so what should we do about it? Cave’s response strikes me as very beautiful. “As humans, we so often feel helpless in our own smallness, yet still we find the resilience to do and make beautiful things, and this is where the meaning of life resides. Nature reminds us of this constantly. The world is often cast as a purely malignant place, but still the joy of creation exerts itself, and as the sun rises upon the struggle of the day, the Great Crested Grebe dances upon the water. It is our striving that becomes the very essence of meaning. This impulse – the creative dance – that is now being so cynically undermined, must be defended at all costs, and just as we would fight any existential evil, we should fight it tooth and nail, for we are fighting for the very soul of the world.”

Therefore, we’re in a mortal fight brought about primarily by economic circumstances. Technological advance throws to the top of the pile people like Page who can precipitate astonishing developments while knowing very little about the broader context of what they’re doing. They are immune to knock-on effects, because they are financially unassailable, and blissfully ignorant of what they’re doing, knowing no history or literature. It’s sometimes the case that to understand the world you have to go to the artists and the philosophers as well as the scientists and the tech entrepreneurs. The worrying thing is that the tech entrepreneurs don’t need to do that. They either want to plough on because they believe in progress for its own sake, or because they want to see the money rolling in – or usually both.

There are minor silver linings. It seems likely that Musk – the most safety-conscious of the AI moguls – will succeed in competing with ChatGPT and DeepMind, by dint of owning Twitter with its vast linguistic datasets and Tesla with its visual dataset of driver behaviour. With his experience at SpaceX and Neuralink of rapidly building competitive engineering companies, there seems a good chance that he will define the AI future to some extent. It may therefore be very fortunate that the most gifted entrepreneur-engineer of our time at least doesn’t take the Larry Page view of life. And it might not be a bad thing, either, if he were to listen to Nick Cave.

 

The Cry of the Theologians

 

He could also do a lot worse than listen to the great humanitarian and religious thinker Sir Terry Waite. Waite begins by explaining to me the irreversible nature of the progress the big tech companies have made. “I don’t think you can ever reverse these trends,” he tells me. “I think there is a trend towards artificial intelligence and that is now moving and it will not be stopped. What can be done is somehow to control it. It has very real dangers, of course, and I don’t fully understand the full implications of what AI is going to be.”

Waite tells me of a minor AI scrape he’s recently been in. “I have been suffering this morning from the effects of a very minor form of AI – namely, a parcel should have been delivered, it was delivered to the wrong address and every time I get close to speaking to any human being about that I get some automated response which gives me a series of answers to questions I pose which bear no relation to my problem. I need to know where it is and where it’s gone. They tell me it’s been delivered but it hasn’t been delivered here; it’s been delivered somewhere. That’s very minor and it’s not the full use of AI but it’s an indication of what is coming where we do not speak to another human being.”

In a sense Waite’s worries are rather similar to Cave’s – a world where human closeness, and even meaningful interaction, is radically curtailed. “Maybe this will be different in the future but a machine does not have the human qualities of compassion and love which is central to human existence and central to the teaching of Christ who constantly emphasised the values of compassion and love,” Waite continues.

Waite also explains how we have tended to express these values throughout history. “You can express compassion in a number of ways but in general it’s expressed with face-to-face meeting. There is a very good book recently written called The Matter with Things by a neuroscientist and writer called Iain McGilchrist. He speaks about the two sides of the brain: the left hemisphere and the right. The left hemisphere is the area of calculation and decision-making. The right is the area of imagination and spirituality and all that goes with that. He points out that much of modern society is now concentrated on the left hemisphere, especially in the teaching of mathematics and the teaching of science. These things are important – I don’t deny that – but they’re now being pursued to the detriment of history and the arts. We are being put out of balance.”

And what are the dangers of that? Waite is unequivocal. “Lack of holistic understanding does not make for rounded human beings.”

The way society appears to have gone is that left hemisphere-dominant people have created things which the right hemisphere-dominant would never have dreamed of, and monetised these creations aggressively, essentially marooning the right hemisphere-dominant – among whom I count myself – in a world they didn’t particularly want or need. Left-hemisphere people tend to make a big impact on reality, and their version of society has a momentum which can’t be realistically reversed. But they are not dreamers – and dreaming is important too.

The predicament may be more serious still. The right hemisphere seems to me to link far more reliably to human meaning, and human meaning is probably more important than analysis and measurement (left-hemisphere thinking). It could therefore be that we have created the entire future out of the wrong side of our brain. But a decision-maker like Larry Page, full of self-importance and unlimited money, would likely give short shrift to the notion that his worldview is false: he will not feel it to be so in his gilded boardroom and, again, has no particular reason to listen to you.

We therefore face a precarious situation where Waite and Cave are right but nobody in a position of decision-making power will listen to them. It is this which has led many to argue for government intervention.

So will AI affect everyone in the world similarly or will there be different outcomes on a country-by-country basis? The answer to that is that while individual governments will harness potential in different ways, the overall impact is likely to be pretty broad. And how is the UK placed? Michael Gove, as might be expected, is optimistic: “As nations go, we’re in a better position than most. But we may well find ourselves in less than a year’s time reframing many of the questions we’re discussing now.” Then he pauses and says, almost defiantly: “But human nature itself won’t change.”

Yes, but will it be eradicated? Here, the international landscape becomes important and cannot be realistically divorced from geopolitics. And geopolitics, if we’re honest, isn’t in a great place either.

 

Cleverly does it

 

When one reads the coverage, it can seem as though America is way ahead and without any serious competitors. That’s partly because we’re discussing the brands we all know and bankroll: Amazon, Facebook, Google, Tesla, Twitter, Microsoft.

But, worryingly, it is sometimes said that China is ahead – that’s the view of the philanthropist Mohamed Amersi, whose brilliant autobiography Why? has just been published. He tells me that China is in a far more advantageous place in terms of AI technology than many realise. “China is way ahead,” he tells me. “One indication of this is the number of patents filed – if you google patent filing you’ll see that China is way ahead of the US, and perhaps ahead of all other countries combined. It’s worth noting also that China has put together a code for regulating AI. This law came into force in August 2023 and was internationally ground-breaking. When you put those two things together, the consensus is that China is out in front, and by a long way.”

This seems sufficiently serious to be worth communicating to the then Foreign Secretary James Cleverly. When I ask Cleverly about China’s progress he puts his finger immediately on the key issue. “This is one of these classic foreign affairs quandaries,” he tells me. “China in so many areas is on a completely different page to the UK. This is partly to do with history and culture, but their attitude when it comes to the relationship between the state and the individual is completely different to the UK’s.” For Cleverly, this has clear ramifications: “Therefore, their use of AI, what they might utilise AI for, and what they are fearful about in terms of AI – all these things are likely to be different.”

But does he know how far advanced China is on its AI development compared to the West? “The truth is, in direct answer to your question, I don’t know whether they are ahead of the UK or ahead of the US or behind us, or both. They are a very closed society but the fact is they’re on the podium. They are one of the top AI countries in the world. They are a Top 10 country and therefore they are inevitably developing AI technology. It almost doesn’t matter exactly what the ordering is, they are – and will continue to be – a very serious global player with a fundamentally different set of values.”

For Cleverly this opens up onto the question of how global safety agreements should be structured going forwards. “If we try to build some kind of framework for safety and rules-for-the-road limitations, as we do for example with nuclear weapons technology and chemical weapons technology – as we are now beginning to do with the use of space hardware and cyber rules – countries are less likely to break the rules if you include them.” Cleverly therefore reasons that China ought to be included in negotiations. “If we don’t include China at all – if we create a western framework and consciously exclude them right at the start of the process – I believe they will feel liberated to do what they want to do and that may well not be in our best interests. We need to at least try to persuade them to sign up to some reasonable pragmatic behaviour around AI safety. There is no guarantee that they will play by the rules, but it gives us a better fighting chance.”

 

Seldon Says

 

As the AI debate continues, I realise that the effects for me are of less significance than the effects on the lives of my two young children, who are aged seven and three. It was Christopher Hitchens who said that once you have children your heart lies outside you – the self, the soul, appear to extend outwards in time and space beyond one’s own predicament into theirs. In my own case, the older already displays a passion for architecture which strikes his teachers as being outside the norm; he has already been given a prize in respect of this. But when he says: “I’m going to be an architect, Daddy,” I find myself quietly wondering what being an architect will mean, if it means anything, when he’s old enough to make a living doing it.

For thousands of years, being a father has meant handing on the world reasonably unspoiled to your children. We might try to improve it if we’re lucky, but we have tended to assume that it will have been broadly preserved. My father expected the world to be intact for me – and it was, at least until now. The fact that I am unsure of my ability to replicate what had seemed a fairly basic feat can sometimes cause me disquiet.

It is therefore probably true to say that I am invested in the idea that AI will actually not be the doom-laden scenario which many predict for it, but instead an unlooked-for boon, where Musk’s vision of human beings existing happily alongside robots is fulfilled – and my son gets to be an architect if he wants to be.

If I look for these apostles of positivity, I find one in Sir Anthony Seldon, who has written a book called The Fourth Education Revolution, which paints a rosier picture of AI’s impact on education.

Seldon was in this conversation pretty early. He tells me: “I started writing that book in 2017, seven years or so before AI was as much talked about as it is now. One of the governors at Wellington College, Tim Bunting, put me onto it. We talked about how it would change everything about education.”

So how does Seldon view AI in the education space? “It is the understanding that AI would come along at a time when we still have a fundamentally 19th century model of schools – and to some extent universities – where the lecturer and the teacher’s at the front, students sit passively, and everyone moves at the same speed. That whole image of white boards and so on is hardly different from the whole model which was absurdly redundant by the late 20th century. I felt that AI would be the dynamite that will finally blow it apart.”

And why is that? “That’s because it compensates for the deficiencies and endemic failures of the third revolution, which is that everyone has to move at the same pace, in every subject, regardless. Everyone has to work at the same time of day in the same fundamental way.”

For Seldon this flat-footedness has severe ramifications. “It makes social mobility static or declining,” he tells me. “Teacher workload gets worse. But above all the model assumes that the student should produce the right answer at the right time in the right way, and isn’t interested in what the student thinks.” Having two very individual children who, like me, don’t fit easily into boxes, I find this cheering to hear. The quality I have always most valued is curiosity, and if AI can accelerate that, while having built-in safeguards, then I can imagine a very bright future indeed.

Furthermore, the pre-AI education system has been bad, Seldon says, for our well-being. “Homogenisation is a key contributor to mental unwellness and devotes itself to a very narrow range of human intelligence. As a system, it is very good at helping people pass exams but not at helping them learn how to live, how to lead meaningful lives, be good physicists and good historians, or how to be good MPs, or even good parents.”

So AI could help with that? Seldon replies: “If it’s harnessed early enough, it can overcome all those things.”

That sounds promising although there remains the suspicion that Larry Page, in addition to not being a historian, artist or theologian, is also not an educationalist. Again the sense is of unchecked and rampant momentum, and worse, a momentum primarily driven by financial gain. Even so, I am also prepared to admit that Larry Page isn’t all powerful and that there is clear evidence here that good can come of AI too.

 

Message in a Bootle

 

Seldon’s arguments, if taken to their conclusion in other areas of life, could form the basis of an even sunnier set of predictions. If you want this full-scale optimism then you need to go to Roger Bootle, the economist and chair of Capital Economics, who has authored the excellent study The AI Economy: Work, Wealth and Welfare in the Robot Age (2019).

I ask him how he came to write the book. “There has been this massive obsession in the media and elsewhere about AI and robots and the conclusion was fundamentally negative,” he tells me. “Most people argued that this great technological advance was going to bring some form of impoverishment because we were all going to lose our jobs. Robots and AI were going to take over and I thought this was a pretty important subject so I got stuck into reading about it. Most of it turns out to have been written by non-economists claiming to be technical experts. I discovered that they’d got their economics upside down and I thought it was time an economist got to grips with the subject, which I did, and my take on it all was fundamentally optimistic – so my book really does stand out from most on the subject.”

So what are Bootle’s reasons for optimism about AI? “The first thing to appreciate is you have got to start with the history. Technological improvements have been going on for ages. Since the late 18th and mid-19th century, we have had a wave of technological developments and improvements which have knocked out various job skills – and in some cases industries – and others have sprung up to take their place and for me the question was always: ‘Why should this be any different from that?’”

And what did Bootle discover? “When you got down to the specifics, what the pessimists focused on was the idea that essentially there were going to be no areas where human beings could compete with AI and robots. Therefore, they leapt to the conclusion that this is different. I looked at that and I thought it was bunkum. For a start, the capability of AI and robots is massively exaggerated in the literature put out by the enthusiasts.”

I ask Bootle for examples. “Every time I go through an airport I am amused by the AI-enabled automatic passport machines, which are fine when they work – but beyond them there are rows and rows of border force officials guiding you. Robots have been working in industry for 40-50 years, but the idea of an omni-capable robot is a long way off because they don’t have sufficient manual dexterity. If you had a robot maid in your house, for instance, it couldn’t plump the cushions or tie shoelaces: there are umpteen things they just can’t do.”

But mightn’t that technology improve? “It might, but I think the most important thing is to realise what human beings are. I quote someone in the book who says that the human brain is a computer that happens to be made of meat. I think it’s fundamentally wrong. There is something about how the human mind works which is very different from the way that a computer works – especially the capability of making jumps which a computer can’t make. But on from that comes the most central thing: human beings are social creatures. They like to relate to other human beings; they are naturally suspicious of machines and sympathetic to humans.”

In this Bootle echoes Sir Terry Waite and Nick Cave – but his observation is a cause for hope, not despair. According to Bootle, there is therefore a whole range of areas where humans, contrary to the horrific predictions, are in fact indispensable. When I ask him to name one, he is swift. “Let’s take medicine, for instance, where there is room for great advances not only in record-keeping and so on but also in diagnosis. But some people have suggested this is going to lead to the redundancy of medical professionals, with surgeons replaced by robotic surgery. This is complete and utter nonsense. Apart from anything else, human beings need to interact with and trust other human beings, and so you are not going to go along with some sort of AI-disembodied voice telling you you’ve got to have your right leg chopped off and say: ‘Okay, fine, I’ll go ahead and do it’. We will need to have human beings intermediating between us and AI. Of course, at the moment, robotic surgery has brought some terrific advances, but what it hasn’t done is make surgeons redundant. Instead it has made surgery much more accurate, reliable and quicker, and has made it possible to operate at remote distances.”

This seems to hold out some hope for me to continue working as a journalist – and more importantly, for my son to one day practise as an architect. It also means that both my children are much less likely to do dull jobs. After naming checkout tills and passport control as jobs we need to get rid of, Bootle lands on translation services as a good example of the rate of progress. “When they first started they were completely useless. They are now not bad. It is still the case, it seems to me, that in the future there will be professional linguists who are ultra-skilled in the language, with its literary flourishes and its ambiguities and so on – you will want to employ those for specialist cases. But if you just ordinarily want to translate a letter that’s written to you in a foreign language, you just plug it into translate and most of the time it does a pretty reasonable job – and it will be getting better.”

So it’s those middle jobs which will be under threat? Bootle agrees. “Basic accounting, basic legal services. It is suggested that the development of AI and robots is going to substantially undermine the demand for labour from people at the bottom of the heap. I don’t think that’s right. I think it will undermine the demand for labour of people a bit above the bottom of the heap. A lot of manual tasks I don’t think will be replaced at all. It’s the clerical positions or the lower reaches of the semi-educated middle classes – people doing admin, clerical type jobs. I suspect a lot of those are going to be replaced.”

So overall, this will be good for productivity? “I see it as fundamentally something that is going to massively increase our productivity over time. Just like all the other things that have occurred since the industrial revolution, some people will lose their jobs,” Bootle explains.

 

The Great Reskilling

 

So what does this mean for people who are now in jobs which are potentially destined for the scrapheap? Will they need to reskill? “I think that’s right,” Bootle continues. “To some extent it has already been happening. There used to be banks of typists in most firms, but all that’s gone. Your personal assistant or secretary now does different sorts of jobs from what they used to do. They use the technology, but they have to develop other skills.”

So what will the impact be on the working week? Bootle explains: “Well if it is the case – as I argue it is – that this is going to make us a lot more productive then I think this is going to be one of the forces pushing for a shorter working week. In other words, if we are going to become a lot more productive, we can consume and produce a lot more based on an increase in productivity.”

So what does Bootle think will happen? “I think in general there will be a society-wide move towards shorter working hours – particularly, I suspect, a four-day week. Some individuals may do this more than others, but the average will be shorter working hours. If you look at the historical evidence, working hours have fallen dramatically since the industrial revolution, but also of course we have become an awful lot better off. We have trodden that middle way already.”

Bootle is also a fan, perhaps not surprisingly, of Seldon. “I think there is scope to use AI a lot in the education process and I personally think the old system of a lecturer standing up in front of a class of 30 or 40 or in some cases hundreds of people and he brings out his notes and they then write them down – that’s absurd.”

So how does Bootle see the education future? “The way I see education going is essentially along the lines of the tutorial system. You have more one-on-one sessions, which are about discussion and interaction, and seminars where you have got a small number of students discussing and interacting, so that the overall ratio of teachers to pupils or students may not change that much – but the ratio in individual teaching sessions will change dramatically. There will be a big increase in the ratio of teachers to students in those sessions, but there will be fewer hours of in-person teaching, because the students will be doing their other work remotely.”

This all sounds broadly positive, assuming those people in vulnerable jobs can be effectively reskilled – which arguably requires a programme of far greater reach and imagination than governments tend to be capable of nowadays.

One can imagine that the modern-day equivalent of Roosevelt’s New Deal would need to restructure the economy around the soon-to-be-unemployed clerical classes, and redirect them toward more fulfilling work.

 

That Uncertain Feeling

 

I am emotionally invested in the idea of AI as a positive – the life we are about to enter would be so much better if that were so. But while I can accept much of what Bootle and Seldon say, I find that I don’t trust the big tech companies, nor do I particularly trust government to regulate AI effectively. Furthermore, I have read compelling evidence suggesting that Bootle may be underestimating the way in which AI technology works: it isn’t something which is programmed, it is something which grows. And if it grows, then we have no more control over how it develops than we do over the direction of the branches of a tree.

In short, there is something spooky about the technology. I cannot escape the notion that AI will be both good and bad – as the Internet has been. This sentiment is echoed by the great filmmaker Guy Ritchie who tells me: “I think I’ve got a handle on it. It’s going to be brilliant – and in equal measure it’s going to be awful. I don’t think it’s any more complicated than that. In proportion to how brilliant it’s going to be, it’s going to be awful too. There seems to be a consistency to anything that’s great – it’s awful. I can’t see how the equation isn’t going to work like that because all those things do. With communication came great benefits and at the same time great deficits.”

This feels just, and Ritchie explains how that viewpoint can help anchor us when it comes to the advances coming down the track. “Everything is subject to these laws – I’m yet to see anything that isn’t. It’s the only way I can reconcile it. Otherwise it’s just a wild dog and that wild dog will end up consuming you.”

What Ritchie is saying is that AI, however major its advances, will ultimately have to conform to something like the pattern of good and evil which has, one way or another, been the basis of all major religions, and of many philosophical systems as far back as we can trace humanity. This is comforting – and it may well be true. If the universe is in fact forged somehow according to good and evil, then AI, also an aspect of the universe, may very likely be subservient to these things. That would mean that our struggle will go on. It is a titanic one, but it at least has the virtue of being somewhat familiar.

When all anyone can agree on is the enormity of it all, I find myself continually coming back to the question of what life really is. It seems to me pretty certain that it is in some sense sacred, as Cave and Waite say. The cultural conservative in me, who likes old things like cathedrals and poems written hundreds of years ago, wants to put the brakes on. But what Waite, Ritchie, Seldon and Bootle seem to agree on is this overarching need for the human. This is a good aspect of the debate: it keeps bringing me round to the fragility and generosity of the human experience.

As I have researched this article, I have been going back and forth to school to drop off and pick up the children. Each day at 9am and 3.30pm I am presented with a sea of humanity: children in their innate optimism; parents looking harassed by the pressure of the work-kids juggle; teachers, most of whom emanate a bright sense of vocation. When your daily life entails writing about robots, you see more sharply than before the beauty and the kindness in your fellow people. It might be that we’re on the cusp of some tidal wave, but I have sometimes had the image that we need to hold the line, here together, on the shore.

© 2024 Finito World - All Rights Reserved.