I have spent many days in rooms with groups of men (always men) anxious about their future income, where I advised them on how much to ask their companies for. Most of my clients as a scheme actuary were trustees of pension schemes sponsored by companies which had seen better days and were struggling to make the payments necessary to secure the benefits already promised, let alone those to come. One by one, those schemes stopped offering future benefits and just concentrated on meeting the bill for benefits already promised. If an opportunity came to buy those benefits out with an insurance company (which normally cost quite a bit more than the kind of “technical provisions” target the Pensions Regulator would accept), I lobbied hard to make it happen. In many cases, though, we were too late: the company went bust and the scheme moved into the Pension Protection Fund instead. That was the life of a pensions actuary in the West Midlands in the noughties. I was often “Mr Good News” in those meetings, the ironic nickname for the man constantly moving the goalposts for how much money the scheme needed to meet those benefit bills. I saw my role as pushing the companies towards buy out funding if at all possible. None of the schemes I advised had a company behind them which could sustain ongoing pension costs long term. I would listen to the wishful thinking and the corporate optimism, smile and push for the “realistic” option of working towards buy out.

Then I went to work at a university, and found myself, for the first time since 2003, a member of an open defined benefit pension scheme. It was (and still is) a generous scheme, but was constantly complained about by the university lecturers who comprised most of its membership. I didn’t see any way that it was affordable for employers which seemed to struggle to employ enough lecturers, were very reluctant to award anything other than fixed term contracts, and had an almost feudal relationship with their PhD students and post docs. Staff went on strike about plans to close the scheme to future accrual and replace it with the most generous money purchase scheme I had ever seen. I demurred and wrote an article called Why I Won’t Strike. I watched in wonder when even actuarial lecturers at other universities enthusiastically supported the strike. However, over 10 years later, that scheme – the UK’s biggest – is still open. And I gained personally from continued active membership until 2024.

Now don’t get me wrong, I still think the UK university sector is wrong to maintain, uniquely amongst its peers, a defined benefit scheme. The funding requirement has been inflated by continued accrual over the last 8 years, and so has the risk that it will spike at just the time when it is least affordable, a time which may soon be approaching with 45% of universities already reporting deficits. However the strike demonstrated how important the pension scheme was to staff, something the constant grumbling before the strike had led university managers to doubt. And, once the decision had been made to keep the scheme open to future accrual, I had no more to add as an actuary. Other actuaries had the responsibility for advising on funding, in fact quite a lot of others, as the UCU was getting its own actuarial advice alongside that which the USS was getting, but my involvement was now just that of a member, albeit one with a heightened awareness of the risks the employers were taking.

The reason I bring this up is that I detected something of my lonely noughties position amongst the group of actuaries involved in the latest joint report from the Institute and Faculty of Actuaries and the University of Exeter about the fight to maintain planetary climate solvency.

It very neatly sets out the problem: the whole system of climate modelling and policy recommendations to date has almost certainly been underestimating how much warming is likely to result from a given increase in the level of carbon dioxide in the atmosphere. Therefore all the “carbon budgets” (the amount we can emit before we hit particular temperature levels) have been assumed to be higher than they actually are, and estimates for when we exhaust them have given us longer than we actually have. This is due to the masking effect of particulate pollution in the air, which has resulted in around 0.5C less warming than we would otherwise have had by now. However, efforts to remove sulphur from oil and coal fuels (efforts which are themselves important for human health) have acted to reduce this aerosol cooling effect. The goalposts have moved.
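To make the arithmetic behind that concrete, here is a minimal sketch of the effect, with round illustrative numbers of my own choosing rather than anything taken from the report: if roughly 0.5C of warming is currently being masked by aerosols, the headroom to a temperature target, and hence the remaining carbon budget, shrinks sharply once that masking fades.

```python
# Illustrative only: all numbers are round figures I have assumed, not figures from the report.
TCRE = 0.45      # assumed warming (C) per 1000 GtCO2 emitted, a commonly cited central estimate
target = 2.0     # temperature target (C above pre-industrial)
observed = 1.3   # assumed warming to date (C)
masked = 0.5     # warming currently hidden by aerosol cooling, per the report's argument

def budget(headroom_c: float) -> float:
    """Convert remaining temperature headroom into an emissions budget (GtCO2)."""
    return max(headroom_c, 0.0) / TCRE * 1000

naive = budget(target - observed)              # budget if the masking is ignored
adjusted = budget(target - observed - masked)  # budget once the masked warming is counted

print(f"Naive budget:    {naive:,.0f} GtCO2")   # roughly 1,550 GtCO2 on these assumptions
print(f"Adjusted budget: {adjusted:,.0f} GtCO2")  # roughly 440 GtCO2 on these assumptions
```

On those (purely illustrative) assumptions the budget falls by roughly 70%, which is the sense in which the goalposts have moved.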

An additional reference I would add to the excellent references in the report is Hansen’s Seeing the Forest for the Trees, which concisely summarises all the evidence to suggest the generally accepted range for climate sensitivity is too low.

So far, so “Mr Good News”. And for those who say this is not something actuaries should be doing because they are not climate experts, this is exactly what actuaries have always done. We started the profession by advising on the intersection between money and mortality, despite not being experts in the conditions which affected either the buying power of money or people’s mortality. We could however use statistics to indicate how things were likely to go in general, and early instances of governments wasting quite a lot of money for want of a steer from people who understood statistics got us that gig, and a succession of related gigs in the years that followed.

The difficult bit is always deciding what course of action you want to encourage once you have done the analysis. This was much easier in pensions, as there was a regulatory framework to work to. It is much harder when, as in this case, it involves proposing changes in behaviour which are ingrained into our societies. If university lecturers can set themselves against the clear long-term financial interests of their employers and push for something which makes their individual employers less secure, then how much more will the general public resist change when they can see no good reason for it?

And in this regard it feels like a report mostly focused on the finance industry. The analogies it makes with the 2008 financial crash, the constant comparisons with the solvency regulatory regimes of insurers in particular, and even the framing of the need to mitigate climate change in order to support economic growth are all couched in terms familiar to people working in the finance sector. This has, perhaps predictably, meant that the press coverage to date has mostly been concentrated in the pension, insurance and investment areas:

However in the case of the 2008 crash, the causes could be addressed by restricting practices amongst the financial institutions which had just been bailed out and were therefore in no position to argue. Many of those restrictions have been loosened since, and I think many amongst the general public would question whether the decision to bail out the banks and impose austerity on everyone else is really a model to follow for other crises.

The next stage will therefore need to involve breaking out of the finance sector to communicate the message more widely, perhaps focusing on the first point in the proposed Recovery Plan: developing a different mindset. As the report says:

This challenge demands a shift in perspective, recognising that humanity is not separate from nature but embedded in it, reliant on it and, furthermore, now required to actively steward the Earth system.
To maintain Planetary Solvency, we need to put in place mechanisms to ensure our social, economic, and political systems respect the planet’s biophysical limits, thus preserving or restoring sufficient natural capital for future generations to continue receiving ecosystem services…

…The prevailing economic system is a risk driver and requires reform, as economic dependency on nature is unrecognised in dominant economic theory which incorrectly assumes that natural capital is substitutable by manufactured capital. A particular barrier to climate action has been lobbying from incumbents and misinformation which has contributed to slower than required policy implementation.

By which I assume they mean this type of lobbying:

And this is where it gets very difficult, because actuaries really do not have anything to add at this point. We are just citizens with no particular expertise about how to proceed, just a heightened awareness of the dangers we are facing if we don’t act.

But we can also, as the report does, point out that we still have agency:

Although this is daunting, it means we have agency – we can choose to manage human activity to minimise the risk of societal disruption from the loss of critical support services from nature.

This point chimes with something else I have been reading recently (and which I will be writing more about in the coming weeks): Samuel Miller McDonald’s Progress. As he says “never before have so many lives, human and otherwise, depended on the decisions of human beings in this moment of history”. You may argue the toss on that with me, which is fine, but, in view of the other things you may be scrolling through either side of reading this, how about this for a paragraph putting the whole question of when to change how we do things in context:

We are caught in a difficult trap. If everything that is familiar is torn down and all the structures that govern our day-to-day disintegrated, we risk terrible disorder. We court famines and wars. We invite power vacuums to be filled by even more brutal psychopaths than those who haunt the halls of power now. But if we don’t, if we continue on the current path and simply follow inertia, there is a good chance that the outcome will be far worse than the disruption of upending everything today. Maintaining status-quo trajectories in carbon emissions, habitat destruction and pollution, there is a high likelihood of collapse in the existing structure anyway. It will just occur under far worse ecological conditions than if it were to happen sooner, in a more controlled way. At least, that is what all the best science suggests. To believe otherwise requires rejecting science and knowledge itself, which some find to be a worthwhile trade-off. But reality can only be denied for so long. Dream at night we may, the day will ensnare us anyway.

One thing I never did in one of those rooms full of anxious men was to stand up and loudly denounce the pensions system we were all working within. Actuaries do not behave like that generally. However we have a senior group of actuaries, with the endorsement of their profession, publishing a report that says things like this (bold emphasis added by me):

Planetary Solvency is threatened and a recovery plan is needed: a fundamental, policy-led change of direction, informed by realistic risk assessments that recognise our current market-led approach is failing, accompanied by an action plan that considers broad, radical and effective options.

This is not a normal situation. We should act accordingly.

Source: https://xkcd.com/2415/; licence: https://creativecommons.org/licenses/by-nc/2.5/

Happy new year all! New year, new banner, courtesy of my brilliant daughter who presented me with a plausible 3-D model of my very primitive cartoon of a reverse-centaur over Christmas. And I thought I would kick off with a relatively uncontentious subject: examinations!

“Back to normal!” That was the cry throughout education when the pandemic had finally ended enough for us to start cramming students into rooms again. The universities had all leveraged themselves to the maximum, and perhaps beyond, to add to the built estate, so as to entice students in both the overseas and the uncapped domestic markets to their campuses, and one by-product of this was that they had plenty of potential examination halls. So let’s get away from all of that electronic remote nonsense and get everyone in a room together where you can keep an eye on them and stop them cheating. This united the purists who yearned for the days when 10% of the cohort turned up for elite education via chalk and talk rather than the 50% we have today, senior managers needing to justify the size of the built estate, and politicians who kept referring to traditional exams in an exam hall as the “gold standard”.

So, in a time when students have access to information, tools, how-to videos of everything imaginable, the entire output of the greatest minds of thousands of years of human history, as well as many of the less than great minds, in short anything which has ever caught anyone’s attention and been committed to some form of media: in this of all times, we want to sort the students into categories for the existing job market based on how they answer academic questions about what they can remember unaided about the content of their lecture courses and reading lists with a biro on a pad of paper perched precariously on a tiny wooden table surrounded by hundreds of other similar scribblers, for a set period of time, as minders wander the floors like Victorian factory owners.

And for institutions that thought the technology we fast-tracked for education delivery and assessment in the pandemic would surely be part of education’s future? Or perhaps they just can’t afford to borrow half a billion, or don’t have the land available to construct more cathedrals of glass and brick to house more examination halls? Simple! We just create the conditions for that gold standard examination right there in the student’s own bedroom or at the company they work for!

There are 54 pages to the Institute and Faculty of Actuaries’ (IFoA’s) guidance for remotely invigilated candidates. It covers everything from the minimum specification of equipment you need, including the video camera to watch your every movement and the microphone to pick up every sound you make, to the proprietary spying software (called “Guardian Browser”) you will need to download onto your own computer, how to prove who you are to the system, what you are allowed to have in your bedroom with you and even how you need to sit for the duration of the exam (with a maximum of two 5 minute breaks) to ensure the system has sufficient visibility of you at all times:

These closed book remote arrangements replaced the previous open book online exams which most institutions operated during the pandemic. The reason given was that the exam results shot up so much that widespread cheating was suspected and the integrity of the qualifications was at risk. The IFoA’s latest assessment regulations can be found here.

The belief in examinations is very widespread. A couple of months ago I was discussing with a secondary school business studies teacher the teacher assessments which briefly replaced exams during the pandemic. He took great pride in the fact that he based his assessments solely on mock results, ie an assessment carried out before all of the syllabus had been covered and when students were unaware it would be the final assessment. Yet in his mind this was still more “objective” than any opinion he might have formed of his own students.

If a large language model can perform enormously better in an examination than your students can without it, what it actually demonstrates is that the traditional examination is woefully unprepared for the future. As Carlo Iacono puts it:

The machines learned from us.

They learned what we actually valued and it turned out to be different from what we said we valued.

We said we valued originality. We rewarded conformity to genre. We said we valued depth. We measured surface features. We said we valued critical thinking. We gave higher marks to confident assertion than to honest uncertainty.

So now the machines produce what the world trained them to produce: fluent, confident, passable output that fits the shapes we reward.

And we’re horrified. Not because they stole something from us. Because they showed us what the systems were selecting for all along.

The scandal isn’t that a model can imitate student writing. The scandal is that we built an educational and professional culture where imitation passes as competence, and then acted shocked when a machine learned to imitate faster.

We trained the incentives. We trained the rubrics. We trained the career ladders.

The pattern recognition which gets you through most formal examinations is just too cheap and easy to automate now. It is no longer a useful skill, even by proxy. It might as well be Hogwarts’ sorting hat for all the use it is in a post-scarcity education world. If the machines have worked out how to unlock the elaborate captcha system we have placed around our gold standard assessments, an arms race of security measures protecting a range of tests which look increasingly narrow compared to the capabilities which matter does not seem like the way to go.

What we are doing instead is identifying which students are prepared to put themselves through literally anything to get the qualification. Companies like students like that. They will make ideal reverse-centaurs. The description of life as a reverse-centaur even sounds like the experience of a proctored exam:

Like an Amazon delivery driver, who sits in a cabin surrounded by AI cameras, that monitor the driver’s eyes and take points off if the driver looks in a proscribed direction, and monitors the driver’s mouth because singing isn’t allowed on the job, and rats the driver out to the boss if they don’t make quota.

The driver is in that van because the van can’t drive itself and can’t get a parcel from the curb to your porch. The driver is a peripheral for a van, and the van drives the driver, at superhuman speed, demanding superhuman endurance. But the driver is human, so the van doesn’t just use the driver. The van uses the driver up.

Source: Cory Doctorow, Enshittification

And even if you are OK with all of that, these privacy intrusions don’t even work to prevent cheating! The ACCA, the world’s largest accounting professional body, has just announced it is stopping all remote exams after giving up the arms race against the cheats, facilitated in some cases, seemingly, by their Big Four employers lying about what had gone on.

Actuarial exams started in 1850, only 2 years after the Institute of Actuaries was established (Dermot Grenham wrote about them recently here). This pre-dated the establishment of the first examination boards by a few years (1856: the Society of Arts, that is the Society for the encouragement of Arts, Manufactures and Commerce, later the Royal Society of Arts; 1857: the University of Oxford Delegacy of Local Examinations; 1858: the University of Cambridge Local Examinations Syndicate (UCLES)), so keen were actuaries to institute examinations. However it was the massive expansion of the middle classes, as the Industrial Revolution disrupted society in so many ways, that led to the need for a new sorting hat beyond the capacity of the oral examinations that had previously been the norm.

Now people seem to be lining up to drag everyone back into the examination hall. Any suggestion of a retreat from traditional exams is met by howls of outrage from people like Sam Leith at The Spectator about lack of “rigour”. However, in my view, they are wrong.

Yes of course you can isolate students from every intellectual aid they would normally use, as a centaur, to augment their performance, limit the sources they can access, force them to rely on their own memories entirely, and put them under significant time pressure. You will definitely reduce marks by doing that. So that has made it harder and therefore more rigorous and more objective, right?

Well according to the Merriam-Webster dictionary, rigorous is a synonym of rigid, strict or stringent. However, while all these words mean extremely severe or stern, rigorous implies the imposition of hardship and difficulty. So promoting exams above all as an exercise in rigour reveals their true nature as a kind of punishment beating in written form, where the prize for undergoing it is whatever it qualifies you for. Suddenly the sorting hat looks relatively less arbitrary.

The problems of traditional exams are well known, but the most important ones in my view are that they measure a limited range of abilities and therefore are unlikely to show what students can really achieve. Harder does not mean more objective. It is like deciding who can act by throwing students out, one at a time, in front of a baying mob of, let’s say for argument, readers of The Spectator. Sure, some of the students might be able to calm the crowd, some may even be able to redirect their anger towards a different target. But are the people who can play Mark Antony for real necessarily the best all-round actors? And has someone who can only stand frozen on the spot under those circumstances really proved that they could never act well?

It also means that education ends a month or more before the exams, to allow the appropriate cramming, followed by engaging all of the teaching staff in the extended exercise of marking, checking and moderating what has been written in answer to academic questions about what the students can remember unaided about the content of their lecture courses and reading lists with a biro on a pad of paper perched precariously on a tiny wooden table surrounded by hundreds of other similar scribblers, for a set period of time, as minders wander the floors like Victorian factory owners. But what if instead the assessment was part of the teaching process? What if students felt that their assessment had been a meaningful part of their educational experience? What if, instead of arguing the toss over whether they scored 68% or 70% on an assessment, students could see for themselves whether they had demonstrated mastery of their subject?

One model of assessment which is getting a lot of attention at the moment, and one I am a big fan of, having used it at the University of Leicester on some modules, is interactive oral assessment, where students meet with a lecturer or tutor, individually or in a small group, and answer questions about work they have already submitted. It is a highly demanding form of assessment, for both the students and the assessors, but it means the final assessment is done with the student present and, with careful probing from the assessors, who will obviously need to have done a close reading of the project work beforehand, you can be highly confident of the degree to which the student understands the work they have submitted. It also allows the student to submit a piece of work of more complexity and ambition than can be accommodated by a traditional exam. And, set against the marking time of a traditional exam, it needn’t take any more time if the interviews are carried out online: something all the technology we developed through the pandemic allows us to do, without the need for spyware.

There are other models which also assess the technological centaurs we wish our students to become rather than the reverse-centaurs we are currently dooming too many of them to become. It looks like it may be time to start telling students to stop writing and put down their pens on the traditional exam. And perhaps the actuarial profession, which led us into the era of professional written examinations so enthusiastically 175 years ago, might now want to take the lead in navigating our way out of them?

New (left) and old (right) Naiku shrines during the 60th sengu at Ise Jingu, 1973, via Bock 1974

In his excellent new book, Breakneck, Dan Wang tells the story of the high-speed rail links between San Francisco and Los Angeles and between Beijing and Shanghai, both launched in 2008. Both routes would be around 800 miles long when finished. The Beijing-Shanghai line opened in 2011 at a cost of $36 billion. To date, California has built only a small stretch of its line, as yet nowhere near either Los Angeles or San Francisco, and the latest estimate of the completed bill is $128 billion. Wang uses this, amongst other examples, to draw a distinction between the engineering state of China “building big at breakneck speed” and the lawyerly society of the United States “blocking everything it can, good and bad”.

Europe doesn’t get much of a mention, other than to be described as a “mausoleum”, which sounds rather JD Vance, and there is quite a lot about this book that I disagree with strongly, which I will return to. However there is also much to agree with, and never more so than when Wang talks about process knowledge.

Wang tells another story, of Ise Jingu in Japan. Every 20 years exact copies of Naiku, Geku, and 14 other shrines here are built on vacant adjacent sites, after which the old shrines are demolished. Altogether 65 buildings, bridges, fences, and other structures are rebuilt this way. They were first built in 690. In 2033, they will be rebuilt for the 63rd time. The structures are built each time with the original 7th-century techniques, which involve no nails, just dowels and wood joints. Staff have a 200-year tree planting plan to ensure enough cypress trees are planted to make the surrounding forest self-sufficient. The 20-year interval between rebuildings is roughly the length of a generation, the older passing on the techniques to the younger.

This, rather like the oral tradition of folk stories and songs, which were passed on by each generation as contemporary narratives until they were all written down and fixed in time so that they quickly appeared old-fashioned thereafter, is an extreme example of process knowledge. What is being preserved is not the Trigger’s Broom of temples at Ise Jingu, but the practical knowledge of how to rebuild them as they were originally built.

Trigger’s Broom. Source: https://www.youtube.com/watch?v=BUl6PooveJE

Process knowledge is the know-how of your experienced workforce that cannot easily be written down. It develops where such a workforce works closely with researchers and engineers to create feedback loops which can also accelerate innovation. Wang contrasts Shenzhen in China, where such a community exists, with Silicon Valley, where it doesn’t, forcing the United States to have such technological wonders as the iPhone manufactured in China.

What happens when you don’t have process knowledge? Well one example would be our nuclear industry, where lack of experience of pressurised water reactors has slowed down the development of new power stations and required us to rely considerably on French expertise. There are many other technical skill shortages.

China has recognised the supreme importance of process knowledge as compared to the American concern with intellectual property (IP). IP can of course be bought and sold as a commodity and owned as capital, whereas process knowledge tends to rest within a skilled workforce.

This may then be the path to resilience for the skilled workers of the future in the face of the AI-ification of their professions. Companies are being sold AI systems for many things at the moment, some of which will clearly not work: either they make too many errors, or they need so much “human validation” (a lovely phrase used recently by a good friend of mine who is actively involved in integrating AI systems into his manufacturing processes) that they are not deemed practical. For early career workers entering these fields, the demonstration of appropriate process knowledge, or the ability to develop it very quickly, may be the key to surviving the AI roller coaster they face over the next few years. Actionable skills and knowledge which allow them to manage such systems rather than being managed by them. To be a centaur rather than a reverse-centaur.

Not only will such skills make you less likely to lose your job to an AI system, they will also increase your value on the employment market: the harder these skills and knowledge are to acquire, the more valuable they are likely to be. But whereas in the past, in a more static market, merely passing your exams and learning coding might have been enough for an actuarial student, for instance, the dynamic situation which sees everything that can be written down disappearing into prompts in some AI system will leave such roles unprotected.

Instead it will be the knowledge about how people are likely to respond to what you say in a meeting or write in an email or report, and the skill to strategise around those things, knowing what to do when the rules run out, when situations are genuinely novel, ie putting yourself in someone else’s shoes and being prepared to make judgements. It will be the knowledge about what matters in a body of data, putting the pieces together in meaningful ways, and the skills to make that obvious to your audience. It will be the knowledge about what makes everyone in your team tick and the skills to use that knowledge to motivate them to do their best work. It will ultimately be about maintaining independent thought: the knowledge of why you are where you are and the skill to recognise what you can do for the people around you.

These have not always been seen as entry level skills and knowledge for graduates, but they are increasingly going to need to be, as the requirement grows to plug you in further up an organisation, if at all, as that organisation pursues its diamond strategy or something similar. And alongside all this you will need a continuing programme of professional self-development on steroids: to fully understand the systems you are working with as quickly as possible and then understand them all over again when they get updated, demanding evidence and transparency and maintaining appropriate uncertainty when certainty would be more comfortable for the people around you, so that you can manage these systems into the areas where they can actually add value and out of the areas where they can cause devastation. It will be more challenging than transmitting the knowledge to build a temple out of hay and wood 20 years into the future, and it will be continuous. Think of it as the Trigger’s Broom Process of Career Management if you like.

These will be essential roles for our economic future: to save these organisations from both themselves and their very expensive systems. It will be both enthralling and rewarding for those up to the challenge.

The warehouse at the end of Raiders of the Lost Ark

In the year I was born, Malvina Reynolds recorded a song called Little Boxes; she was a year younger then than I am now. If you haven’t heard it before, you can listen to it here. You might want to listen to it while you read the rest of this.

I remember the first time I felt panic during the pandemic. It was a couple of months in, and we had been working very hard: putting our teaching processes online, consulting widely about appropriate remote assessments and getting agreement from the Institute and Faculty of Actuaries (IFoA) for our suggested approach at Leicester, checking in with our students, some of whom had become very isolated as a result of lockdowns, and a million other things. I was just sitting at my kitchen table and suddenly I felt tears welling up and I was unable to speak without my voice breaking down. It happened at intervals after that, usually during a quiet moment when I, consciously or unconsciously, had a moment to reflect on the enormity of what was going on. I could never point to anything specific that triggered it, but I do know that it has been a permanent change in me, and that my emotions have been very much closer to the surface ever since. I felt something similar again this morning.

What is going on? Well I haven’t been able to answer that satisfactorily until now, but recently I read an article by David Runciman in the LRB from nine years ago, when Donald Trump got elected POTUS the first time. I am not sure that everything in the article has withstood the test of time, but in it Runciman makes the case for Trump being the result of the people wanting “Trump to shake up a system that they also expected to shield them from the recklessness of a man like Trump”. And this part looks prophetic:

[Trump is]…the bluntest of instruments, indiscriminately shaking the foundations with nothing to offer by way of support. Under these conditions, the likeliest response is for the grown-ups in the room to hunker down, waiting for the storm to pass. While they do, politics atrophies and necessary change is put off by the overriding imperative of avoiding systemic collapse. The understandable desire to keep the tanks off the streets and the cashpoints open gets in the way of tackling the long-term threats we face. Fake disruption followed by institutional paralysis, and all the while the real dangers continue to mount. Ultimately, that is how democracy ends.

And it suddenly hit me that this was something I had indeed taken for granted my whole life until the pandemic came along. The only thing that had ever looked like toppling society itself was the prospect of a nuclear war. Otherwise it seemed that our political system was hard to change and impossible to kill.

And then the pandemic came along and we saw government, national and local, digging mass graves and then filling them in again, and setting aside vast arenas for people to die in before quietly closing them again. Rationing of food and other essentials was left to the supermarkets to administer, as were the massive snaking socially-distanced queues around their car parks. Seemingly arbitrary sets of rules suddenly started appearing at intervals about how and when we were allowed to leave the house and what we were allowed to do when out, and also how many people we could have in our houses and where they were allowed to come from. Most businesses were shut and their employees put on the government’s payroll. We learned which of us were key workers and spent a lot of time worrying about how we could protect the NHS, which we clapped for every Thursday. It was hard to maintain the illusion that society still provided solid ground under our feet, particularly if we didn’t have jobs which could be moved online. Whoever you were, you had to look down at some point, and I think now that I was having my Wile E. Coyote moment.

The trouble is, once you have looked down, it is hard to put that back in a box. At least I thought so, although there seems to have been a lot of putting things in boxes going on over the last few years. The UK Covid-19 Inquiry has made itself available online via a YouTube channel, but you might have thought that a Today at the Inquiry slot on terrestrial TV would have been more appropriate, rather than coverage only when famous people are attending. What we do know is that Patrick Vallance, Chief Scientific Advisor throughout the pandemic, has said that another pandemic is “absolutely inevitable” and that “we are not ready yet” for such an eventuality. Instead we have been busily shutting that particular box.

The biggest box of course is climate change. We have created a really big box for that called the IPCC. As the climate conferences migrate to ever more unapologetic petro-states, protestors are criminalised and imprisoned and emissions continue to rise, the box for this is doing a lot of work.

And then there are all the NHS boxes. As Roy Lilley notes:

If inquiries worked, we’d have the safest healthcare system in the world. Instead, we have a system addicted to investigating itself and forgetting the answers.

But perhaps the days of the box are numbered. The box Keir Starmer constructed to contain the anger about grooming gangs, which the previous seven-year-long box had been unable to completely envelop, also now appears to be on the edge of collapse. And the Prime Minister himself was the one expressing outrage when a perfectly normal British box, versions of which had been giving authority to policing decisions since at least the Local Government (Review of Decisions) Act 2015 (although the original push to develop such systems stemmed from the Hillsborough and Heysel disasters in 1989 and 1985 respectively), suddenly didn’t make the decision he was obviously expecting. That box now appears to be heading for recycling if Reform UK come to power, which is, of course, rather difficult to do in Birmingham at the moment.

But what is the alternative to the boxes? At the moment it does not look like it involves confronting our problems any more directly. As Runciman reflected on the second Trump inauguration:

Poor Obama had to sit there on Monday and witness the mistaking of absolutism for principle and spectacle for politics. I don’t think Trump mistakes them – he doesn’t care enough to mind what passes for what. But the people in the audience who got up and applauded throughout his speech – as Biden and Harris and the Clintons and the Bushes remained glumly in their seats – have mistaken them. They think they will reap the rewards of what follows. But they will also pay the price.

David Allen Green’s recent post on BlueSky appears to summarise our position relative to that of the United States very well:

To Generation Z: a message of support from a Boomer

So you’ve worked your way through school and now university, developing the skills you were told would always be in high demand, credentialising yourself as a protection against the vagaries of the global economy. You may have serious doubts about ever being able to afford a house of your own, particularly if your area of work is very concentrated in London…

…and you resent the additional tax that your generation pays to support higher education:

Source: https://taxpolicy.org.uk/2023/09/24/70percent/

But you still had belief in being able to operate successfully within the graduate market.

A rational, functional graduate job market should be assessing your skills and competencies against the desired attributes of those currently performing the role and making selections accordingly. That is a system both the companies and graduates can plan for.

It is very different from a Rush. The first phenomenon known as a Rush was the Californian Gold Rush of 1848-55. However the capitalist phenomenon of transforming an area to facilitate intensive production probably dates from sugar production in Madeira in the 15th century. There have been many since, all neatly described by this Punch cartoon from 1849:

A Rush is a big deal. The Californian Gold Rush resulted in the creation of California, now the 5th largest economy in the world. But when it comes to employment, a Rush is not like an orderly jobs market. As Carlo Iacono describes, in an excellent article on the characteristics of the current AI Rush:

The railway mania of the 1840s bankrupted thousands of investors and destroyed hundreds of companies. It also left Britain with a national rail network that powered a century of industrial dominance. The fibre-optic boom of the late 1990s wiped out about $5 trillion in market value across the broader dot-com crash. It also wired the world for the internet age.

A Rush is a difficult and unpredictable place to build a career, with as much riding on dumb luck as on any personal characteristics you might have. There is very little you can count on in a Rush. This one is even less predictable because, as Carlo also points out:

When the railway bubble burst in the 1840s, the steel tracks remained. When the fibre-optic bubble burst in 2001, the “dark fibre” buried in the ground was still there, ready to carry traffic for decades. These crashes were painful, but they left behind durable infrastructure that society could repurpose.

Whereas the 40–60% of US real GDP growth in the first half of 2025 explained by investment in AI infrastructure isn’t like that:

The core assets are GPUs with short economic half-lives: in practice, they’re depreciated over ~3–5 years, and architectures are turning over faster (Hopper to Blackwell in roughly two years). Data centres filled with current-generation chips aren’t valuable, salvageable infrastructure when the bubble bursts. They’re warehouses full of rapidly depreciating silicon.

So today’s graduates are certainly going to need resilience, but that’s just what their future employers are requiring of them. They also need to build their own support structures which are going to see them through the massive disruption which is coming whether or not the enormous bet on AI is successful. The battle to be centaurs, rather than reverse-centaurs, as I set out in my last post (or as Carlo Iacono describes beautifully in his discussion of the legacy of the Luddites here), requires these alliances. It requires you to stop thinking of yourselves as being in competition with each other and to start thinking of yourselves as being in competition for resources with my generation.

I remember when I first realised my generation (late Boomer, just before Generation X) was now making the weather. I had just sat the 304 Pensions and Other Benefits actuarial exam in London (now SP4) – unsuccessfully as it turned out – and nipped into a matinee of Sam Mendes’ American Beauty and watched the plastic bag scene. I was 37 at the time.

My feeling is that, despite its increasingly strident efforts to hang on, our generation is now deservedly losing power, and that making reverse-centaurs of your generation is a last-ditch attempt to remain in control. It is like the scene in another movie, Triangle of Sadness, where the elite are swept onto a desert island and expect the servant, who is the only one with survival skills in such an environment, to carry on being their servant.

Don’t fall for it. My advice to young professionals is pretty much the same as it was to actuarial students last year on the launch of chartered actuary status:

If you are planning to join a profession to make a positive difference in the world, and that is in my view the best reason to do so, then you are going to have to shake a few things up along the way.

Perhaps there is a type of business you think the world is crying out for but it doesn’t know it yet because it doesn’t exist. Start one.

Perhaps there is an obvious skill set to run alongside your professional one which most of your fellow professionals haven’t realised would turbo-charge the effectiveness of both. Acquire it.

Perhaps your company has a client whose shoes no one has taken the time to step into, to communicate in a way they will properly understand and value. Be that person.

Or perhaps there are existing businesses who are struggling to manage their way in changing markets and need someone who can make sense of the data which is telling them this. Be that person.

All while remaining grounded in whichever community you have chosen for yourself. Be the member of your organisation or community who makes it better by being there.

None of these are reverse-centaur positions. Don’t settle for anything less. This is your time.

In 2017, I was rather excitedly reporting on ideas which were new to me at the time regarding how technology, or, as Richard and Daniel Susskind referred to it in The Future of the Professions, “increasingly capable machines”, was going to affect professional work. I concluded that piece as follows:

The actuarial profession and the higher education sector therefore need each other. We need to develop actuaries of the future coming into your firms to have:

  • great team working skills
  • highly developed presentation skills, both in writing and in speech
  • strong IT skills
  • clarity about why they are there and the desire to use their skills to solve problems

All within a system which is possible to regulate in a meaningful way. Developing such people for the actuarial profession will need to be a priority in the next few years.

While all of those things are still needed, it is becoming increasingly clear to me that they will not be enough to secure a job as industry leaders double down.

Source: https://www.ft.com/content/99b6acb7-a079-4f57-a7bd-8317c1fbb728

And perhaps even worse than the threat of not getting a job immediately following graduation is the threat of becoming a reverse-centaur. As Cory Doctorow explains the term:

A centaur is a human being who is assisted by a machine that does some onerous task (like transcribing 40 hours of podcasts). A reverse-centaur is a machine that is assisted by a human being, who is expected to work at the machine’s pace.

We have known about reverse-centaurs since at least Charlie Chaplin’s Modern Times in 1936.

By Charlie Chaplin – YouTube, Public Domain, https://commons.wikimedia.org/w/index.php?curid=68516472

Think Amazon driver or worker in a fulfilment centre, sure, but now also think of highly competitive and well-paid but still ultimately human-in-the-loop kinds of roles responsible for AI systems designed to produce output where errors are hard to spot and therefore hard to stop. In the latter role you are the human scapegoat: in the phrasing of Dan Davies an “accountability sink”, or in that of Madeleine Clare Elish a “moral crumple zone”, all rolled into one. This is not where you want to be as an early career professional.

So how to avoid this outcome? Well, obviously, if you have options other than roles where a reverse-centaur situation is unavoidable, you should take them. Questions to ask at interview to identify whether the role is irretrievably reverse-centauresque would be of the following sort:

  1. How big a team would I be working in? (This might not identify a reverse-centaur role on its own: you might be one of a bank of reverse-centaurs all working in parallel and identified “as a team” while in reality having little interaction with each other).
  2. What would a typical day be in the role? This should smoke it out unless the smokescreen they put up obscures it. If you don’t understand the first answer, follow up to get specifics.
  3. Who would I report to? Get to meet them if possible. Establish whether they are a technical expert in the field you will be working in. If they aren’t, that means you are!
  4. Speak to someone who has previously held the role if possible. Although bear in mind that, if it is a true reverse-centaur role and their progress to an actual centaur role is contingent on you taking this one, they may not be completely forthcoming about all of the details.

If you have been successful in a highly competitive recruitment process, you may have a little bit of leverage before you sign the contract, so if there are aspects which you think still need clarifying, then that is the time to do so. If you recognise some reverse-centauresque elements from your questioning above, but you think the company may be amenable, then negotiate. Once you are in, you will understand a lot more about the nature of the role of course, but without threatening to leave (which is as damaging to you as an early career professional as it is to them) you may have limited negotiation options at that stage.

In order to do this successfully, self-knowledge will be key. It is that point from 2017:

  • clarity about why they are there and the desire to use their skills to solve problems

To that word skills I would now add “capabilities” in the sense used in a wonderful essay on this subject by Carlo Iacono called Teach Judgement, Not Prompts.

You still need the skills. So, for example, if you are going into roles where AI systems are producing code, you need to have sufficiently good coding skills yourself to create a programme to check code written by the AI system. If the AI system is producing communications, your own communication skills need to go beyond producing work that communicates to an audience effectively to the next level where you understand what it is about your own communication that achieves that, what is necessary, what is unnecessary, what gets in the way of effective communication, ie all of the things that the AI system is likely to get wrong. Then you have a template against which to assess the output from an AI system, and for designing better prompts.
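To make that first example concrete, here is a minimal sketch, in Python, of what “a programme to check code written by the AI system” might look like. The function names and the annuity example are hypothetical, chosen simply because they are familiar actuarial territory: the idea is that the check is an independent, first-principles calculation which you understand well enough to write yourself, and which the AI-generated code has to agree with.

```python
# A sketch of independently checking AI-generated code, using assumed names and a toy example.

def ai_generated_annuity_factor(rate: float, years: int) -> float:
    """Stand-in for code produced by an AI assistant: present value of 1 p.a. paid annually in arrears."""
    return (1 - (1 + rate) ** -years) / rate

def check_annuity_factor(fn, rate: float, years: int, tol: float = 1e-9) -> bool:
    """Independent first-principles check: sum the discounted payments directly."""
    expected = sum((1 + rate) ** -t for t in range(1, years + 1))
    return abs(fn(rate, years) - expected) < tol

# A handful of cases you understand well enough to predict the behaviour of
for rate, years in [(0.05, 10), (0.03, 25), (0.10, 1)]:
    assert check_annuity_factor(ai_generated_annuity_factor, rate, years), (rate, years)

print("AI-generated function agrees with the first-principles calculation on all test cases")
```

The particular test matters less than the principle: you can only write the check if you could have done the calculation yourself, which is exactly the skill the paragraph above argues you still need.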

However specific skills and tools come and go, so you need to develop something more durable alongside them. Carlo has set out four “capabilities” as follows:

  1. Epistemic rigour, which is being very disciplined about challenging what we actually know in any given situation. You need to be able to spot when AI output is over-confident given the evidence, or when a correlation is presented as causation. What my tutors used to refer to as “hand waving”.
  2. Synthesis is about integrating different perspectives into an overall understanding. Making connections between seemingly unrelated areas is something AI systems are generally less good at than analysis.
  3. Judgement is knowing what to do in a new situation, beyond obvious precedent. You get to develop judgement by making decisions under uncertainty, receiving feedback, and refining your internal models.
  4. Cognitive sovereignty is all about maintaining your independence of thought when considering AI-generated content. Knowing when to accept AI outputs and when not to.

All of these capabilities can be developed with reflective practice, getting feedback and refining your approach. As Carlo says:

These capabilities don’t just help someone work with AI. They make someone worth augmenting in the first place.

In other words, if you can demonstrate these capabilities, companies which are themselves dealing with huge uncertainty about how much value they are getting from their AI systems and what they can safely be used for will find you an attractive and reassuring hire. Then you will be the centaur, using the increasingly capable systems to improve your own and their productivity while remaining in overall control of the process, rather than a reverse-centaur, for whom none of that is true.

One sure sign that you are straying into reverse-centaur territory is when a disproportionate amount of your time is spent on pattern recognition (eg basing an email/piece of coding/valuation report on an earlier email/piece of coding/valuation report dealing with a similar problem). That approach was always predicated on being able to interact with a more experienced human who understood what was involved in the task at some peer review stage. But it falls apart when there is no human to discuss the earlier piece of work with, because the human no longer works there, or a human didn’t produce the earlier piece of work. The fake it until you make it approach is not going to work in environments like these where you are more likely to fake it until you break it. And pattern recognition is something an AI system will always be able to do much better and faster than you.

Instead, question everything using the capabilities you have developed. If you are going to be put into potentially compromising situations in terms of the responsibilities you are implicitly taking on, the decisions needing to be made and the limitations of the available knowledge and assumptions on which those decisions will need to be based, then this needs to be made explicit, to yourself and the people you are working with. Clarity will help the company which is trying to use these new tools in a responsible way as much as it helps you. Learning is going to be happening for them as much as it is for you here in this new landscape.

And if the company doesn’t want to have these discussions or allow you to hamper the “efficiency” of their processes by trying to regulate them effectively? Then you should leave as soon as you possibly can professionally and certainly before you become their moral crumple zone. No job is worth the loss of your professional reputation at the start of your career – these are the risks companies used to protect their senior people of the future from, and companies that are not doing this are clearly not thinking about the future at all. Which is likely to mean that they won’t have one.

To return to Cory Doctorow:

Science fiction’s superpower isn’t thinking up new technologies – it’s thinking up new social arrangements for technology. What the gadget does is nowhere near as important as who the gadget does it for and who it does it to.

You are going to have to be the generation who works these things out first for these new AI tools. And you will be reshaping the industrial landscape for future generations by doing so.

And the job of the university and further education sectors will increasingly be to equip you with both the skills and the capabilities to manage this process, whatever your course title.

In 2017 I posted an article about how the future for actuaries was starting to look, with particular reference to a Society of Actuaries paper by Dodzi Attimu and Bryon Robidoux, which has since been moved here.

I summarised their paper as follows at the time:

Focusing on…a paper produced by Dodzi Attimu and Bryon Robidoux for the Society of Actuaries in July 2016 explored the theme of robo actuaries, by which they meant software that can perform the role of an actuary. They went on to elaborate as follows:

Though many actuaries would agree certain tasks can and should be automated, we are talking about more than that here. We mean a software system that can more or less autonomously perform the following activities: develop products, set assumptions, build models based on product and general risk specifications, develop and recommend investment and hedging strategies, generate memos to senior management, etc.

They then went on to define a robo actuarial analyst as:

A system that has limited cognitive abilities but can undertake specialized activities, e.g. perform the heavy lifting in model building (once the specification/configuration is created), perform portfolio optimization, generate reports including narratives (e.g. memos) based on data analysis, etc. When it comes to introducing AI to the actuarial profession, we believe the robo actuarial analyst would constitute the first wave and the robo actuary the second wave.

They estimate that the first wave is 5 to 10 years away and the second 15 to 20 years away. We have been warned.

So 9 years on from their paper, how are things looking? Well the robo actuarial analyst wave certainly seems to be pretty much here, particularly now that large language models like ChatGPT are being increasingly used to generate reports. It suddenly looks a lot less fanciful to assume that the full robo actuary is less than 11 years away.

But now the debate on AI appears to be shifting to an argument over whether we are heading for Vernor Vinge’s “Singularity”, where the increasingly capable systems

would not be humankind’s “tool” — any more than humans are the tools of rabbits or robins or chimpanzees

on the one hand, and, on the other, the idea that “it is going to take a long time for us to really use AI properly…, because of how hard it is to regear processes and organizations around new tech”.

In his article on Understanding AI as a social technology, Henry Farrell suggests that neither of these positions allows a proper understanding of the impact AI is likely to have, instead proposing the really interesting idea that we are already part way through a “slow singularity”, which began with the industrial revolution. As he puts it:

Under this understanding, great technological changes and great social changes are inseparable from each other. The reason why implementing normal technology is so slow is that it requires sometimes profound social and economic transformations, and involves enormous political struggle over which kinds of transformation ought happen, which ought not, and to whose benefit.

This chimes with what I was saying recently about AI possibly not being the best place to look for the next industrial revolution. Farrell plausibly describes the current period using the words of Herbert Simon. As Farrell says: “Human beings have quite limited internal ability to process information, and confront an unpredictable and complex world. Hence, they rely on a variety of external arrangements that do much of their information processing for them.” So Simon says of markets, for instance, that they:

appear to conserve information and calculation by assigning decisions to actors who can make them on the basis of information that is available to them locally – that is, without knowing much about the rest of the economy apart from the prices and properties of the goods they are purchasing and the costs of the goods they are producing.

And bureaucracies and business organisations, similarly:

like markets, are vast distributed computers whose decision processes are substantially decentralized. … [although none] of the theories of optimality in resource allocation that are provable for ideal competitive markets can be proved for hierarchy, … this does not mean that real organizations operate inefficiently as compared to real markets. … Uncertainty often persuades social systems to use hierarchy rather than markets in making decisions.

Large language models by this analysis are then just another form of complex information processing, “likely to reshape the ways in which human beings construct shared knowledge and act upon it, with their own particular advantages and disadvantages. However, they act on different kinds of knowledge than markets and hierarchies”. As an Economist article Farrell co-wrote with Cosma Shalizi says:

We now have a technology that does for written and pictured culture what large-scale markets do for the economy, what large-scale bureaucracy does for society, and perhaps even comparable with what print once did for language. What happens next?

Some suggestions follow and I strongly recommend you read the whole thing. However, if we return to what I and others were saying in 2016 and 2017, it may be that we were asking the wrong question. Perhaps the big changes of behaviour required of us to operate as economic beings have already happened (the start of the “slow singularity” of the industrial revolution), and the removal of alternatives, which required us to spend increasing proportions of our time within and interacting with bureaucracies and other large organisations, was the logical appendage to that process. These processes are merely becoming more advanced rather than changing fundamentally in form.

And the third part, ie language? What started with the emergence of Late Modern English in the 1800s looks like it is now being accelerated via a new form of complex information processing applied to written, pictured (and I would say also heard) culture.

So the future then becomes something not driven by technology, but by our decisions about which processes we want to allow or even encourage and which we don’t, whether those are market processes, organisational processes or large language processes. We don’t have to have robo actuaries or even robo actuarial analysts, but we do have to make some decisions.

And students entering this arena need to prepare themselves to be participants in those decisions rather than just victims of them. A subject I will be returning to.

Title page vignette of Hard Times by Charles Dickens. Thomas Gradgrind Apprehends His Children Louisa and Tom at the Circus, 1870

It was Fredric Jameson (according to Owen Hatherley in the New Statesman) who first said:

It seems to be easier for us today to imagine the thoroughgoing deterioration of the earth and of nature than the breakdown of late capitalism.

I was reminded of this by my reading this week.

It all started when I began watching Shifty, Adam Curtis’ latest set of films on iPlayer aiming to convey a sense of shifting power structures and where they might lead. Alongside the startling revelation that The Land of Make Believe by Bucks Fizz was written as an anti-Thatcher protest song, there was a short clip of Eric Hobsbawm talking about all of the words which needed to be invented in the late 18th century and early 19th to allow people to discuss the rise of capitalism and its implications. So I picked up a copy of his The Age of Revolution 1789-1848 to look into this a little further.

The introduction to Hobsbawm’s book, written in 1962, the year of my birth, expanded on the list:

Words are witnesses which often speak louder than documents. Let us consider a few English words which were invented, or gained their modern meanings, substantially in the period of sixty years with which this volume deals. They are such words as ‘industry’, ‘industrialist’, ‘factory’, ‘middle class’, ‘working class’, ‘capitalism’ and ‘socialism’. They include ‘aristocracy’ as well as ‘railway’, ‘liberal’ and ‘conservative’ as political terms, ‘nationality’, ‘scientist’ and ‘engineer’, ‘proletariat’ and (economic) ‘crisis’. ‘Utilitarian’ and ‘statistics’, ‘sociology’ and several other names of modern sciences, ‘journalism’ and ‘ideology’, are all coinages or adaptations of this period. So is ‘strike’ and ‘pauperism’.

What is striking about these words is how they still frame most of our economic and political discussions. The term “middle class” originated in 1812. No one referred to an “industrial revolution” until English and French socialists did in the 1820s, despite what it described having been in progress since at least the 1780s.

Today the founder of the World Economic Forum has coined the phrase “Fourth Industrial Revolution”, or 4IR, or Industry 4.0 for those who prefer something snappier. Its blurb is positively messianic:

The Fourth Industrial Revolution represents a fundamental change in the way we live, work and relate to one another. It is a new chapter in human development, enabled by extraordinary technology advances commensurate with those of the first, second and third industrial revolutions. These advances are merging the physical, digital and biological worlds in ways that create both huge promise and potential peril. The speed, breadth and depth of this revolution is forcing us to rethink how countries develop, how organisations create value and even what it means to be human. The Fourth Industrial Revolution is about more than just technology-driven change; it is an opportunity to help everyone, including leaders, policy-makers and people from all income groups and nations, to harness converging technologies in order to create an inclusive, human-centred future. The real opportunity is to look beyond technology, and find ways to give the greatest number of people the ability to positively impact their families, organisations and communities.

Note that, despite the slight concession in the last couple of sentences that an industrial revolution is about more than technology-driven change, they are clear that the technology is the main thing. It is also confused: is the future they see one in which “technology advances merge the physical, digital and biological worlds” to such an extent that we have “to rethink” what it “means to be human”? Or are we creating an “inclusive, human-centred future”?

Hobsbawm describes why utilitarianism (“the greatest happiness of the greatest number”) never really took off amongst the newly created middle class, who rejected Hobbes in favour of Locke because “he at least put private property beyond the range of interference and attack as the most basic of ‘natural rights'”, whereas Hobbes would have seen it as just another form of utility. This then led to the natural order of property ownership being woven into the reassuring (for property owners) political economy of Adam Smith and the natural social order arising from “sovereign individuals of a certain psychological constitution pursuing their self-interest in competition with one another”. This was of course the underpinning theory of capitalism.

Hobsbawm then describes the society of Britain in the 1840s in the following terms:

A pietistic protestantism, rigid, self-righteous, unintellectual, obsessed with puritan morality to the point where hypocrisy was its automatic companion, dominated this desolate epoch.

In 1851 access to the professions in Britain was extremely limited, requiring long years of education, the means to support oneself through them, and opportunities to do so which were rare. There were 16,000 lawyers (not counting judges) but only 1,700 law students. There were 17,000 physicians and surgeons and 3,500 medical students and assistants. The UK population in 1851 was around 27 million. Compare these numbers with the relatively tiny actuarial profession today, which has around 19,000 members in the UK.

The only real opening to the professions for many was therefore teaching. In Britain “76,000 men and women in 1851 described themselves as schoolmasters/mistresses or general teachers, not to mention the 20,000 or so governesses, the well-known last resource of penniless educated girls unable or unwilling to earn their living in less respectable ways”.

Admittedly most professions were only just establishing themselves in the 1840s. My own, despite actuarial activity getting off the ground in earnest with Edmund Halley’s demonstration of how the terms of the English Government’s life annuities issue of 1692 were more generous than it realised, did not form the Institute of Actuaries (now part of the Institute and Faculty of Actuaries) until 1848. The Pharmaceutical Society of Great Britain (now the Royal Pharmaceutical Society) was formed in 1841. The Royal College of Veterinary Surgeons was established by royal charter in 1844. The Royal Institute of British Architects (RIBA) was founded in 1834. The Society of Telegraph Engineers, later the Institution of Electrical Engineers (now part of the Institution of Engineering and Technology), was formed in 1871. The Edinburgh Society of Accountants and the Glasgow Institute of Accountants and Actuaries were granted royal charters in the mid-1850s, before England’s various accounting institutes merged into the Institute of Chartered Accountants in England and Wales in 1880.

However “for every man who moved up into the business classes, a greater number necessarily moved down. In the second place economic independence required technical qualifications, attitudes of mind, or financial resources (however modest) which were simply not in the possession of most men and women.” As Hobsbawm goes on to say, it was a system which:

…trod the unvirtuous, the weak, the sinful (i.e. those who neither made money nor controlled their emotional or financial expenditures) into the mud where they so plainly belonged, deserving at best only of their betters’ charity. There was some capitalist economic sense in this. Small entrepreneurs had to plough back much of their profits into the business if they were to become big entrepreneurs. The masses of new proletarians had to be broken into the industrial rhythm of labour by the most draconic labour discipline, or left to rot if they would not accept it. And yet even today the heart contracts at the sight of the landscape constructed by that generation.

This was the landscape upon which the professions alongside much else of our modern world were constructed. The industrial revolution is often presented in a way that suggests that technical innovations were its main driver, but Hobsbawm shows us that this was not so. As he says:

Fortunately few intellectual refinements were necessary to make the Industrial Revolution. Its technical inventions were exceedingly modest, and in no way beyond the scope of intelligent artisans experimenting in their workshops, or of the constructive capacities of carpenters, millwrights and locksmiths: the flying shuttle, the spinning jenny, the mule. Even its scientifically most sophisticated machine, James Watt’s rotary steam-engine (1784), required no more physics than had been available for the best part of a century—the proper theory of steam engines was only developed ex post facto by the Frenchman Carnot in the 1820s—and could build on several generations of practical employment for steam engines, mostly in mines.

What it did require though was the obliteration of alternatives for the vast majority of people to “the industrial rhythm of labour” and a radical reinvention of the language.

These are not easy things to accomplish, which is why we cannot easily imagine the breakdown of late capitalism. However, if we focus on AI etc. as the drivers of the next industrial revolution, we will probably be missing where the action really is.

In a previous post, I mentioned the “diamond model” that accountancy firms are reportedly starting to talk about. The impact so far looks pretty devastating for graduates seeking work:

And then by industry:

Meanwhile, Microsoft have recently produced a report into the occupational implications of generative AI, and their list of the top 40 vulnerable roles looks like this (look at where data scientist, mathematician and management analyst sit – all noticeably more replaceable by AI than “model”, the role which caused all the headlines when Vogue did it last week):

So this looks like a process well underway rather than a theoretical one for the future. But I want to imagine a few years ahead. Imagine that this process has continued to gut what we now regard as entry level jobs and that the warning of Dario Amodei, CEO of AI company Anthropic, that half of “administrative, managerial and tech jobs for people under 30” could be gone in 5 years, has come to pass. What then?

Well, this is where it gets interesting (for some excellent speculative fiction about this, the short story Human Resources and the novel Service Model by Adrian Tchaikovsky will certainly give you something to think about), because there will still be a much smaller number of jobs in these roles. They will be very competitive. Perhaps we will see FBI-style recruitment processes becoming more common for the rarefied few, probably administered by the increasingly capable systems I discuss below. They will be paid a lot more. However, as Cory Doctorow describes here, the misery of being the human in the loop for an AI system designed to produce output where errors are hard to spot, and therefore to stop (Doctorow calls these humans “reverse centaurs”, ie the human has become the horse part), includes being the ready-made scapegoat (or “moral crumple zone” or “accountability sink”) for when the systems are inevitably used to overreach what they are programmed for and produce something terrible. The AI system is no longer working for you as some “second brain”. You are working for it, but no company is going to blame the very expensive AI system it has invested in when there is a convenient and easily replaceable (remember how hard these jobs will be to get) human candidate to take the fall. And it will be assumed that people will still do these jobs, reasoning that they are the only route to highly paid and more secure jobs later, or that they will be able to retire at 40, as the aspiring Masters of the Universe (the phrase coined by Tom Wolfe in The Bonfire of the Vanities) in the City of London have been telling themselves since the 1980s, only this time surrounded by robot valets no doubt.

But a model where all the gains go to people from one, older, generation at the expense of another, younger, generation depends on there being reasonable future prospects for that younger generation or some other means of coercing them.

In their book, The Future of the Professions, Daniel and Richard Susskind talk about the grand bargain. It is a form of contract, but, as they admit:

The grand bargain has never formally been reduced to writing and signed, its terms have never been unambiguously and exhaustively articulated, and no one has actually consented expressly to the full set of rights and obligations that it seems to lay down.

Atul Gawande memorably expressed the grand bargain for the medical profession (in Better) as follows:

The public has granted us extraordinary and exclusive dispensation to administer drugs to people, even to the point of unconsciousness, to cut them open, to do what would otherwise be considered assault, because we do so on their behalf – to save their lives and provide them comfort.

The Susskinds questioned (in 2015) whether this grand bargain could survive a future of “increasingly capable systems” and suggested a future in which all seven of the following models were in use:

  1. The traditional model, ie the grand bargain as it works now. Human professionals providing their services face-to-face on a time-cost basis.
  2. The networked experts model. Specialists work together via online networks. BetterDoctor would be an example of this.
  3. The para-professional model. The para-professional has had less training than the traditional professional but is equipped by their training and support systems to deliver work independently within agreed limits. The medical profession’s battle with this model has recently given rise to the Leng Review.
  4. The knowledge engineering model. A system is made available to users, including a database of specialist knowledge and the modelling of specialist expertise based on experience in a form that makes it accessible to users. Think tax return preparation software or medical self-diagnosis online tools.
  5. The communities of experience model, eg Wikipedia.
  6. The embedded knowledge model. Practical expertise built into systems or physical objects, eg intelligent buildings which have sensors and systems that test and regulate the internal environment of a building.
  7. The machine-generated model. Here practical expertise is originated by machines rather than by people. The book was written in 2015, so the authors did not know about large language models then, but these would now be an obvious example.

What all of these alternative models had in common, of course, was the potential to remove the need for the traditional professional in the future.

There is another contract which has never been written down: that between the young and the old in society. Companies are jumping the gun on how the grand bargain is likely to be re-framed and adopting systems before all of the evidence is in. As Doctorow said in March (ostensibly about Musk’s DOGE when it was in full firing mode):

AI can’t do your job, but an AI salesman (Elon Musk) can convince your boss (the USA) to fire you and replace you (a federal worker) with a chatbot that can’t do your job

What strikes me is that the boss in question is generally at least 55. As one consultancy has noted:

Notably, the youngest Baby Boomers turned 60 in 2024—the average age of senior leadership in the UK, particularly for non-executive directors. Executive board directors tend to be slightly younger, averaging around 55.

Assume there was some kind of written contract between young and old that gave the older generation the responsibility to act as custodians of all of the benefits of living in a civilised society while they were in positions of power, so that life would be at least as good for the younger generation when they succeeded them.

Every time a Baby Boomer argues that the state pension age should rise because “we” cannot afford it, he or she is arguing both that the worker who will then be paying for his or her pension should continue to do so and that that worker should accept a delay in receiving their quid pro quo, with no risk that the changes will be applied to the Boomer, as all changes are flagged many years in advance. That would clearly breach such a contract. Every Boomer graduate from more than 35 years ago who argues for the cost of student loans to increase, when they never paid for theirs, would break such a contract. Every Boomer homeowner who argues against any measure which might moderate the house price inflation from which they benefit in increased equity would break such a contract. And of course any such contract worth its name would require strenuous efforts to limit climate change.

And when a Boomer removes a graduate job to temporarily support their share price (so-called rightsizing) in favour of a system which is necessarily not yet fully tested (by which I mean more than testing the software, but also all of the complicated network of relationships required to make any business operate successfully), the impact of that temporary inflation of the share price on executive bonuses is being valued much more highly than both the future of the business and that of the generation that will be needed to run it.

This is not embracing the future so much as selling a futures contract before setting fire to the actual future. And that is not a contract so much as an abusive relationship between the generations.


Milan Kundera wrote The Book of Laughter and Forgetting in 1979, a few years after moving to France and the same year he had his Czech citizenship revoked. His books had all been banned in Czechoslovakia in 1968, as most of them poked fun at the regime in one way or another. The Book of Laughter and Forgetting was no exception, focusing, via seven stories, on what we choose to forget in history, politics and our own lives. One of the themes is a word which is difficult to translate into English: litost.

Litost seems to mean the emotional state of suddenly being brought face to face, alone, with how obvious your own hopelessness is. Or something to that effect. Kundera explored several aspects of litost at length in the novel. However, for all the difficulties of describing it exactly, litost feels like a useful word for our times, our politics and our economics.

I want to focus on two specific examples of forgetting and the sudden incidents of litost which have brought them back into focus.

The first, although not chronologically, would be the pandemic. Several articles have suddenly appeared, marking the fifth anniversary of the first lockdown, about the lessons we have not learnt from the pandemic. Christina Pagel, backed up by module 1 of the Covid-19 Inquiry, reckons:

Preventing future lockdowns requires planning, preparation, investment in public health infrastructure, and investment in testing, virology and medical research

She takes issue with some of the commentary as follows:

But the tenor of reporting and public opinion seems to be that “lockdowns were terrible and so we must not have lockdowns again”. This is the wrong lesson. Lockdowns are terrible but so are unchecked deadly pandemics. The question should be “lockdowns were terrible, so how can we prevent the spread of a new pandemic so we never need one again?”.

However the stampede to get back to “normal” has militated against investing in infrastructure and led to a massive reduction in testing and reporting, and the Covid-19 Inquiry has given the government cover (all questions can just be responded to by saying that the Inquiry is still looking at what happened) to actively forget it as quickly as possible. Meanwhile the final module of the Covid-19 Inquiry is not due to conclude until early 2026, which one must hope is before the next pandemic hits. For that next pandemic, as the former Chief Scientific Adviser and other leading experts have said, we are not remotely prepared, and certainly no better prepared than we were in 2020.

It is tempting to think that this is the first major recent instance of a crisis being forgotten to the point where its repetition would be just as devastating the second time around. Which is perhaps a sign of how complete our collective amnesia about 2008 has become.

Make no mistake, 2008 was a complete meltdown of the core of our financial system. People I know who were working in banks at the time described how even the most experienced people around them had no idea what to do. Alistair Darling, Chancellor of the Exchequer at the time, claimed we were hours away from a “breakdown in law and order”.

According to the Commons Library briefing note from October 2018, the Office for Budget Responsibility (OBR) estimates that, as at the end of January 2018, the interventions had cost the public £23 billion overall. The net balance is the result of a £27 billion loss on the RBS rescue, offset by around £4 billion of net gains on the other schemes. Total support in cash and guarantees added up to almost £1.2 trillion, including the nationalisation of Northern Rock (purchased by Virgin Money, which has since been acquired by the Nationwide Building Society) and Bradford & Bingley (sold to Santander), and major stakes in RBS (now NatWest) and Lloyds. Peak government ownership in these banks is shown below:

If you read the Bank of England’s wacky “10 years on” timeline from 2018, you will see a lot about how prepared they are to fight the last war again. As a result, cover has been given to actively forget 2008 as quickly as possible.

Except now various people are arguing that the risks of the next financial crisis are increasing again. The FT reported in January on the IMF’s warnings (from their Global Financial Stability Report from April 2024) about the rise in private credit bringing systemic risks.

Meanwhile Steve Keen (one of the very few who actually predicted the 2008 crisis), in his latest work Money and Macroeconomics from First Principles, for Elon Musk and Other Engineers, has a whole chapter devoted to triggering crises by reducing government debt, which makes the following point:

A serious crisis, triggered by a private debt bubble and crash, has followed every sustained attempt to reduce government debt. This can be seen by comparing data on government and private debt back to 1834.

(By the way, Steve Keen is running a webinar for the Institute and Faculty of Actuaries entitled Why actuaries need a new economics on Friday 4 April which I thoroughly recommend if you are interested)
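
Keen’s claim is essentially an empirical one about two long time series: government debt and private debt, each relative to GDP. Purely as a sketch of how you might eyeball it yourself (this is not Keen’s own method, and the data file, column names and three-year threshold below are hypothetical placeholders of mine), something like the following would flag sustained falls in government debt and show what private debt was doing over the same years:

```python
# A minimal sketch, not Keen's method: given two annual series of
# government and private debt to GDP, flag sustained periods of falling
# government debt and report how private debt moved over the same window.
import pandas as pd

# Hypothetical file with columns: year, gov_debt, private_debt (% of GDP)
df = pd.read_csv("debt_to_gdp.csv", index_col="year")

# Year-on-year changes, in percentage points of GDP
changes = df.diff().dropna()

# "Sustained" is taken (arbitrarily, for illustration) to mean at least
# three consecutive years in which government debt to GDP fell.
falling = changes["gov_debt"] < 0
run_id = (falling != falling.shift()).cumsum()   # label contiguous runs

for _, run in changes[falling].groupby(run_id[falling]):
    if len(run) >= 3:
        years = f"{run.index.min()}–{run.index.max()}"
        gov = run["gov_debt"].sum()
        priv = run["private_debt"].sum()
        print(f"{years}: government debt {gov:+.1f}pp of GDP, "
              f"private debt {priv:+.1f}pp of GDP")
```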

Which brings us to the Spring Statement, which was about (yes, you’ve guessed it!) reducing government debt (or, in its new formulation, “increasing OBR headroom”) and boosting GDP growth. Watching the Chief Secretary to the Treasury, Darren Jones, and Paul Johnson from the IFS nodding along together in the BBC interviews immediately afterwards, you realised how firmly the idea of allowing the OBR to set policy has taken hold. Johnson’s only complaint seemed to be that they appeared to be targeting headroom to the decimal point over other considerations.

I have already written about the insanity of making OBR forecasts the source of your hard spending limits in government. The backdrop to this Statement was already bad enough. As Citizens Advice have said, people’s financial resilience has never been lower.

But aside from the callousness of it all, it does not even make sense economically. The OBR have rewarded the government for sticking to them so closely by halving their GDP growth projections and, in the absence of any new taxes, it seems as if disabled people are being expected to do a lot of the heavy lifting by 2029-30:

Part of this is predicated on throwing 400,000 people off Personal Independence Payments (PIPs) by 2029-30. According to the FT:

About 250,000 people, including 50,000 children, will be pushed into relative poverty by the cuts, according to a government impact assessment.

As Roy Lilley says:

We are left standing. Abandoned, to watch the idiocy of what’s lost… the security, human dignity and wellbeing of our fellow man, woman and their family… everything that matters.

As an exercise in fighting the last war, or, according to Steve Keen, the wars successive governments have been fighting since 1834, it takes some beating. It was litost on steroids for millions of people.

So what does the government think these people are going to fill the income gap with? It will be private debt of course. And for those in poverty, the terms are not good (eg New Horizons has a representative APR of 49%, with rates between 9.3% APR and a maximum of 1,721% APR).
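
To get a feel for what those APRs mean in practice, here is a minimal sketch. I am treating the quoted APRs as effective annual rates, and the £500 loan rolled up for six months with no repayments is my own hypothetical illustration, chosen purely to show the scale:

```python
def monthly_rate(apr: float) -> float:
    # Convert an effective annual rate to the equivalent monthly rate
    return (1 + apr) ** (1 / 12) - 1

def rolled_up_interest(principal: float, apr: float, months: int) -> float:
    # Interest if nothing is repaid and the balance compounds monthly
    return principal * ((1 + monthly_rate(apr)) ** months - 1)

# The 9.3%, 49% (representative) and 1,721% APRs quoted above
for apr in (0.093, 0.49, 17.21):
    print(f"APR {apr:>8.1%}: monthly rate {monthly_rate(apr):.1%}, "
          f"interest on £500 left for 6 months ≈ £{rolled_up_interest(500, apr, 6):,.0f}")
```

On these purely illustrative numbers, even the representative 49% APR adds over a fifth to the balance within six months, and at the top of the quoted range the balance more than quadruples.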

And for those who can currently afford a mortgage (from page 47 of the OBR report):

Average interest rates on the stock of mortgages are expected to rise from around 3.7 per cent in 2024 to a peak of 4.7 per cent in 2028, then stay around that level until the end of the forecast. The high proportion of fixed-rate mortgages (around 85 per cent) means increases in Bank Rate feed through slowly to the stock of mortgages. The Bank of England estimates around one-third of those on fixed rate mortgages have not refixed since rates started to rise in mid-2021, so the full impact of higher interest rates has not yet been passed on.
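
To put a move from 3.7% to 4.7% into household terms, here is a rough sketch using the standard repayment mortgage formula. The £200,000 loan and 25-year term are my own assumptions for illustration and do not come from the OBR report:

```python
# A rough illustration (not the OBR's calculation) of what a 3.7% to 4.7%
# move in the average mortgage rate means for a single household.

def monthly_payment(principal: float, annual_rate: float, years: int) -> float:
    """Standard repayment-mortgage payment formula."""
    i = annual_rate / 12              # approximate monthly rate
    n = years * 12                    # number of monthly payments
    return principal * i / (1 - (1 + i) ** -n)

loan, term = 200_000, 25              # hypothetical £200k loan over 25 years

before = monthly_payment(loan, 0.037, term)
after = monthly_payment(loan, 0.047, term)
print(f"At 3.7%: £{before:,.0f}/month; at 4.7%: £{after:,.0f}/month "
      f"(≈ £{after - before:,.0f}/month more)")
```

On those assumptions the difference is a bit over £100 a month, every month, for each household refixing at the higher rate.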

So, even before considering the future tax increases the FT appears to be expecting, the levels of private debt look like they will shoot up very quickly. And we all know (excluding the government it seems) where that leads…