The Actuary magazine recently had a debate about whether the underlying data or the story you wove around it was more important. I’m not sure there is always a clear distinction between the two, as Dan Davies rather neatly illustrates here, but my view is that, if a binary choice has to be made, it is always going to be the story. And there was a great example of this which popped up recently in the FT.

The FT article was ‘Is university still worth it?’ is the wrong question by John Burn-Murdoch, with his usual great graphs. However, as is sometimes the case, I feel that a very different and more convincing story could be wrapped around the same datasets he is showing us.

The article’s thesis is as follows:

The graduate earnings premium, ie how much more on average graduates earn than non-graduates, has only fallen in the UK as the proportion going to university has risen, whereas it has risen in other countries:

In the UK, we have had much weaker productivity growth than the other comparator countries, and also “the steady ramping up of the minimum wage has squeezed the earnings premium from the lower end too”:

We have also had a much smaller increase in the percentage of managerial and professional jobs than a different group of comparator countries (they haven’t mentioned Germany before), meaning graduates are forced to take lower salaried jobs elsewhere:

So the answer according to the FT? We should focus on economic growth rather than “tweaking” higher education intake and funding. Then graduate earnings would be higher, student loans could be more generous(!) and students would have more chance of getting a good job.

Well perhaps. But here’s a different framing of the same data that I find more persuasive.

Let’s start by addressing that point about the minimum wage. According to the House of Commons Library report on this, the UK’s minimum wage is broadly comparable to that of France and the Netherlands, although higher than Canada’s and much higher than that of the United States. The employers who are the FT’s constituency would obviously like us lower down this particular chart:

The main economic framing here is the progress myth of the UK’s business community: economic growth. All problems can be solved if we can just get more economic growth. Apparently we need more inequality in pay between graduates and non-graduates, which we can get by generating more economic growth. This is honest of them at least, although I don’t see much evidence that the economic growth they crave will go into skilled job creation rather than stock buybacks (according to Motley Fool, “Companies spent $249 billion on stock buybacks in Q3 2025, and $777 billion over the first three quarters of 2025.”).

There are a lot of problems with framing every economic question with respect to economic growth, memorably illustrated by Zack Polanski of the Green Party in this recent video of less than 3 minutes (I strongly recommend you watch it before you read on – click on the read in browser link if you can’t see it):

Economic growth is increasingly without purpose, wasteful of energy and poorly distributed. It is chasing outputs, literally any outputs, whatever the cost to the environment, our health system, our education system, our social support systems and our communities. Looking at the framing above, you can see that economic growth as currently pursued will always treat anything which stops the concentration of wealth amongst the already wealthy (like a higher national minimum wage, or a totally made-up concept like a lower graduate earnings premium, itself a framing trying to make reducing inequality seem undesirable) as a problem. Lack of productivity growth, itself a proxy for this kind of economic growth (because if you ask why we need more productivity the answer is always to get more economic growth), is usually directed as a criticism at “lazy” UK workers, rather than at under-investing and over-extracting UK business owners.

But what if, instead of economic growth, your progress myth was reducing inequality? Or growing equality within the economy?

Source: World Inequality Database wid.world

If you focused on inequality rather than economic growth, then you would find that inequality correlates with everything we say we don’t want. Unlike economic growth, having equality as an aim has the advantage of an actual evidence base for the claim that it improves society:

Source: https://media.equality-trust.out.re/uploads/2024/07/The-Spirit-Level-at-15-2024-FINAL.pdf

If you focused on inequality, then you would be pleased that we have had an increase in our minimum wage. You would think that the same FT article’s admission that UK graduates’ skills levels are higher than those in the United States was more important than something called a graduate earnings premium.

Burn-Murdoch is right to say asking whether university is worth it is the wrong question.

However economic growth is the wrong answer.

And I thought I would probably be stopping there for this week. But then something odd happened. A “Thought Exercise” set in June 2028 “detailing the progression and fallout of the Global Intelligence Crisis” (ie science fiction), published on 23 February, may have tanked the share price of IBM later that day. The fall itself definitely happened: IBM’s shares dropped 13%, their biggest fall since 2000, alongside smaller falls in other tech stocks.

Source: https://markets.ft.com/data/equities/tearsheet/summary?s=IBM:NYQ

According to the FT:

Investors have recently seized on social media rumours and incremental developments by small AI companies to justify further selling, with a widely circulated blog post by Citrini Research over the weekend describing how AI could hypothetically push the US unemployment rate above 10 per cent by 2028, proving the latest catalyst.

The likelihood of the scenario portrayed is difficult to assess, but the speed of the subsequent total economic collapse it describes feels unlikely, if not impossible. However, the fact that the markets are this jittery tells us something, I think. As Carlo Iacono puts it:

We are living through a period in which the gap between “plausible narrative” and “tradeable signal” has collapsed to nearly nothing. When a scenario feels real enough to model, and the underlying anxiety is already there waiting to be organised, fiction and forecast become functionally indistinguishable.

The data underlying the markets hasn’t changed, but the story has. I rest my case.

The disappointing brandy scene from Goldfinger (1964) https://youtu.be/I6COBucJQfE?si=saiV5f80ISSB3FGY

Politics is a bit depressing this week, so I thought instead I would focus on the asymmetry of our attitudes towards different high-octane liquids.

I remember when I first got interested in wine. It was the early noughties and I was out at a restaurant in Cardiff called Le Cassoulet (no longer trading under that name I understand) with my then boss, who liked to hit his expense account pretty hard from time to time. The sommelier seemed to know him quite well and scurried off to get him some particularly old claret to accompany the meal. I think it was from 1972 or thereabouts. I remember noting that it had a different colour (brown) from the red wine I was used to drinking and, when sipped, there were a lot of different flavours and smells competing for my attention – something which I later heard described as “complexity”. From then on I realised that wine drinking could involve rather more than just something nice in a glass to accompany a meal.

The journey of alcoholic drinks from drinks to luxury consumer items and assets is nicely illustrated by the Bond franchise. There are a number of movies we could choose but let’s go for Goldfinger, shall we?

In the disappointing brandy scene from Goldfinger, we have this exchange between M, Bond and the Governor of the Bank of England, Colonel Smithers:

Smithers: “Have a little more of this rather disappointing brandy.”

M: “What’s the matter with it?”

Bond: “I’d say it was a 30-year-old Fine indifferently blended, sir…with an overdose of Bons Bois.”

M: “Colonel Smithers is giving the lecture, 007.”

Now first of all, that is clearly not what the Governor of the Bank of England looks like. As readers of this blog already know, he looks like this:

That scene is also notable for including a brief discussion of how the relative value of gold held at the US and British central banks at the time was used “to establish respectively the true value of the dollar and the pound”. In 1964 this would have been via the London Gold Pool, which ran from 1961 to 1968, and by which a group of eight central banks, including the US Federal Reserve and the Bank of England, agreed to cooperate in maintaining the Bretton Woods System of fixed-rate convertible currencies and defending the gold price. Ian Fleming’s book, written in 1959, predated this arrangement, but the anxieties about the gold market which led to its creation would have been very much around. So we still have the Governor (meeting Bond alone rather than with M) saying (during a lecture which goes on for 10 pages):

We can only tell what the true strength of the pound is, and other countries can only tell it, by knowing the amount of valuta we have behind our currency.

Valuta is a rare word, from American English, for the value of one currency in terms of its exchange rate with another, and perhaps an odd one for the Governor of the Bank of England to use. But it is clear that Bond is sent after Goldfinger primarily for economic reasons (finding a way to smuggle large amounts of gold across borders threatens the Bank of England’s cosy little gold club) rather than because (spoiler alert) Goldfinger thinks nothing of murdering people (quite a lot of people in the case of Operation Grand Slam) who get in his way, cheating at golf, employing butlers with lethal bowlers, slicing through things with gold lasers and planting nuclear devices in Fort Knox. Released shortly after Ian Fleming’s death, it was the last Bond movie he saw in production.

It is the same film in which Bond obsesses about getting his favourite champagne (Dom Perignon 1953 – Bond was also someone not afraid to hit his expense account pretty hard from time to time) chilled to 38°F (3.3°C) before he gets bashed on the back of the head and the girl he is with (Goldfinger’s assistant, Jill Masterson, played by Shirley Eaton) gets sprayed from head to toe with gold paint. Perhaps more than any other brand, Bond linked luxury and high-octane liquids of various kinds.

Skip forward a few decades and some of it has clearly stopped being something to drink at all, becoming instead a very fragile status asset for the very rich to demonstrate their standing to each other. Here are the top prices achieved by wine at auction from one website, 8 of the 10 pre-dating both me and Goldfinger:

Source: vinovest https://www.vinovest.co/blog/25-most-expensive-wines-in-the-world-2026

Contrast this with the way we have treated fossil fuels. As Luke Kemp points out in Goliath’s Curse:

We tend to forget that fossil fuels come primarily from long-dead plants and animals. These organisms died between 360 and 286 million years ago during the Carboniferous period, after capturing sunlight through photosynthesis or other means. It is that fossilised energy that we are consuming. According to one estimate, it would take 400 years of global photosynthesis to power the modern world for one year. It takes ninety-eight tons of organic matter buried during the Carboniferous to become just five litres of petrol. We are now a high-energy Goliath, powered by dead matter.

According to a petrol price checker from earlier this week, the garage closest to me currently sells unleaded petrol for £1.29 a litre. So 98 tonnes of organic matter, curated for around 300 million years, retails for £6.45. That’s less than half the price of a sausage bap and a coffee from Costa via UberEats:

But apparently it’s still not cheap enough.
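To spell out the arithmetic (a minimal sketch, taking Kemp’s 98-tonne figure at face value and using the pump price quoted above):

```python
# Sanity check: Kemp's figure of 98 tonnes of organic matter per 5 litres
# of petrol, combined with an unleaded price of £1.29 per litre.
tonnes_per_5_litres = 98
price_per_litre = 1.29

cost_of_5_litres = 5 * price_per_litre
tonnes_per_pound = tonnes_per_5_litres / cost_of_5_litres

print(f"5 litres of petrol: £{cost_of_5_litres:.2f}")
print(f"Organic matter bought per pound: {tonnes_per_pound:.1f} tonnes")
```

That works out at around 15 tonnes of 300-million-year-old organic matter for every pound spent at the pump.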

Most of the content from this article recommending eternal vigilance, despite the cheapest prices for 5 years, and the claims that “petrol is still 6p too high at the pumps” comes from Howard Cox, founder of FairFuelUK, whose website includes this picture with a not-too-presumptuous-claim-at-all below it:

Even if you weren’t concerned with climate change or the health effects of petrol fumes in the air, this seems like a strange hill for anyone to be dying on. And dying we are. According to the 2025 Global Report of the Lancet Countdown average global heat-related mortality has now reached 546,000 pa, up 63% in just over 20 years:

And that’s just heat. A recent report from the Royal College of Physicians, A Breath of Fresh Air, estimated 30,000 deaths from air pollution each year, of which car emissions form an important component.

By the time even the Bond franchise had started worrying about environmental concerns in 2008 with Quantum of Solace, a Somerset Maugham-ish short story converted into a plot in which a sinister organisation attempts to become the water monopoly in Bolivia through underhand means, the iconic shot of the woman covered in gold had become a female consular employee (Strawberry Fields, played by Gemma Arterton) drowned in oil:

Source: http://007magazine.co.uk/factfiles/factfiles_trivia5.htm

We currently pay between £3.12 and £7.09 per litre in duty on wine, depending on strength, and £0.53 per litre in duty on petrol.
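For scale, here is a quick ratio of those duty figures, per litre of liquid:

```python
# Comparing the duty figures quoted above: wine at £3.12-£7.09 per litre
# depending on strength, petrol at £0.53 per litre.
wine_duty_low, wine_duty_high = 3.12, 7.09
petrol_duty = 0.53

ratio_low = wine_duty_low / petrol_duty
ratio_high = wine_duty_high / petrol_duty

print(f"Wine carries {ratio_low:.1f}x to {ratio_high:.1f}x "
      f"the duty of petrol, litre for litre")
```

So even the weakest wine carries nearly six times the duty of petrol, and the strongest more than thirteen times.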

Our attitude to different types of high-octane liquids has clearly been nuts in all kinds of ways for a long time. But it is just part of our political frostbite at the moment: we allow our living organisations and institutions to remain frozen in time because we have always done things that way, regardless of the living tissue we are killing in the process. From the endless cycle of public inquiries and ignored recommendations, to our use of economics to rationalise things we have already decided to do, to batting on with traditional exams: it seems we are just going to do what we are going to do. And freezing fuel duty now looks like it needs to be added to that list.

We can laugh at Trump for accepting an award of “undisputed champion of beautiful clean coal” by the Washington Coal Club and legislating that black is now white by revoking the Environmental Protection Agency’s scientific ruling from 2009 about the harms of climate change. But Trump does at least think he needs a reason to support the fossil fuel industry, even if he needs to make one up. We are just doing it because our politics has gangrene.

I have spent many days in rooms with groups of men (always men) anxious about their future income, where I advised them on how much to ask their companies for. Most of my clients as a scheme actuary were trustees of pension schemes of companies which had seen better days, and which were struggling to make the necessary payments to secure the benefits already promised, let alone those to come. One by one, those schemes stopped offering those future benefits and just concentrated on meeting the bill for benefits already promised. If an opportunity came to buy those benefits out with an insurance company (which normally cost quite a bit more than the kind of “technical provisions” target the Pensions Regulator would accept), I lobbied hard to make it happen. In many cases we were too late, though: the company went bust and we moved the scheme into the Pension Protection Fund instead. That was the life of a pensions actuary in the West Midlands in the noughties. I was often “Mr Good News” in those meetings, an ironic reference to the man constantly moving the goalposts for how much money the scheme needed to meet those benefit bills. I saw my role as pushing the companies towards buyout funding if at all possible. None of the schemes I advised had a company behind them which could sustain ongoing pension costs long term. I would listen to the wishful thinking and the corporate optimism, smile and push for the “realistic” option of working towards buyout.

Then I went to work at a university, and found myself, for the first time since 2003, a member of an open defined benefit pension scheme. It was (and still is) a generous scheme, but was constantly complained about by the university lecturers who comprised most of its membership. I didn’t see any way that it was affordable for employers which seemed to struggle to employ enough lecturers, were very reluctant to award anything other than fixed term contracts, and had an almost feudal relationship with their PhD students and post docs. Staff went on strike about plans to close the scheme to future accrual and replace it with the most generous money purchase scheme I had ever seen. I demurred and wrote an article called Why I Won’t Strike. I watched in wonder when even actuarial lecturers at other universities enthusiastically supported the strike. However, over 10 years later, that scheme – the UK’s biggest – is still open. And I gained personally from continued active membership until 2024.

Now don’t get me wrong, I still think the UK university sector is wrong to maintain, uniquely amongst its peers, a defined benefit scheme. The funding requirement for it has been inflated by continued accrual over the last 8 years, and therefore so has the risk it will spike at just the time when it is least affordable, a time which may soon be approaching with 45% of universities already reporting deficits. However, the strike demonstrated how important the pension scheme was to staff, something the constant grumbling before the strike had led university managers to doubt. And, once the decision had been made to keep the scheme open to future accrual, I had no more to add as an actuary. Other actuaries had the responsibility for advising on funding – in fact quite a lot of others, as the UCU was getting its own actuarial advice alongside that which the USS was getting – but my involvement was now just that of a member, albeit one with a heightened awareness of the risks the employers were taking.

The reason I bring this up is because I detected something of the same position as my lonely one from the noughties amongst the group of actuaries involved in the latest joint report from the Institute and Faculty of Actuaries and the University of Exeter about the fight to maintain planetary climate solvency.

It very neatly sets out the problem: the whole system of climate modelling and policy recommendations to date has almost certainly been underestimating how much warming is likely to result from a given increase in the level of carbon dioxide in the atmosphere. Therefore all the “carbon budgets” (the amounts we can emit before we hit particular temperature levels) have been assumed to be higher than they actually are, and estimates for when we exhaust them have given us longer than we actually have. This is due to the masking effects of particulate pollution in the air, which has resulted in around 0.5°C less warming than we would otherwise have had by now. However, efforts to remove sulphur from oil and coal fuels (themselves important for human health) have acted to reduce this aerosol cooling effect. The goalposts have moved.
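The mechanism behind those moved goalposts can be sketched with a toy calculation. This is emphatically not the report’s model, and every number in it is an illustrative placeholder; it just shows how a higher assumed warming per tonne of CO2 emitted (a higher TCRE, the transient climate response to cumulative emissions) shrinks both the remaining budget and, at any fixed emissions rate, the time left to exhaust it.

```python
# Toy illustration (not the report's model): remaining carbon budget is
# roughly (target warming - warming so far) / TCRE, where TCRE is the
# transient climate response to cumulative emissions in degC per 1000 GtCO2.
# All numbers below are illustrative placeholders, not the report's figures.

def remaining_budget_gtco2(target_c: float, current_c: float,
                           tcre_per_1000_gtco2: float) -> float:
    """Emissions headroom (GtCO2) before the target temperature is reached."""
    return (target_c - current_c) / tcre_per_1000_gtco2 * 1000

# Same target and same warming to date, two different sensitivity assumptions
budget_low_tcre = remaining_budget_gtco2(2.0, 1.3, 0.45)
budget_high_tcre = remaining_budget_gtco2(2.0, 1.3, 0.65)

print(f"Budget with lower TCRE:  {budget_low_tcre:.0f} GtCO2")
print(f"Budget with higher TCRE: {budget_high_tcre:.0f} GtCO2")
```

Raising the assumed sensitivity from 0.45 to 0.65 in this toy example cuts the remaining budget by roughly 30%, which is the shape of the revision the report argues the masking effect has been hiding.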

An additional reference I would add to the excellent references in the report is Hansen’s Seeing the Forest for the Trees, which concisely summarises all the evidence to suggest the generally accepted range for climate sensitivity is too low.

So far, so “Mr Good News”. And for those who say this is not something actuaries should be doing because they are not climate experts, this is exactly what actuaries have always done. We started the profession by advising on the intersection between money and mortality, despite not being experts in any of the conditions which affected either the buying power of money or the conditions which affected people’s mortality. We could however use statistics to indicate how things were likely to go in general, and early instances of governments wasting quite a lot of money without a steer from people who understood statistics got us that gig, and a succession of other related gigs over the years ahead.

The difficult bit is always deciding what course of action you want to encourage once you have done the analysis. This was much easier in pensions, as there was a regulatory framework to work to. It is much harder when, as in this case, it involves proposing changes in behaviour which are ingrained into our societies. If university lecturers can oppose something that is clearly in the long term financial interests of their employers, and push for something which makes their individual employers less secure, then how much more will the general public resist change when they can see no good reason for it?

And in this regard it feels like a report mostly focused on the finance industry. The analogies it makes with the 2008 financial crash, the constant comparisons with the solvency regulatory regimes of insurers in particular, and even the framing of the need to mitigate climate change in order to support economic growth are all couched in terms familiar to people working in the finance sector. This has, perhaps predictably, meant that the press coverage to date has mostly been concentrated in the pension, insurance and investment areas:

However, in the case of the 2008 crash, the causes could be addressed by restricting practices amongst the financial institutions which had just been bailed out and were therefore in no position to argue. Many of those restrictions have been loosened since, and I think many amongst the general public would question whether the decision to bail out the banks and impose austerity on everyone else is really a model to follow for other crises.

The next stage will therefore need to involve breaking out of the finance sector to communicate the message more widely, perhaps focusing on the first point in the proposed Recovery Plan: developing a different mindset. As the report says:

This challenge demands a shift in perspective, recognising that humanity is not separate from nature but embedded in it, reliant on it and, furthermore, now required to actively steward the Earth system.
To maintain Planetary Solvency, we need to put in place mechanisms to ensure our social, economic, and political systems respect the planet’s biophysical limits, thus preserving or restoring sufficient natural capital for future generations to continue receiving ecosystem services…

…The prevailing economic system is a risk driver and requires reform, as economic dependency on nature is unrecognised in dominant economic theory which incorrectly assumes that natural capital is substitutable by manufactured capital. A particular barrier to climate action has been lobbying from incumbents and misinformation which has contributed to slower than required policy implementation.

By which I assume they mean this type of lobbying:

And this is where it gets very difficult, because actuaries really do not have anything to add at this point. We are just citizens with no particular expertise about how to proceed, just a heightened awareness of the dangers we are facing if we don’t act.

But we can also, as the report does, point out that we still have agency:

Although this is daunting, it means we have agency – we can choose to manage human activity to minimise the risk of societal disruption from the loss of critical support services from nature.

This point chimes with something else I have been reading recently (and which I will be writing more about in the coming weeks): Samuel Miller McDonald’s Progress. As he says “never before have so many lives, human and otherwise, depended on the decisions of human beings in this moment of history”. You may argue the toss on that with me, which is fine, but, in view of the other things you may be scrolling through either side of reading this, how about this for a paragraph putting the whole question of when to change how we do things in context:

We are caught in a difficult trap. If everything that is familiar is torn down and all the structures that govern our day-to-day disintegrated, we risk terrible disorder. We court famines and wars. We invite power vacuums to be filled by even more brutal psychopaths than those who haunt the halls of power now. But if we don’t, if we continue on the current path and simply follow inertia, there is a good chance that the outcome will be far worse than the disruption of upending everything today. Maintaining status-quo trajectories in carbon emissions, habitat destruction and pollution, there is a high likelihood of collapse in the existing structure anyway. It will just occur under far worse ecological conditions than if it were to happen sooner, in a more controlled way. At least, that is what all the best science suggests. To believe otherwise requires rejecting science and knowledge itself, which some find to be a worthwhile trade-off. But reality can only be denied for so long. Dream at night we may, the day will ensnare us anyway.

One thing I never did in one of those rooms full of anxious men was to stand up and loudly denounce the pensions system we were all working within. Actuaries do not behave like that generally. However we have a senior group of actuaries, with the endorsement of their profession, publishing a report that says things like this (bold emphasis added by me):

Planetary Solvency is threatened and a recovery plan is needed: a fundamental, policy-led change of direction, informed by realistic risk assessments that recognise our current market-led approach is failing, accompanied by an action plan that considers broad, radical and effective options.

This is not a normal situation. We should act accordingly.

Source: https://xkcd.com/2415/ licence at: https://creativecommons.org/licenses/by-nc/2.5/

Happy new year all! New year, new banner, courtesy of my brilliant daughter who presented me with a plausible 3-D model of my very primitive cartoon of a reverse-centaur over Christmas. And I thought I would kick off with a relatively uncontentious subject: examinations!

“Back to normal!” That was the cry throughout education when the pandemic had finally ended enough for us to start cramming students into rooms again. The universities had all leveraged themselves to the maximum, and perhaps beyond, to add to the built estate, so as to entice students in both the overseas and the uncapped domestic markets to their campuses, and one by-product of this was that they had plenty of potential examination halls. So let’s get away from all of that electronic remote nonsense and get everyone in a room together, where you can keep an eye on them and stop them cheating. This united the purists who yearned for the days of 10% of the cohort turning up for elite education via chalk and talk rather than the 50% we have today, senior management who needed to justify the size of the built estate, and politicians who kept referring to traditional exams in an exam hall as the “gold standard”.

So, in a time when students have access to information, tools, how-to videos of everything imaginable, the entire output of the greatest minds of thousands of years of human history, as well as many of the less than great minds – in short, anything which has ever caught anyone’s attention and been committed to some form of media – in this of all times, we want to sort the students into categories for the existing job market based on how they answer academic questions about what they can remember unaided about the content of their lecture courses and reading lists, with a biro on a pad of paper perched precariously on a tiny wooden table surrounded by hundreds of other similar scribblers, for a set period of time, as minders wander the floors like Victorian factory owners.

And for institutions that thought the technology we fast-tracked for education delivery and assessment in the pandemic would surely be part of education’s future? Or perhaps they just can’t afford to borrow half a billion or have the land available to construct more cathedrals of glass and brick to house more examination halls? Simple! We just create the conditions for that gold standard examination right there in the student’s own bedroom or the company they work for!

There are 54 pages to the Institute and Faculty of Actuaries’ (IFoA’s) guidance for remotely invigilated candidates. It covers everything from the minimum specification of equipment you need, including the video camera to watch your every movement and the microphone to pick up every sound you make, to the proprietary spying software (called “Guardian Browser”) you will need to download onto your own computer, how to prove who you are to the system, what you are allowed to have in your bedroom with you and even how you need to sit for the duration of the exam (with a maximum of two 5-minute breaks) to ensure the system has sufficient visibility of you at all times:

These closed book remote arrangements replaced the previous open book online exams which most institutions operated during the pandemic. The reason given was that the exam results shot up so much that widespread cheating was suspected and the integrity of the qualifications was at risk. The IFoA’s latest assessment regulations can be found here.

The belief in examinations is very widespread. A couple of months ago I was discussing the teacher assessments which replaced them briefly during the pandemic with a secondary school business studies teacher. He took great pride in the fact that he based his assessments solely on mock results, ie an assessment carried out before all of the syllabus had been covered and when students were unaware it would be the final assessment. But this was still, in his mind, more “objective” than any opinion he might have formed of his own students.

If a large language model can perform enormously better in an examination than your students can without it, what it actually demonstrates is that the traditional examination is woefully unprepared for the future. As Carlo Iacono puts it:

The machines learned from us.

They learned what we actually valued and it turned out to be different from what we said we valued.

We said we valued originality. We rewarded conformity to genre. We said we valued depth. We measured surface features. We said we valued critical thinking. We gave higher marks to confident assertion than to honest uncertainty.

So now the machines produce what the world trained them to produce: fluent, confident, passable output that fits the shapes we reward.

And we’re horrified. Not because they stole something from us. Because they showed us what the systems were selecting for all along.

The scandal isn’t that a model can imitate student writing. The scandal is that we built an educational and professional culture where imitation passes as competence, and then acted shocked when a machine learned to imitate faster.

We trained the incentives. We trained the rubrics. We trained the career ladders.

The pattern recognition which gets you through most formal examinations is just too cheap and easy to automate now. It is no longer a useful skill, even by proxy. It might as well be Hogwarts’ sorting hat for all the use it is in a post-scarcity education world. If the machines have worked out how to unlock the elaborate captcha system we have placed around our gold standard assessments, an arms race of security measures protecting a range of tests which look increasingly narrow compared to the capabilities which matter does not seem like the way to go.

What instead we are doing is identifying which students are prepared to put themselves through literally anything to get the qualification. Companies like students like that. They will make ideal reverse-centaurs. The description of life as a reverse-centaur even sounds like the experience of a proctored exam:

Like an Amazon delivery driver, who sits in a cabin surrounded by AI cameras, that monitor the driver’s eyes and take points off if the driver looks in a proscribed direction, and monitors the driver’s mouth because singing isn’t allowed on the job, and rats the driver out to the boss if they don’t make quota.

The driver is in that van because the van can’t drive itself and can’t get a parcel from the curb to your porch. The driver is a peripheral for a van, and the van drives the driver, at superhuman speed, demanding superhuman endurance. But the driver is human, so the van doesn’t just use the driver. The van uses the driver up.

Source: Cory Doctorow, Enshittification

And, even if you are OK with all of that, these privacy intrusions don’t even work to prevent cheating! The ACCA, the world’s largest accounting professional body, has just announced it is stopping all remote exams after giving up the arms race against the cheats, who were seemingly facilitated in some cases by their Big Four employers lying about what had gone on.

Actuarial exams started in 1850, only two years after the Institute of Actuaries was established (Dermot Grenham wrote about them recently here). Such was actuaries’ keenness to institute examinations that this pre-dated the first examination boards by several years: the Society of Arts (the Society for the encouragement of Arts, Manufactures and Commerce, later the Royal Society of Arts) in 1856; the University of Oxford Delegacy of Local Examinations in 1857; and the University of Cambridge Local Examinations Syndicate (UCLES) in 1858. However, it was the massive expansion of the middle classes, as the Industrial Revolution disrupted society in so many ways, that created the need for a new sorting hat beyond the capacity of the oral examinations which had previously been the norm.

Now people seem to be lining up to drag everyone back into the examination hall. Any suggestion of a retreat from traditional exams is met by howls of outrage about lack of “rigour” from people like Sam Leith at The Spectator. However, in my view, they are wrong.

Yes, of course you can isolate students from every intellectual aid they would normally use, as centaurs, to augment their performance; limit the sources they can access; force them to rely entirely on their own memories; and put them under significant time pressure. You will definitely reduce marks by doing that. So that has made it harder, and therefore more rigorous and more objective, right?

Well, according to the Merriam-Webster dictionary, rigorous is a synonym of rigid, strict or stringent. However, while all of these words mean extremely severe or stern, rigorous implies the imposition of hardship and difficulty. So promoting exams above all as an exercise in rigour reveals their true nature: a kind of punishment beating in written form, for which the prize is whatever it qualifies you for. Suddenly the sorting hat looks relatively less arbitrary.

The problems of traditional exams are well known, but the most important, in my view, is that they measure a limited range of abilities and are therefore unlikely to show what students can really achieve. Harder does not mean more objective. It is like deciding who can act by throwing students out, one at a time, in front of a baying mob of, let’s say for argument, readers of The Spectator. Sure, some of the students might be able to calm the crowd; some may even be able to redirect their anger towards a different target. But are the people who can play Mark Antony for real necessarily the best all-round actors? And has someone who can only stand frozen on the spot under those circumstances really proved that they could never act well?

It also means that education ends a month or more before the exams, to allow for the appropriate cramming. Then all of the teaching staff are engaged in the extended exercise of marking, checking and moderating what has been written in answer to academic questions about what the students can remember, unaided, of their lecture courses and reading lists, scribbled with a biro on a pad of paper perched precariously on a tiny wooden table, surrounded by hundreds of other similar scribblers, for a set period of time, as minders wander the floors like Victorian factory owners. But what if instead the assessment was part of the teaching process? What if students felt that their assessment had been a meaningful part of their educational experience? What if, instead of arguing the toss over whether they scored 68% or 70% on an assessment, students could see for themselves whether they had demonstrated mastery of their subject?

One model of assessment which is getting a lot of attention at the moment, and one I am a big fan of, having used it at the University of Leicester on some modules, is interactive oral assessment: students meet with a lecturer or tutor, individually or in a small group, and answer questions about work they have already submitted. It is a highly demanding form of assessment, for both the students and the assessors, but it means the final assessment is done with the student present. With careful probing from the assessors, who will obviously need to have done a close reading of the project work beforehand, you can be highly confident of the degree to which the student understands the work they have submitted. It also allows the student to submit a piece of work of more complexity and ambition than can be accommodated by a traditional exam. And, if the interviews are carried out online, it needn’t take any more time than the marking of a traditional exam: something which all the technology we developed through the pandemic allows us to do, without the need for spyware.

There are other models which also assess the technological centaurs we wish our students to become, rather than the reverse-centaurs we are currently dooming too many of them to become. It is looking like it may be time to tell students to stop writing and put down their pens on the traditional exam. And perhaps the actuarial profession, which led us into the era of professional written examinations so enthusiastically 175 years ago, might now want to take the lead in navigating our way out of them?

New (left) and old (right) Naiku shrines during the 60th sengu at Ise Jingu, 1973, via Bock 1974

In his excellent new book, Breakneck, Dan Wang tells the story of the high-speed rail links which started to be constructed in 2008 between San Francisco and Los Angeles and between Beijing and Shanghai respectively. Both routes would be around 800 miles long when finished. The Beijing–Shanghai line opened in 2011 at a cost of $36 billion. To date, California has built only a small stretch of its line, as yet nowhere near either Los Angeles or San Francisco, and the latest estimate of the final bill is $128 billion. Wang uses this, amongst other examples, to draw a distinction between the engineering state of China “building big at breakneck speed” and the lawyerly society of the United States “blocking everything it can, good and bad”.

Europe doesn’t get much of a mention, other than to be described as a “mausoleum”, which sounds rather JD Vance, and there is quite a lot about this book that I disagree with strongly, to which I will return. However, there is also much to agree with, and none more so than when Wang talks about process knowledge.

Wang tells another story, of Ise Jingu in Japan. Every 20 years, exact copies of Naiku, Geku, and 14 other shrines here are built on vacant adjacent sites, after which the old shrines are demolished. Altogether 65 buildings, bridges, fences, and other structures are rebuilt this way. They were first built in 690; in 2033, they will be rebuilt for the 63rd time. The structures are built each time with the original 7th-century techniques, which involve no nails, just dowels and wood joints. Staff have a 200-year tree-planting plan to ensure enough cypress trees are planted to make the surrounding forest self-sufficient. The 20-year intervals between rebuildings are the length of a generation, the older passing on the techniques to the younger.

This, rather like the oral tradition of folk stories and songs, which were passed on by each generation as contemporary narratives until they were written down and fixed in time, quickly appearing old-fashioned thereafter, is an extreme example of process knowledge. What is being preserved at Ise Jingu is not the Trigger’s Broom of temples, but the practical knowledge of how to rebuild them as they were originally built.

Trigger’s Broom. Source: https://www.youtube.com/watch?v=BUl6PooveJE

Process knowledge is the know-how of your experienced workforce that cannot easily be written down. It can develop where such a workforce works closely with researchers and engineers, creating feedback loops which can also accelerate innovation. Wang contrasts Shenzhen in China, where such a community exists, with Silicon Valley, where it doesn’t, forcing the United States to have such technological wonders as the iPhone manufactured in China.

What happens when you don’t have process knowledge? Well, one example would be our nuclear industry, where lack of experience with pressurised water reactors has slowed down the development of new power stations and required us to rely considerably on French expertise. There are many other technical skill shortages.

China has recognised the supreme importance of process knowledge as compared to the American concern with intellectual property (IP). IP can of course be bought and sold as a commodity and owned as capital, whereas process knowledge tends to rest within a skilled workforce.

This may then be the path to resilience for the skilled workers of the future in the face of the AI-ification of their professions. Companies are being sold AI systems for many things at the moment, some of which will clearly not work with few enough errors, or without so much “human validation” (a lovely phrase a good friend of mine, actively involved in integrating AI systems into his manufacturing processes, used recently), to be deemed practical. For early-career workers entering these fields, the demonstration of appropriate process knowledge, or the ability to develop it very quickly, may be the key to surviving the AI roller coaster they face over the next few years: actionable skills and knowledge which allow them to manage such systems rather than being managed by them. To be a centaur rather than a reverse-centaur.

Not only will such skills make you less likely to lose your job to an AI system, they will also increase your value on the employment market: the harder these skills and knowledge are to acquire, the more valuable they are likely to be. But whereas in the past, in a more static market, merely passing your exams and learning coding might have been enough for an actuarial student, for instance, the dynamic situation which sees everything that can be written down disappearing into prompts in some AI system will leave such roles unprotected.

Instead, it will be the knowledge of how people are likely to respond to what you say in a meeting or write in an email or report, and the skill to strategise around those things, knowing what to do when the rules run out and situations are genuinely novel, ie putting yourself in someone else’s shoes and being prepared to make judgements. It will be the knowledge of what matters in a body of data, putting the pieces together in meaningful ways, and the skills to make that obvious to your audience. It will be the knowledge of what makes everyone in your team tick, and the skills to use that knowledge to motivate them to do their best work. It will ultimately be about maintaining independent thought: the knowledge of why you are where you are, and the skill to recognise what you can do for the people around you.

These have not always been seen as entry-level skills and knowledge for graduates, but they are increasingly going to need to be, as the requirement grows to plug you in further up an organisation, if at all, as that organisation pursues its diamond strategy or something similar. And alongside all this you will need a continuing professional self-development programme on steroids: understanding the systems you are working with as quickly as possible, and then understanding them all over again when they get updated; demanding evidence and transparency; and maintaining appropriate uncertainty when certainty would be more comfortable for the people around you, so that you can manage these systems into the areas where they can actually add value and out of the areas where they can cause devastation. It will be more challenging than transmitting the knowledge to build a temple out of hay and wood 20 years into the future, and it will be continuous. Think of it as the Trigger’s Broom Process of Career Management, if you like.

These will be essential roles for our economic future: to save these organisations from both themselves and their very expensive systems. It will be both enthralling and rewarding for those up to the challenge.

The warehouse at the end of Raiders of the Lost Ark

In the year when I was born, Malvina Reynolds recorded a song called Little Boxes when she was a year younger than I am now. If you haven’t heard it before, you can listen to it here. You might want to listen to it while you read the rest of this.

I remember the first time I felt panic during the pandemic. It was a couple of months in, and we had been working very hard: putting our teaching processes online, consulting widely about appropriate remote assessments and getting agreement from the Institute and Faculty of Actuaries (IFoA) for our suggested approach at Leicester, checking in with our students, some of whom had become very isolated as a result of lockdowns, and a million other things. I was just sitting at my kitchen table and suddenly I felt tears welling up and I was unable to speak without my voice breaking down. It happened at intervals after that, usually during a quiet moment when I, consciously or unconsciously, had a moment to reflect on the enormity of what was going on. I could never point to anything specific that triggered it, but I do know that it has been a permanent change in me, and that my emotions have been very much closer to the surface ever since. I felt something similar again this morning.

What is going on? Well, I hadn’t been able to answer that satisfactorily until recently, when I read an article by David Runciman in the LRB from nine years ago, when Donald Trump was first elected POTUS. I am not sure that everything in the article has withstood the test of time, but in it Runciman makes the case for Trump being the result of the people wanting “Trump to shake up a system that they also expected to shield them from the recklessness of a man like Trump”. And this part looks prophetic:

[Trump is]…the bluntest of instruments, indiscriminately shaking the foundations with nothing to offer by way of support. Under these conditions, the likeliest response is for the grown-ups in the room to hunker down, waiting for the storm to pass. While they do, politics atrophies and necessary change is put off by the overriding imperative of avoiding systemic collapse. The understandable desire to keep the tanks off the streets and the cashpoints open gets in the way of tackling the long-term threats we face. Fake disruption followed by institutional paralysis, and all the while the real dangers continue to mount. Ultimately, that is how democracy ends.

And it suddenly hit me that this was something I had indeed taken for granted my whole life until the pandemic came along. The only thing that had ever looked like toppling society itself was the prospect of a nuclear war. Otherwise it seemed that our political system was hard to change and impossible to kill.

And then the pandemic came along, and we saw governments, national and local, digging mass graves and then filling them in again, and setting aside vast arenas for people to die in before quietly closing them again. Rationing of food and other essentials was left to the supermarkets to administer, as were the massive snaking socially-distanced queues around their car parks. Seemingly arbitrary sets of rules suddenly started appearing at intervals about how and when we were allowed to leave the house and what we were allowed to do when out, and also how many people we could have in our houses and where they were allowed to come from. Most businesses were shut and their employees put on the government’s payroll. We learned which of us were key workers and spent a lot of time worrying about how we could protect the NHS, which we clapped for every Thursday. It was hard to maintain the illusion that society still provided solid ground under our feet, particularly if we didn’t have jobs which could be moved online. Whoever you were, you had to look down at some point, and I think now that I was having my Wile E. Coyote moment.

The trouble is, once you have looked down, it is hard to put that back in a box. At least I thought so, although there seems to have been a lot of putting things in boxes going on over the last few years. The UK Covid-19 Inquiry has made itself available online via a YouTube channel, but you might have thought a Today at the Inquiry slot on terrestrial TV would have been more appropriate, rather than coverage only when famous people are attending. What we do know is that Patrick Vallance, Chief Scientific Advisor throughout the pandemic, has said that another pandemic is “absolutely inevitable” and that “we are not ready yet” for such an eventuality. Instead we have been busily shutting that particular box.

The biggest box of course is climate change. We have created a really big box for that called the IPCC. As the climate conferences migrate to ever more unapologetic petro-states, protestors are criminalised and imprisoned and emissions continue to rise, the box for this is doing a lot of work.

And then there are all the NHS boxes. As Roy Lilley notes:

If inquiries worked, we’d have the safest healthcare system in the world. Instead, we have a system addicted to investigating itself and forgetting the answers.

But perhaps the days of the box are numbered. The box Keir Starmer constructed to contain the anger about grooming gangs, anger which the previous seven-year-long box had been unable to completely envelop, now appears to be on the edge of collapse. And the Prime Minister himself was the one expressing outrage when a perfectly normal British box suddenly didn’t make the decision he was obviously expecting; versions of that box had been giving authority to policing decisions since at least the Local Government (Review of Decisions) Act 2015, although the original push to develop such systems stemmed from the Hillsborough and Heysel disasters in 1989 and 1985 respectively. That box now appears to be heading for recycling if Reform UK come to power, which is, of course, rather difficult to do in Birmingham at the moment.

But what is the alternative to the boxes? At the moment it does not look like it involves confronting our problems any more directly. As Runciman reflected on the second Trump inauguration:

Poor Obama had to sit there on Monday and witness the mistaking of absolutism for principle and spectacle for politics. I don’t think Trump mistakes them – he doesn’t care enough to mind what passes for what. But the people in the audience who got up and applauded throughout his speech – as Biden and Harris and the Clintons and the Bushes remained glumly in their seats – have mistaken them. They think they will reap the rewards of what follows. But they will also pay the price.

David Allen Green’s recent post on BlueSky appears to summarise our position relative to that of the United States very well:

To Generation Z: a message of support from a Boomer

So you’ve worked your way through school and now university, developing the skills you were told would always be in high demand, credentialising yourself as a protection against the vagaries of the global economy. You may have serious doubts about ever being able to afford a house of your own, particularly if your area of work is very concentrated in London…

…and you resent the additional tax that your generation pays to support higher education:

Source: https://taxpolicy.org.uk/2023/09/24/70percent/

But you still had belief in being able to operate successfully within the graduate market.

A rational, functional graduate job market would assess your skills and competencies against the desired attributes of those currently performing the role, and make selections accordingly. That is a system both companies and graduates can plan for.

It is very different from a Rush. The first phenomenon known as a Rush was the Californian Gold Rush of 1848–55, although the capitalist phenomenon of transforming an area to facilitate intensive production probably dates from sugar production in Madeira in the 15th century. There have been many since, but all are neatly captured by this Punch cartoon from 1849:

A Rush is a big deal. The Californian Gold Rush resulted in the creation of California, now the 5th largest economy in the world. But when it comes to employment, a Rush is not like an orderly jobs market. As Carlo Iacono describes, in an excellent article on the characteristics of the current AI Rush:

The railway mania of the 1840s bankrupted thousands of investors and destroyed hundreds of companies. It also left Britain with a national rail network that powered a century of industrial dominance. The fibre-optic boom of the late 1990s wiped out about $5 trillion in market value across the broader dot-com crash. It also wired the world for the internet age.

A Rush is a difficult and unpredictable place to build a career, with a lot riding on dumb luck as much as on any personal characteristics you might have. There is very little you can count on in a Rush. This one is even less predictable because, as Carlo also points out:

When the railway bubble burst in the 1840s, the steel tracks remained. When the fibre-optic bubble burst in 2001, the “dark fibre” buried in the ground was still there, ready to carry traffic for decades. These crashes were painful, but they left behind durable infrastructure that society could repurpose.

Whereas the 40–60% of US real GDP growth in the first half of 2025 explained by investment in AI infrastructure isn’t like that:

The core assets are GPUs with short economic half-lives: in practice, they’re depreciated over ~3–5 years, and architectures are turning over faster (Hopper to Blackwell in roughly two years). Data centres filled with current-generation chips aren’t valuable, salvageable infrastructure when the bubble bursts. They’re warehouses full of rapidly depreciating silicon.
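To put rough numbers on the depreciation point, here is a back-of-the-envelope sketch (all figures hypothetical, chosen only to illustrate the contrast between short-lived chips and long-lived infrastructure):

```python
# Back-of-the-envelope straight-line depreciation (all figures hypothetical).

def book_value(cost, useful_life_years, age_years):
    """Remaining book value under straight-line depreciation, floored at zero."""
    remaining = cost * (1 - age_years / useful_life_years)
    return max(remaining, 0.0)

gpu_spend = 100.0    # $bn of current-generation chips (illustrative figure)
fibre_spend = 100.0  # $bn of buried fibre (illustrative figure)

# After 4 years, chips written down over ~4 years are worth nothing on the
# books, while fibre written down over ~25 years retains most of its value.
print(book_value(gpu_spend, 4, 4))     # 0.0
print(book_value(fibre_spend, 25, 4))  # ~84
```

The same spend, the same elapsed time; the only difference is the asset's useful life, which is why a burst AI bubble leaves behind so much less than the railway or fibre-optic busts did.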

So today’s graduates are certainly going to need resilience, but that is just what their future employers will be requiring of them anyway. They also need to build their own support structures, which are going to see them through the massive disruption which is coming whether or not the enormous bet on AI is successful. The battle to be centaurs rather than reverse-centaurs, as I set out in my last post (or as Carlo Iacono describes beautifully in his discussion of the legacy of the Luddites here), requires these alliances: to stop thinking of yourselves as being in competition with each other and start thinking of yourselves as being in competition for resources with my generation.

I remember when I first realised my generation (late Boomer, just before Generation X) was now making the weather. I had just sat a 304 Pensions and Other Benefits actuarial exam in London (now SP4 – unsuccessfully, as it turned out), and nipped into a matinee of Sam Mendes’ American Beauty and watched the plastic bag scene. I was 37 at the time.

My feeling is that, despite our increasingly strident efforts to avoid it, our generation is now deservedly losing power, and making reverse-centaurs of your generation is a last-ditch attempt to remain in control. It is like the scene in another movie, Triangle of Sadness, where the elite are swept onto a desert island and expect the servant, the only one with survival skills in such an environment, to carry on being their servant.

Don’t fall for it. My advice to young professionals is pretty much the same as it was to actuarial students last year on the launch of chartered actuary status:

If you are planning to join a profession to make a positive difference in the world, and that is in my view the best reason to do so, then you are going to have to shake a few things up along the way.

Perhaps there is a type of business you think the world is crying out for but it doesn’t know it yet because it doesn’t exist. Start one.

Perhaps there is an obvious skill set to run alongside your professional one which most of your fellow professionals haven’t realised would turbo-charge the effectiveness of both. Acquire it.

Perhaps your company has a client whose shoes no one has taken the time to put themselves in, so that no one communicates with them in a way they will properly understand and value. Be that person.

Or perhaps there are existing businesses which are struggling to manage their way in changing markets and need someone who can make sense of the data that is telling them this. Be that person.

All while remaining grounded in whichever community you have chosen for yourself. Be the member of your organisation or community who makes it better by being there.

None of these are reverse-centaur positions. Don’t settle for anything less. This is your time.

In 2017, I was rather excitedly reporting about ideas which were new to me at the time regarding how technology or, as Richard and Daniel Susskind referred to it in The Future of the Professions, “increasingly capable machines” were going to affect professional work. I concluded that piece as follows:

The actuarial profession and the higher education sector therefore need each other. We need to develop actuaries of the future coming into your firms to have:

  • great team working skills
  • highly developed presentation skills, both in writing and in speech
  • strong IT skills
  • clarity about why they are there and the desire to use their skills to solve problems

All within a system which is possible to regulate in a meaningful way. Developing such people for the actuarial profession will need to be a priority in the next few years.

While all of those things are clearly still needed, it is becoming increasingly clear to me now that they will not be enough to secure a job as industry leaders double down.

Source: https://www.ft.com/content/99b6acb7-a079-4f57-a7bd-8317c1fbb728

And perhaps even worse than the threat of not getting a job immediately following graduation is the threat of becoming a reverse-centaur. As Cory Doctorow explains the term:

A centaur is a human being who is assisted by a machine that does some onerous task (like transcribing 40 hours of podcasts). A reverse-centaur is a machine that is assisted by a human being, who is expected to work at the machine’s pace.

We have known about reverse-centaurs since at least Charlie Chaplin’s Modern Times in 1936.

By Charlie Chaplin – YouTube, Public Domain, https://commons.wikimedia.org/w/index.php?curid=68516472

Think Amazon driver or worker in a fulfilment centre, sure, but now also think of highly competitive and well-paid, but still ultimately human-in-the-loop, roles responsible for AI systems designed to produce output whose errors are hard to spot and therefore to stop. In the latter role you are the human scapegoat: in the phrasing of Dan Davies an “accountability sink”, or in that of Madeleine Clare Elish a “moral crumple zone”, all rolled into one. This is not where you want to be as an early-career professional.

So how to avoid this outcome? Well, obviously, if you have options other than roles where a reverse-centaur situation is unavoidable, you should take them. Questions to ask at interview to identify whether a role is irretrievably reverse-centauresque would be of the following sort:

  1. How big a team would I be working in? (This might not identify a reverse-centaur role on its own: you might be one of a bank of reverse-centaurs all working in parallel, identified “as a team” while in reality having little interaction with each other.)
  2. What would a typical day be in the role? This should smoke it out unless the smokescreen they put up obscures it. If you don’t understand the first answer, follow up to get specifics.
  3. Who would I report to? Get to meet them if possible. Establish whether they are a technical expert in the field you will be working in. If they aren’t, that means you are!
  4. Speak to someone who has previously held the role if possible. Although bear in mind that, if it is a true reverse-centaur role and their progress to an actual centaur role is contingent on you taking this one, they may not be completely forthcoming about all of the details.

If you have been successful in a highly competitive recruitment process, you may have a little bit of leverage before you sign the contract, so if there are aspects which you think still need clarifying, then that is the time to do so. If you recognise some reverse-centauresque elements from your questioning above, but you think the company may be amenable, then negotiate. Once you are in, you will understand a lot more about the nature of the role of course, but without threatening to leave (which is as damaging to you as an early career professional as it is to them) you may have limited negotiation options at that stage.

In order to do this successfully, self knowledge will be key. It is that point from 2017:

  • clarity about why they are there and the desire to use their skills to solve problems

To that word “skills” I would now add “capabilities”, in the sense used in a wonderful essay on this subject by Carlo Iacono called Teach Judgement, Not Prompts.

You still need the skills. So, for example, if you are going into a role where AI systems are producing code, you need sufficiently good coding skills yourself to write a program to check the code written by the AI system. If the AI system is producing communications, your own communication skills need to go beyond producing work that communicates effectively to an audience, to the next level: understanding what it is about your own communication that achieves that, what is necessary, what is unnecessary, and what gets in the way of effective communication, ie all of the things the AI system is likely to get wrong. Then you have a template against which to assess the output of an AI system, and for designing better prompts.
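As a minimal sketch of what a program to check AI-written code might look like (the function under test and its expected properties are entirely hypothetical), the idea is to pin the output down with property checks you have worked out yourself, rather than reading the generated code line by line:

```python
# Hypothetical example: suppose an AI system was asked to write
# normalise(weights) -- scale a list of non-negative numbers to sum to 1.

def ai_normalise(weights):  # stand-in for the AI-generated code under review
    total = sum(weights)
    return [w / total for w in weights]

def check_normalise(fn, cases):
    """Property checks: same length, sums to 1, non-negative, order preserved."""
    for weights in cases:
        out = fn(weights)
        assert len(out) == len(weights), "length changed"
        assert abs(sum(out) - 1.0) < 1e-9, "does not sum to 1"
        assert all(o >= 0 for o in out), "negative weight produced"
        # relative ordering of adjacent elements must be preserved
        assert all(
            (a <= b) == (x <= y)
            for (a, x), (b, y) in zip(zip(weights, out), zip(weights[1:], out[1:]))
        ), "ordering not preserved"

check_normalise(ai_normalise, [[1.0, 2.0, 3.0], [5.0], [0.5, 0.5]])
print("all checks passed")
```

The point is that writing `check_normalise` requires you to know what correct output looks like independently of the AI system, which is exactly the skill being described here.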

However specific skills and tools come and go, so you need to develop something more durable alongside them. Carlo has set out four “capabilities” as follows:

  1. Epistemic rigour is being very disciplined about challenging what we actually know in any given situation. You need to be able to spot when AI output is over-confident given the evidence, or when a correlation is presented as causation. What my tutors used to refer to as “hand waving”.
  2. Synthesis is about integrating different perspectives into an overall understanding. Making connections between seemingly unrelated areas is something AI systems are generally less good at than analysis.
  3. Judgement is knowing what to do in a new situation, beyond obvious precedent. You get to develop judgement by making decisions under uncertainty, receiving feedback, and refining your internal models.
  4. Cognitive sovereignty is all about maintaining your independence of thought when considering AI-generated content. Knowing when to accept AI outputs and when not to.

All of these capabilities can be developed with reflective practice, getting feedback and refining your approach. As Carlo says:

These capabilities don’t just help someone work with AI. They make someone worth augmenting in the first place.

In other words, if you can demonstrate these capabilities, companies which are themselves dealing with huge uncertainty about how much value they are getting from their AI systems, and what those systems can safely be used for, will find you an attractive and reassuring hire. Then you will be the centaur, using the increasingly capable systems to improve your own and their productivity while remaining in overall control of the process, rather than a reverse-centaur, for whom none of that is true.

One sure sign that you are straying into reverse-centaur territory is when a disproportionate amount of your time is spent on pattern recognition (eg basing an email/piece of coding/valuation report on an earlier email/piece of coding/valuation report dealing with a similar problem). That approach was always predicated on being able to interact with a more experienced human who understood what was involved in the task at some peer review stage. But it falls apart when there is no human to discuss the earlier piece of work with, because the human no longer works there, or a human didn’t produce the earlier piece of work. The fake it until you make it approach is not going to work in environments like these where you are more likely to fake it until you break it. And pattern recognition is something an AI system will always be able to do much better and faster than you.

Instead, question everything using the capabilities you have developed. If you are going to be put into potentially compromising situations in terms of the responsibilities you are implicitly taking on, the decisions needing to be made and the limitations of the available knowledge and assumptions on which those decisions will need to be based, then this needs to be made explicit, to yourself and the people you are working with. Clarity will help the company which is trying to use these new tools in a responsible way as much as it helps you. Learning is going to be happening for them as much as it is for you here in this new landscape.

And if the company doesn’t want to have these discussions or allow you to hamper the “efficiency” of their processes by trying to regulate them effectively? Then you should leave as soon as you possibly can professionally and certainly before you become their moral crumple zone. No job is worth the loss of your professional reputation at the start of your career – these are the risks companies used to protect their senior people of the future from, and companies that are not doing this are clearly not thinking about the future at all. Which is likely to mean that they won’t have one.

To return to Cory Doctorow:

Science fiction’s superpower isn’t thinking up new technologies – it’s thinking up new social arrangements for technology. What the gadget does is nowhere near as important as who the gadget does it for and who it does it to.

You are going to have to be the generation who works these things out first for these new AI tools. And you will be reshaping the industrial landscape for future generations by doing so.

And the job of the university and further education sectors will increasingly be to equip you with both the skills and the capabilities to manage this process, whatever your course title.

In 2017 I posted an article about how the future for actuaries was starting to look, with particular reference to a Society of Actuaries paper by Dodzi Attimu and Bryon Robidoux, which has since been moved to here.

I summarised their paper as follows at the time:

Focusing on…a paper produced by Dodzi Attimu and Bryon Robidoux for the Society of Actuaries in July 2016 explored the theme of robo actuaries, by which they meant software that can perform the role of an actuary. They went on to elaborate as follows:

Though many actuaries would agree certain tasks can and should be automated, we are talking about more than that here. We mean a software system that can more or less autonomously perform the following activities: develop products, set assumptions, build models based on product and general risk specifications, develop and recommend investment and hedging strategies, generate memos to senior management, etc.

They then went on to define a robo actuarial analyst as:

A system that has limited cognitive abilities but can undertake specialized activities, e.g. perform the heavy lifting in model building (once the specification/configuration is created), perform portfolio optimization, generate reports including narratives (e.g. memos) based on data analysis, etc. When it comes to introducing AI to the actuarial profession, we believe the robo actuarial analyst would constitute the first wave and the robo actuary the second wave.

They estimate that the first wave is 5 to 10 years away and the second 15 to 20 years away. We have been warned.

So, 9 years on from their paper, how are things looking? Well, the robo actuarial analyst wave certainly seems to be pretty much here, particularly now that large language models like ChatGPT are being increasingly used to generate reports. It suddenly looks a lot less fanciful to assume that the full robo actuary is less than 11 years away.

But now the debate on AI appears to be shifting to an argument about whether we are heading for Vernor Vinge’s “Singularity”, where the increasingly capable systems

would not be humankind’s “tool” — any more than humans are the tools of rabbits or robins or chimpanzees

on the one hand, and, on the other, the idea that “it is going to take a long time for us to really use AI properly…, because of how hard it is to regear processes and organizations around new tech”.

In his article on Understanding AI as a social technology, Henry Farrell suggests that neither of these positions allows a proper understanding of the impact AI is likely to have, instead proposing the really interesting idea that we are already part way through a “slow singularity”, which began with the industrial revolution. As he puts it:

Under this understanding, great technological changes and great social changes are inseparable from each other. The reason why implementing normal technology is so slow is that it requires sometimes profound social and economic transformations, and involves enormous political struggle over which kinds of transformation ought happen, which ought not, and to whose benefit.

This chimes with what I was saying recently about AI possibly not being the best place to look for the next industrial revolution. Farrell plausibly describes the current period using the words of Herbert Simon. As Farrell says: “Human beings have quite limited internal ability to process information, and confront an unpredictable and complex world. Hence, they rely on a variety of external arrangements that do much of their information processing for them.” So Simon says of markets, for instance, that they:

appear to conserve information and calculation by assigning decisions to actors who can make them on the basis of information that is available to them locally – that is, without knowing much about the rest of the economy apart from the prices and properties of the goods they are purchasing and the costs of the goods they are producing.

And bureaucracies and business organisations, similarly:

like markets, are vast distributed computers whose decision processes are substantially decentralized. … [although none] of the theories of optimality in resource allocation that are provable for ideal competitive markets can be proved for hierarchy, … this does not mean that real organizations operate inefficiently as compared to real markets. … Uncertainty often persuades social systems to use hierarchy rather than markets in making decisions.

Large language models by this analysis are then just another form of complex information processing, “likely to reshape the ways in which human beings construct shared knowledge and act upon it, with their own particular advantages and disadvantages. However, they act on different kinds of knowledge than markets and hierarchies”. As an Economist article Farrell co-wrote with Cosma Shalizi says:

We now have a technology that does for written and pictured culture what large-scale markets do for the economy, what large-scale bureaucracy does for society, and perhaps even comparable with what print once did for language. What happens next?

Some suggestions follow and I strongly recommend you read the whole thing. However, if we return to what I and others were saying in 2016 and 2017, it may be that we were asking the wrong question. Perhaps the big changes of behaviour required of us to operate as economic beings have already happened (the start of the “slow singularity” of the industrial revolution), and the removal of alternatives, which required us to spend increasing proportions of our time within and interacting with bureaucracies and other large organisations, was the logical appendage to that process. These processes are merely becoming more advanced rather than changing fundamentally in form.

And the third part, ie language? What started with the emergence of Late Modern English in the 1800s looks like it is now being accelerated via a new way of complex information processing applied to written, pictured (and I would say also heard) culture.

So the future then becomes something not driven by technology, but by our decisions about which processes we want to allow or even encourage and which we don’t, whether those are market processes, organisational processes or large language processes. We don’t have to have robo actuaries or even robo actuarial analysts, but we do have to make some decisions.

And students entering this arena need to prepare themselves to be participants in those decisions rather than just victims of them. A subject I will be returning to.

Title page vignette of Hard Times by Charles Dickens. Thomas Gradgrind Apprehends His Children Louisa and Tom at the Circus, 1870

It was Fredric Jameson (according to Owen Hatherley in the New Statesman) who first said:

“It seems to be easier for us today to imagine the thoroughgoing deterioration of the earth and of nature than the breakdown of late capitalism”.

I was reminded of this by my reading this week.

It all started when I began watching Shifty, Adam Curtis’ latest set of films on iPlayer aiming to convey a sense of shifting power structures and where they might lead. Alongside the startling revelation that The Land of Make Believe by Bucks Fizz was written as an anti-Thatcher protest song, there was a short clip of Eric Hobsbawm talking about all of the words which needed to be invented in the late 18th century and early 19th to allow people to discuss the rise of capitalism and its implications. So I picked up a copy of his The Age of Revolution 1789-1848 to look into this a little further.

The first chapter of Hobsbawm’s introduction from 1962, the year of my birth, expanded on the list:

Words are witnesses which often speak louder than documents. Let us consider a few English words which were invented, or gained their modern meanings, substantially in the period of sixty years with which this volume deals. They are such words as ‘industry’, ‘industrialist’, ‘factory’, ‘middle class’, ‘working class’, ‘capitalism’ and ‘socialism’. They include ‘aristocracy’ as well as ‘railway’, ‘liberal’ and ‘conservative’ as political terms, ‘nationality’, ‘scientist’ and ‘engineer’, ‘proletariat’ and (economic) ‘crisis’. ‘Utilitarian’ and ‘statistics’, ‘sociology’ and several other names of modern sciences, ‘journalism’ and ‘ideology’, are all coinages or adaptations of this period. So is ‘strike’ and ‘pauperism’.

What is striking about these words is how they still frame most of our economic and political discussions. The term “middle class” originated in 1812. No one referred to an “industrial revolution” until English and French socialists did in the 1820s, despite what it described having been in progress since at least the 1780s.

Today the founder of the World Economic Forum has coined the phrase “Fourth Industrial Revolution” or 4IR or Industry 4.0 for those who prefer something snappier. Its blurb is positively messianic:

The Fourth Industrial Revolution represents a fundamental change in the way we live, work and relate to one another. It is a new chapter in human development, enabled by extraordinary technology advances commensurate with those of the first, second and third industrial revolutions. These advances are merging the physical, digital and biological worlds in ways that create both huge promise and potential peril. The speed, breadth and depth of this revolution is forcing us to rethink how countries develop, how organisations create value and even what it means to be human. The Fourth Industrial Revolution is about more than just technology-driven change; it is an opportunity to help everyone, including leaders, policy-makers and people from all income groups and nations, to harness converging technologies in order to create an inclusive, human-centred future. The real opportunity is to look beyond technology, and find ways to give the greatest number of people the ability to positively impact their families, organisations and communities.

Note that, despite the slight concession in the last couple of sentences that an industrial revolution is about more than technology-driven change, they are clear that the technology is the main thing. It is also confused: is the future they see one in which “technology advances merge the physical, digital and biological worlds” to such an extent that we have “to rethink” what it “means to be human”? Or are we creating an “inclusive, human-centred future”?

Hobsbawm describes why utilitarianism (“the greatest happiness of the greatest number”) never really took off amongst the newly created middle class, who rejected Hobbes in favour of Locke because “he at least put private property beyond the range of interference and attack as the most basic of ‘natural rights’”, whereas Hobbes would have seen it as just another form of utility. This then led to the natural order of property ownership being woven into the reassuring (for property owners) political economy of Adam Smith, and the natural social order arising from “sovereign individuals of a certain psychological constitution pursuing their self-interest in competition with one another”. This was of course the underpinning theory of capitalism.

Hobsbawm then describes the society of Britain in the 1840s in the following terms:

A pietistic protestantism, rigid, self-righteous, unintellectual, obsessed with puritan morality to the point where hypocrisy was its automatic companion, dominated this desolate epoch.

In 1851 access to the professions in Britain was extremely limited, requiring long years of education through which you had to support yourself, and opportunities to do so were rare. There were 16,000 lawyers (not counting judges) but only 1,700 law students. There were 17,000 physicians and surgeons and 3,500 medical students and assistants. The UK population in 1851 was around 27 million. Compare these numbers to the relatively tiny actuarial profession today, with around 19,000 members overall in the UK.

The only real opening to the professions for many was therefore teaching. In Britain “76,000 men and women in 1851 described themselves as schoolmasters/mistresses or general teachers, not to mention the 20,000 or so governesses, the well-known last resource of penniless educated girls unable or unwilling to earn their living in less respectable ways”.

Admittedly most professions were only just establishing themselves in the 1840s. My own, despite actuarial activity getting off the ground in earnest with Edmund Halley’s demonstration of how the terms of the English Government’s life annuities issue of 1692 were more generous than it realised, did not form the Institute of Actuaries (now part of the Institute and Faculty of Actuaries) until 1848. The Pharmaceutical Society of Great Britain (now the Royal Pharmaceutical Society) was formed in 1841. The Royal College of Veterinary Surgeons was established by royal charter in 1844. The Royal Institute of British Architects (RIBA) was founded in 1834. The Society of Telegraph Engineers, later the Institution of Electrical Engineers (now part of the Institution of Engineering and Technology), was formed in 1871. The Edinburgh Society of Accountants and the Glasgow Institute of Accountants and Actuaries were granted royal charters in the mid-1850s, before England’s various accounting institutes merged into the Institute of Chartered Accountants in England and Wales in 1880.

However “for every man who moved up into the business classes, a greater number necessarily moved down. In the second place economic independence required technical qualifications, attitudes of mind, or financial resources (however modest) which were simply not in the possession of most men and women.” As Hobsbawm goes on to say, it was a system which:

…trod the unvirtuous, the weak, the sinful (i.e. those who neither made money nor controlled their emotional or financial expenditures) into the mud where they so plainly belonged, deserving at best only of their betters’ charity. There was some capitalist economic sense in this. Small entrepreneurs had to plough back much of their profits into the business if they were to become big entrepreneurs. The masses of new proletarians had to be broken into the industrial rhythm of labour by the most draconic labour discipline, or left to rot if they would not accept it. And yet even today the heart contracts at the sight of the landscape constructed by that generation.

This was the landscape upon which the professions alongside much else of our modern world were constructed. The industrial revolution is often presented in a way that suggests that technical innovations were its main driver, but Hobsbawm shows us that this was not so. As he says:

Fortunately few intellectual refinements were necessary to make the Industrial Revolution. Its technical inventions were exceedingly modest, and in no way beyond the scope of intelligent artisans experimenting in their workshops, or of the constructive capacities of carpenters, millwrights and locksmiths: the flying shuttle, the spinning jenny, the mule. Even its scientifically most sophisticated machine, James Watt’s rotary steam-engine (1784), required no more physics than had been available for the best part of a century—the proper theory of steam engines was only developed ex post facto by the Frenchman Carnot in the 1820s—and could build on several generations of practical employment for steam engines, mostly in mines.

What it did require though was the obliteration of alternatives for the vast majority of people to “the industrial rhythm of labour” and a radical reinvention of the language.

These are not easy things to accomplish, which is why we cannot easily imagine the breakdown of late capitalism. However, if we focus on AI and the like as the drivers of the next industrial revolution, we will probably be missing where the action really is.