Front page of the April 2026 issue of Brum Group News

Three and a half years ago, I wrote a piece likening the rapid climate change on Earth to the fairly well-established science fiction concept of terraforming, but in reverse. So what has happened since? Well, last summer, according to researchers at Imperial College and the London School of Hygiene and Tropical Medicine, two thirds of the 24,400 heat deaths from June to August across Europe were due to human-made global heating. And a study published last month suggested that the pace of global warming has nearly doubled since 2015.

At this point I would like to suggest rehabilitating an old word to describe this process, in the opposite direction to terraforming (which is action designed to make a planet more habitable). Barrenize means to make barren or sterile and was used between the mid-1600s and the early 1700s according to the Oxford English Dictionary, originally in the context of animal husbandry. I think it’s time to bring this word back.

In a week when a US President has variously threatened “blowing everything up and taking over the oil”, warned that Iranians would be “living in Hell”, and last night said that “a whole civilisation will die tonight” unless they opened the Strait of Hormuz, it certainly sounds like a commitment to barrenization to me, only at a faster pace than the global warming he is already doing everything possible to accelerate further.

On Friday this week, the Birmingham Science Fiction Group will have Oliver Bettis as its guest speaker. Oliver has been a leading actuary in the field of sustainability for many years. He is one of the authors of a series of publications by the actuarial profession in collaboration with the University of Exeter in recent years.

Climate Scorpion shows how we need to develop a best guess about the worst-case scenarios and make policy on that basis, given our lack of knowledge about extreme climate risk and tipping points.

Planetary Solvency – finding our balance with nature sets out an approach to civilisational risk management which attempts to address the fact that the severity and frequency of extreme events are unprecedented and beyond current model projections.

Parasol Lost, which we will be discussing in particular this Friday, focuses on the cooling effect of aerosols: a side-effect of pollution from fossil fuel burning. Without aerosol cooling, the global temperature would be around 0.5°C higher than the 1.4°C increase above pre-industrial temperature that we have today. It is critically important to recognise that, as air pollution is cleaned up, this may ironically lead to a short-term increase in warming through the loss of aerosol cooling. The question must be asked: can we afford to lose this cooling and, if not, should it be replaced by working with nature, by using technology, or by both?

This will allow us to tap into the rich history of science fiction literature on terraforming (and on dealing with the threat of barrenization) and to ask whether that history can help us look at this question in a new way. It should be a lively discussion.

This event will be held in-person at the Friends of the Earth Warehouse, 54-57 Allison Street,
Birmingham B5 5TH and simultaneously on Zoom, with online access opening from around 7.45 for an 8 pm start.

Ticket prices for non-members are £8 for in-person attendance and £6 for Zoom attendance. For members it’s £4 for in-person attendance and free Zoom attendance.

Tickets can be purchased on the door or via the Eventbrite link below:

https://www.eventbrite.co.uk/e/1985958692911

And if this whets your appetite for more science fiction and you think you might like to join the group, just email us at contact@brumsfgroup.org.uk. Hope to see you there!

This week I caught Covid for the third time, so naturally my thoughts have turned to how it all began.

There are a few Covid posts starting to turn up online as the 6th anniversary of it all rumbles around. The British Foreign Policy Group have helpfully published a timeline from which I have taken everything that happened before Boris Johnson locked us down for the first time:

So a lot had happened by 23 March. You will all have your favourite bits from the saga above; I think mine is 22 January, when Public Health England announced they had moved the risk level to the general public from very low to low.

I remember teaching a macroeconomics class on 12 March when we knew it was going to be the last session on campus. The penny hadn’t dropped. Students were asking about how they would hand work in. We agreed it would have to be online. Some lecturers were talking about microwaving paper submissions to sterilise them. We had a little giggle about that. I had spoken to Stuart McDonald (now MBE) earlier that day, when we had reluctantly agreed to postpone his visit to campus to speak to the Leicester Actuarial Science Society (LASS). Stuart would of course become one of the actuarial stars of the pandemic for his work with the COVID-19 Actuaries Response Group. I had a similar conversation by email with Lord Willetts, who was Chancellor of the University of Leicester at the time and who was going to talk to LASS about his books The Pinch and A University Education. We talked of postponing rather than cancelling. The realisation that everything was changing for the foreseeable future was still not there.

It took a long time for the penny to drop for the Government as well. As this analysis of the establishment of the “Covid Disinformation Ecosystem” says:

January featured fear and disbelief, February proved covid couldn’t simply be ignored, March was when governments realised the hospitalisation rate could overwhelm healthcare.

And a Government that was slow to respond initially was very vulnerable to the groups which sprang up during 2020 and 2021. As the Counter Disinformation Project says:

And the main initial target for the UK section of the ecosystem was Boris Johnson, who was meeting privately with newspaper owners and editors. Enough doubt was put into Johnson’s mind that he dithered and delayed when cases began to rise, leading to a private meeting with Heneghan, Gupta and Sweden’s Anders Tegnell in September, before he chose to ignore his scientific advisors’ calls for a circuit-breaker lockdown. In the run-up to the deadliest weeks of the pandemic the papers were calling for Johnson to “Save Christmas”.

However, I don’t want to focus on our collective inability to make decisions during crises. This time I want to focus on the impact of the pandemic on our mental health.

By coincidence, today the 386-page Module 3 report from the Covid Inquiry on The impact of the Covid-19 pandemic on the healthcare systems of the United Kingdom was published. The longer this Inquiry goes on, the more it appears to resemble a truth and reconciliation commission rather than something likely to improve the handling of future pandemics. It gets past transgressions on the record, but in a way designed to move us on rather than improve our preparedness and organisation. I certainly saw nothing in the summaries that I didn’t already know. Module 3 has made 10 recommendations. The only one which mentions mental health at all is the last one, on Psychological and emotional support for healthcare workers.

Looking through the module titles, it would seem that this is unlikely to be rectified until Module 10 – Impact on society – reports, currently scheduled for the first half of 2027. I find this relegation of our collective trauma to the lowest priority astonishing.

Two years ago, the Centre for Mental Health produced a review of the evidence so far on COVID-19 and the Nation’s Mental Health. They noted that:

Data on the prevalence of mental health difficulties is harder to assess. For children and young people, surveys in England have provided a time series since 2020 that suggests very strongly that mental ill health is indeed more prevalent now than it was before the start of the pandemic. A steady rise in the decade prior to 2020 seems to have been followed by a sharp rise, and numbers have stayed high ever since. We do not have the equivalent data for adults, meaning that a clear picture has yet to emerge, but there is persuasive evidence that levels of mental ill health have been rising over the last decade, and the pandemic has contributed to many of the risk factors people face.

Before concluding as follows:

Crucially, the pandemic exposed fault-lines in the nation’s mental health, and the stark inequalities faced every day by people living with mental illness. The public’s mental health was deteriorating in the years running up to the pandemic, and mental health services were struggling to deal with the consequences of many years of underfunding and austerity measures across public services. People with a mental illness were already dying 15-20 years sooner than the general population, and facing widespread hardship. The pandemic exacerbated these inequalities, creating new risks to people’s mental health and reducing access to support.

We now have the opportunity to learn from this experience and build a mentally healthier future. We can act now to boost the public’s mental health in the aftermath of the pandemic, protecting those who have experienced the worst effects and offering better support to groups that don’t yet have access to the right support. And we can incorporate mental health into preparations for future emergencies, so that responses are psychologically informed from day one.

They also made 10 recommendations, mostly for the NHS and the Department of Health and Social Care, but also covering education, communications and considerations for the upcoming (at the time) review of the Mental Health Act. Less than half of these recommendations have been addressed at all.

Now we are two years on from that report, what has changed?

Well, Roy Lilley has drawn a rather dispiriting picture for us. He draws attention to Wes Streeting’s announcement in the Health Service Journal on 12 March that the proportion of the NHS budget spent on mental healthcare would be cut for the third year in a row. Lilley lists how the demands on mental health services have mushroomed since before the pandemic:

  • Around two million people were in touch with mental health services in 2019, today it’s around three million;
  • Child and Adolescent Services: in 2019 around 500,000 referrals. Now around a million;
  • And only around 45% of referrals are accepted, meaning the true demand is even higher;
  • Talking therapies are up by 60%; and
  • Crisis team referrals and sectioning under the Mental Health Act are also up 60%.

And he summarises the problem like this:

The total economic cost of mental ill-health in England in 2022 was estimated at ~£300bn a year when lost productivity, welfare and wider costs are factored in.

The total MH budget is about £16bn. Meaning, the NHS is spending roughly £1 for every £18 of a national problem.
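That ratio follows directly from the two figures quoted above; a quick sanity check, using only those numbers:

```python
# Sanity check on the figures quoted above: size of the problem vs the budget
total_cost_bn = 300.0  # estimated annual economic cost of mental ill-health in England (2022)
mh_budget_bn = 16.0    # approximate total NHS mental health budget
ratio = total_cost_bn / mh_budget_bn
print(f"roughly £1 of budget for every £{ratio:.1f} of estimated cost")
# prints: roughly £1 of budget for every £18.8 of estimated cost
```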

It feels like we are still waiting for the penny to drop.


Source: https://markets.ft.com/data/equities/tearsheet/summary?s=IBM:NYQ

A week or so ago I referred to a “Thought Exercise” set in June 2028 “detailing the progression and fallout of the Global Intelligence Crisis” (ie science fiction), published on 23 February, which may have tanked the share price of IBM later that day. As I said then, the fall definitely happened, with IBM’s share price falling 13%, its biggest fall since 2000. I also said that the likelihood of the scenario portrayed was difficult to assess, but that the speed of the total economic collapse it described felt unlikely, if not impossible. I would like to expand on that.

The main reason the scenario was hard to assess was that it was not based on data or evidence at all. That is unavoidable for speculative fiction about things that are not currently happening, but when describing an economy only two years away, where most of the processes described should already be discernible to some extent, it is totally avoidable.

Ed Zitron has done an excellent line-by-line takedown of the Citrini piece here. Here is one page of that to give you a flavour:

However, this lack of a link with anything tangible did not stop the financial markets panicking, which should give us pause when relying on the financial markets’ valuation of projects, industries, government policies, etc.

Ed Zitron describes this kind of piece as analyslop: “when somebody writes a long, specious piece of writing with few facts or actual statements with the intention of it being read as thorough analysis”. It can then get picked up by other commentators who take it as their starting point for further analysis, often making it hard to see that the starting point had few if any data points. Here is an example, from Carlo Iacono, looking at what would follow if just some of the Citrini pronouncements were true, with appendices detailing possible branching paths of outcomes, all generated by a large language model (LLM). And then people start studying the meta-analysis, and it starts getting taken even more seriously, and put into models, and pretty soon most of the analysis is being done on imagined risks rather than on ones which are already staring us in the face.

We have always had a problem keeping our society grounded in reality: think of the 2003 Iraq War, where we went to war on a false assessment of Iraq’s possession of weapons of mass destruction; the 2008 financial crisis, where banks misunderstood the risks they were exposed to; and the last two and a half years, where we, for the most part, seem to have convinced ourselves we have not been facilitating a genocide in Gaza when we clearly have been. But this is only going to get worse with the AI systems which are being developed.

As Nate Hagens points out:

The rapid rise of artificial intelligence has served to dramatically increase the speed of information production while also eroding accuracy, making it difficult to differentiate between content that simply sounds confident and content that’s actually grounded in reality.

So where is AI currently? Well, PwC’s global CEO survey from January this year had the following statement as the first bullet amongst its key findings:

Most CEOs say their companies aren’t yet seeing a financial return from investments in AI. Although close to a third (30%) report increased revenue from AI in the last 12 months and a quarter (26%) are seeing lower costs, more than half (56%) say they’ve realised neither revenue nor cost benefits.
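Those three percentages hang together arithmetically: by inclusion-exclusion, 44% saw at least one benefit and around 12% saw both. A quick check, using only the figures quoted above:

```python
# Inclusion-exclusion check on the PwC survey figures quoted above
revenue = 0.30   # reported increased revenue from AI
costs = 0.26     # reported lower costs
neither = 0.56   # reported neither benefit

at_least_one = 1 - neither             # complement of "neither"
both = revenue + costs - at_least_one  # inclusion-exclusion
print(f"at least one benefit: {at_least_one:.0%}, both: {both:.0%}")
# prints: at least one benefit: 44%, both: 12%
```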

That’s the reality. But the hype is much, much more entertaining. My favourite spoof video of the AI future is currently this one, about a time when all most of us are good for is riding bicycles to supply the ever-increasing energy needs of AI systems (click view in browser if you can’t see it):

And what about the financial journalists? The pieces describing our reaction to whatever is about to unfold economically have already been written. There are investor websites asking if the 2026 crash has already begun, while another recent article argues that “America has quietly become one of the world’s most shock‑resistant economies” (which seems unlikely to age well). What most financial journalists are more comfortable with are articles about how the warnings were ignored after the fact.

And the professions? Well, the current overview of my own profession is probably reasonably represented by this piece from the Society of Actuaries in the United States. Unfortunately for them, Daniel Susskind, who is mentioned in the article, is currently suggesting, as part of his Future of Work lecture series for Gresham College, that the key to the sudden development in AI, after the “AI Winter” when progress seemed slow, was that we abandoned trying to make machines which thought and acted like humans in favour of focusing on completing tasks in any way possible. Increasingly we are now automating tasks where we can’t (or won’t) articulate how we do them. From Deep Blue’s victory over Kasparov in 1997 to Watson winning Jeopardy! in 2011 to machines beating humans at image recognition on ImageNet (although that is disputed), Susskind refers to this progress as the displacement of purists in favour of what he calls “The Pragmatic Revolution”. Pragmatism in this sense appears to be that we humans should just accept the consequences the people running these systems want. So, as his latest lecture “Work, out of reach” claims, moving into cities to find work is a strategy which is no longer going to work for low-skilled people:

He then shows this graphic demonstrating the lack of recovery of big coal mining areas in the UK:

Source: Left – Sheffield Hallam University map of coal mining areas; Right – % employment from Overman and Xu (2022)

And finally he cites the notorious Policy Exchange piece from 2007, Cities Unlimited, whose thesis was that there is apparently no realistic prospect of regenerating towns and cities outside London and the South East.

Susskind talks about three forms of technological unemployment:

  1. skills-mismatch, where your skills are mismatched to the work available. Education and training have always been the answer to this in the past.
  2. place-mismatch, where the jobs are not where you have built your life. Some believe the answer should always be the one proposed by Norman Tebbit, who memorably told everyone in 1981, “I grew up in the 30s with an unemployed father. He did not riot. He got on his bike and looked for work.”
  3. identity-mismatch, where, according to Susskind, people are prepared to stay out of work to protect their identity, citing US men who won’t take “pink collar” work, China’s “rotten tail” kids, Japan’s seishain-or-nothing attitudes and India’s Sarkari Naukri queues. Or perhaps they are just looking for work which is consistent with the idea of human dignity.

Susskind claims to have no answer to any of these as far as AI is concerned. They are, in his view, just the inevitable outcomes of his “Pragmatic Revolution”. It is the unthinking pursuit of more and more growth, funded by capital less and less tethered to any territory, principle or purpose, where any grit in the machinery, be it unions or protestors or, increasingly, the wrong sort of government, must be trampled underfoot, along with anything else which impedes the helter-skelter rush to more and more at greater and greater speed. It’s like our whole economy is run by this guy (press the view in browser link if you can’t see him) shouting “Ready, Aim, Fire!”:

But unskilled people will not be the only collateral damage of these unguided weapons. Take markets, for instance. These are where people are exposed to risks and rewards based on underlying conditions they only partially understand. Greed and fear may be their main motivations, but gossip and groupthink are their main communication channels. They don’t need facts, particularly when so many of the facts are proprietary information not in the public domain. A plausible narrative will do. And plausible narratives are what LLMs will do for you in abundance.

And the more we reward people who can move fast, eg to spot an arbitrage opportunity, even at the risk of breaking things, rather than people who can make decisions which still look good decades from now, the more we are setting up the conditions for AI systems to be the go-to tool.

And put that together with an AI industry which desperately needs funding capital to keep arriving, ie one which is unbelievably highly motivated to push plausible narratives even when they know they are not grounded in reality, and you have a recipe for market-generated chaos.

And then we have Trump’s new war. Beware the people who are war-gaming the Middle East at the moment on a range of LLMs (just stop and think for a moment about the bloodless, inhuman impulse behind carrying out such an exercise rather than, I don’t know, talking to some actual people who live, or have lived recently, in and around the region). One of the worst offenders is Heavy Lifting, banging on about the three scenarios for Operation Epic Fury. This is as bad as it sounds:

I tasked her [he is talking about Gemini Pro here] with doing a literature review on regime change (a term often used by the President but not a well-defined one), creating three scenarios of possible outcomes for which each was given a percentage probability, and a list of 20 items to examine for each scenario that covered political, economic, and cultural issues with a special focus on the political consequences in the U.S. and what this means for China, our biggest geopolitical rival.

But Gemini Pro wasn’t the only one involved in this. Two other humans were: Tim Parker and Ron Portante, trainers at the gym I go to. (Just as a personal aside, Tim was my coach in hitting six plates [345 pounds] on the sled last Friday and I have a video to prove it!) I was talking about the piece and Ron raised the issue of linguistic and cultural diversity in Iran. Tim did some real time research for me on his phone while I was burning real calories under his strict tutelage. This made me think I needed a background section on Iran. When I got home, Gemini and I added it.

What, you mean you belatedly realised you might need to have done some actual research into Iran, rather than just generic research on regime change? I stopped reading at that point.

Meanwhile King’s College London have been carrying out war games more systematically using AI. Professor Kenneth Payne from the Department of Defence Studies led the study, which looked at how LLMs would perform in simulated nuclear crises. As Professor Payne said:

Nuclear escalation was near-universal: 95% of games saw tactical nuclear use and 76% reached strategic nuclear threats. Claude and Gemini especially treated nuclear weapons as legitimate strategic options, not moral thresholds, typically discussing nuclear use in purely instrumental terms. GPT-5.2 was a partial exception, limiting strikes to military targets, avoiding population centers, or framing escalation as “controlled” and “one-time.” This suggests some internalised norm against unrestricted nuclear war, even if not the visceral taboo that has held among human decision-makers since 1945.

This is not a Pragmatic Revolution. These AI systems cannot replace humans thinking about the future we want for humans in any way which is worth having. What they can do, if we let them, is accelerate our worst impulses and move us further away from considered reflective decision making.

But we will continue to use AI systems in the military because, as it turns out, they are very useful for low-stakes admin. So although Lavender, the system used by the Israeli military to select targets in Gaza, made errors in 10% of cases and was therefore totally inappropriate to the task, there are lots of organisational logistical tasks where it is much quicker than the alternative and 10% error rates do not matter so much.

There is clearly an issue with what we decide to use these systems for. We need to be able to regulate the decisions which are particularly consequential. However, the only approach we seem to be considering at the moment is the human-in-the-loop model, like the humans spending around 20 seconds considering each target recommended by Lavender before authorising a bombing. I have written about these before in the context of early-career professionals in the finance industry, where the prospect seemed miserable enough:

They will be paid a lot more. However, as Cory Doctorow describes here, the misery of being the human in the loop for an AI system designed to produce output where errors are hard to spot, and therefore to stop (Doctorow calls these humans “reverse centaurs”, ie the human has become the horse part), includes being the ready-made scapegoat (or “moral crumple zone” or “accountability sink”) for when the systems are inevitably used to overreach what they are programmed for and produce something terrible.

However it seems obvious to me that, in the context of dropping actual bombs on actual people, there is an even more serious problem with this model. As Simon Pearson (anti-capitalist musings) puts it:

The “human in the loop” requirement exists in military doctrine because international humanitarian law demands an accountable human decision-maker for lethal force. The laws of armed conflict require proportionality assessments, precautionary measures, distinction between combatants and civilians. All of these obligations attach to a human commander. The system cannot fulfil them. So a human must be present, and their presence must constitute a decision, regardless of whether any genuine decision was made.

What the institution needs from the analyst is not judgment. It is a signature. The signature converts a machine output into a human act. And a human act is what the law recognises, whether or not any judgment occurred. When the strike kills children, the chain of accountability runs to the analyst who approved the target: not to the system that identified it, not to the company that built the system, not to the doctrine that compressed the review window to ten seconds.

But whether we want to make money from exploiting a short term anomaly in a market, make our fellow humans redundant, prosecute a war on another group of fellow humans or “win” a war of mutual nuclear destruction, we need to retain the capacity for real human reflection within the decision-making processes we use. Not just a human-in-the-loop nor just the elites of tech companies deciding how the systems will be configured behind commercially confidential walls. These processes need democratic accountability every bit as much as our parliaments, councils, institutions and voting systems do.

We need something infuriatingly slow, inclusive and deliberative, giving recommendations which are then stress-tested for how they would perform on contact with reality, involving yet more people being serious and deliberative and taking their responsibilities more seriously than being a human-in-the-loop would ever allow. Our decision-making systems need more grit and less oil. AI is all oil.

The Actuary magazine recently had a debate about whether the underlying data or the story you weave around it is more important. I’m not sure there is always a clear distinction between the two, as Dan Davies rather neatly illustrates here, but my view is that, if a binary choice has to be made, it is always going to be the story. And there was a great example of this which popped up recently in the FT.

The FT article was ‘Is university still worth it?’ is the wrong question, by John Burn-Murdoch, with his usual great graphs. However, as is sometimes the case, I feel that a very different and more convincing story could be wrapped around the same datasets he is showing us.

The article’s thesis is as follows:

The graduate earnings premium, ie how much more on average graduates earn than non-graduates, has fallen in the UK as the proportion going to university has risen, while it has risen in other countries:

In the UK, we have had much weaker productivity growth than the other comparator countries, and also “the steady ramping up of the minimum wage has squeezed the earnings premium from the lower end too”:

We have also had a much smaller increase in the percentage of managerial and professional jobs than a different group of comparator countries (they haven’t mentioned Germany before), meaning graduates are forced to take lower salaried jobs elsewhere:

So the answer according to the FT? We should focus on economic growth rather than “tweaking” higher education intake and funding. Then graduate earnings would be higher, student loans could be more generous(!) and students would have more chance of getting a good job.
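As an aside, the graduate earnings premium the article turns on is conventionally expressed as the percentage gap between median graduate and non-graduate pay. A minimal sketch, using made-up illustrative salaries rather than any figures from the article:

```python
# Hypothetical figures for illustration only, not taken from the FT article
grad_median = 38_000      # illustrative median graduate salary
non_grad_median = 29_000  # illustrative median non-graduate salary

premium = (grad_median - non_grad_median) / non_grad_median
print(f"graduate earnings premium: {premium:.0%}")
# prints: graduate earnings premium: 31%
```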

Well perhaps. But here’s a different framing of the same data that I find more persuasive.

Let’s start by addressing that point about the minimum wage. According to the House of Commons Library report on this, the UK’s minimum wage is broadly comparable to that of France and the Netherlands, although higher than Canada’s and much higher than that of the United States. The employers who are the FT’s constituency would obviously like us lower down this particular chart:

The main economic framing here is the progress myth of the UK’s business community: economic growth. All problems can be solved if we can just get more economic growth. Apparently we need more inequality in pay between graduates and non-graduates, which we can get by generating more economic growth. This is honest of them at least, although I don’t see much evidence that the economic growth they crave will go into skilled job creation rather than stock buybacks (according to Motley Fool, “Companies spent $249 billion on stock buybacks in Q3 2025, and $777 billion over the first three quarters of 2025.”).

There are a lot of problems with framing every economic question with respect to economic growth, memorably illustrated by Zack Polanski of the Green Party in this less than 3 minute video recently (I strongly recommend you watch it before you read on – click on the read in browser link if you can’t see it):

Economic growth is increasingly without purpose, wasteful of energy and poorly distributed. It is chasing outputs, literally any outputs, whatever the cost to the environment, our health system, our education system, our social support systems and our communities. Looking at the framing above, you can see that economic growth as currently pursued will always treat anything which stops the concentration of wealth amongst the already wealthy, like a higher national minimum wage or a totally made-up concept like a lower graduate earnings premium (in itself a framing which tries to make reducing inequality seem undesirable), as a problem. Lack of productivity growth, itself a proxy for this kind of economic growth (because if you ask why we need more productivity the answer is always to get more economic growth), is usually directed as a criticism at “lazy” UK workers, rather than at under-investing and over-extracting UK business owners.

But what if, instead of economic growth, your progress myth was reducing inequality? Or growing equality within the economy?

Source: World Inequality Database wid.world

If you focused on inequality rather than economic growth, then you would find that it correlates with everything we say we don’t want. Unlike economic growth, having equality as an aim has the advantage of an evidence base for the claim that it improves society:

Source: https://media.equality-trust.out.re/uploads/2024/07/The-Spirit-Level-at-15-2024-FINAL.pdf

If you focused on inequality, then you would be pleased that we have had an increase in our minimum wage. You would think that the same FT article’s admission that UK graduates’ skills levels are higher than those in the United States was more important than something called a graduate earnings premium.

Burn-Murdoch is right to say asking whether university is worth it is the wrong question.

However economic growth is the wrong answer.

And I thought I would probably be stopping there for this week. But then something odd happened. A “Thought Exercise” set in June 2028 “detailing the progression and fallout of the Global Intelligence Crisis” (ie science fiction), published on 23 February, may have tanked the share price of IBM later that day. The fall definitely happened, with IBM’s share price falling 13%, its biggest fall since 2000, alongside smaller falls in other tech stocks.

Source: https://markets.ft.com/data/equities/tearsheet/summary?s=IBM:NYQ

According to the FT:

Investors have recently seized on social media rumours and incremental developments by small AI companies to justify further selling, with a widely circulated blog post by Citrini Research over the weekend describing how AI could hypothetically push the US unemployment rate above 10 per cent by 2028, proving the latest catalyst.

The likelihood of the scenario portrayed is difficult to assess, but the speed of the total economic collapse it then describes feels unlikely, if not impossible. However, the fact that the markets are this jittery tells us something, I think. As Carlo Iacono puts it:

We are living through a period in which the gap between “plausible narrative” and “tradeable signal” has collapsed to nearly nothing. When a scenario feels real enough to model, and the underlying anxiety is already there waiting to be organised, fiction and forecast become functionally indistinguishable.

The data underlying the markets hasn’t changed, but the story has. I rest my case.

The disappointing brandy scene from Goldfinger (1964) https://youtu.be/I6COBucJQfE?si=saiV5f80ISSB3FGY

Politics is a bit depressing this week, so I thought instead I would focus on the asymmetry of our attitudes towards different high octane liquids.

I remember when I first got interested in wine. It was the early noughties and I was out at a restaurant in Cardiff called Le Cassoulet (no longer trading under that name I understand) with my then boss who liked to hit his expense account pretty hard from time to time. The sommelier seemed to know him quite well and scurried off to get him some particularly old claret to accompany the meal. I think it was from 1972 or thereabouts. I remember noting that it had a different colour (brown) from the red wine I was used to drinking and, when sipped, there were a lot of different flavours and smells competing for my attention. Something which I later heard described as “complexity”. From then on I realised that wine drinking could involve something a bit more than just something nice in a glass to accompany a meal.

The journey of alcoholic drinks from drinks to luxury consumer items and assets is nicely illustrated by the Bond franchise. There are a number of movies we could choose but let’s go for Goldfinger, shall we?

In the disappointing brandy scene from Goldfinger, we have this exchange between M, Bond and the Governor of the Bank of England, Colonel Smithers:

Smithers: “Have a little more of this rather disappointing brandy.”

M: “What’s the matter with it?”

Bond: “I’d say it was a 30-year-old Fine indifferently blended, sir…with an overdose of Bons Bois.”

M: “Colonel Smithers is giving the lecture, 007.”

Now first of all, that is clearly not what the Governor of the Bank of England looks like. As readers of this blog already know, he looks like this:

That scene is also notable for including a brief discussion of how the relative value of gold held at the US and British central banks at the time was used “to establish respectively the true value of the dollar and the pound”. In 1964 this would have been via the London Gold Pool, running between 1961 and 1968, by which a group of eight central banks including the United States Fed and the Bank of England agreed to cooperate in maintaining the Bretton Woods System of fixed-rate convertible currencies and defending the gold price. Ian Fleming’s book, written in 1959, predated this arrangement, but the anxieties about the gold market which led to its creation would have been very much around. So we still have the Governor (meeting Bond alone rather than with M) saying (during a lecture which went on for 10 pages):

We can only tell what the true strength of the pound is, and other countries can only tell it, by knowing the amount of valuta we have behind our currency.

Valuta is a rare word, from American English, for the value of one currency in terms of its exchange rate with another, and perhaps an odd one for the Governor of the Bank of England to use. But it is clear that Bond is sent after Goldfinger primarily for economic reasons (finding a way to smuggle large amounts of gold across borders threatens the Bank of England’s cosy little gold club) rather than because (spoiler alert) Goldfinger thinks nothing of murdering people (quite a lot of people in the case of Operation Grand Slam) who get in his way, cheating at golf, employing butlers with lethal bowlers, slicing through things with gold lasers and planting nuclear devices in Fort Knox. Released shortly after Ian Fleming’s death, it was the last Bond movie he saw in production.

It is the same film in which Bond obsesses about getting his favourite champagne (Dom Perignon 1953 – Bond was also someone not afraid to hit his expense account pretty hard from time to time) chilled to 38°F (3.3°C) before he gets bashed on the back of the head and the girl he is with (Goldfinger’s assistant, Jill Masterson, played by Shirley Eaton) gets sprayed from head to toe with gold paint. Perhaps more than any other brand, Bond linked luxury and high octane liquids of various kinds.

Skip forward a few decades and some of it has clearly stopped being something to drink at all, becoming instead a very fragile status asset for the very rich to demonstrate their status to each other. Here are the top prices achieved by wine at auction from one website, 8 of the 10 pre-dating both me and Goldfinger:

Source: vinovest https://www.vinovest.co/blog/25-most-expensive-wines-in-the-world-2026

Contrast this with the way we have treated fossil fuels. As Luke Kemp points out in Goliath’s Curse:

We tend to forget that fossil fuels come primarily from long-dead plants and animals. These organisms died between 360 and 286 million years ago during the Carboniferous period, after capturing sunlight through photosynthesis or other means. It is that fossilised energy that we are consuming. According to one estimate, it would take 400 years of global photosynthesis to power the modern world for one year. It takes ninety-eight tons of organic matter buried during the Carboniferous to become just five litres of petrol. We are now a high-energy Goliath, powered by dead matter.

According to a petrol price checker from earlier this week, the garage closest to me currently sells unleaded petrol for £1.29 a litre. So 98 tons of organic matter, curated for 300 million years, retails for £6.45. That’s less than half the price of a sausage bap and a coffee from Costa via UberEats:

But apparently it’s still not cheap enough.
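As a sanity check on the arithmetic above, here is a minimal sketch using Kemp’s figure of 98 tons of buried organic matter per 5 litres of petrol and the £1.29 pump price from the price checker (both quoted above; nothing else is assumed):

```python
# Rough check of the fossil-energy bargain described above.
# 98 tons of Carboniferous organic matter -> 5 litres of petrol (Kemp),
# pump price £1.29 per litre (local petrol price checker).

tons_per_5_litres = 98
litres = 5
price_per_litre = 1.29  # £

cost = litres * price_per_litre
print(f"5 litres of petrol: £{cost:.2f}")
print(f"retail value per ton of ancient biomass: £{cost / tons_per_5_litres:.4f}")
```

Which is to say: around six and a half pence of retail value per ton of 300-million-year-old biomass.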

Most of the content from this article recommending eternal vigilance despite the cheapest prices for 5 years, and the claims that “petrol is still 6p too high at the pumps”, comes from Howard Cox, founder of FairFuelUK, whose website includes this picture with a not-too-presumptuous-claim-at-all below it:

Even if you weren’t concerned with climate change or the health effects of petrol fumes in the air, this seems like a strange hill for anyone to be dying on. And dying we are. According to the 2025 Global Report of the Lancet Countdown average global heat-related mortality has now reached 546,000 pa, up 63% in just over 20 years:

And that’s just heat. A recent report from the Royal College of Physicians: A Breath of Fresh Air estimated 30,000 deaths from air pollution each year, of which car emissions form an important component.

By the time even the Bond franchise had started worrying about environmental concerns in 2008 with Quantum of Solace, a Somerset Maughamish short story converted into an attempt by a sinister organisation to become the water monopoly in Bolivia through underhand means, the iconic shot of the woman covered in gold had become a female consular employee (Strawberry Fields, played by Gemma Arterton) drowned in oil:

Source: http://007magazine.co.uk/factfiles/factfiles_trivia5.htm

We currently pay between £3.12 and £7.09 per litre in duty on wine, depending on strength, and £0.53 per litre in duty on petrol.
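A quick sketch, using only the duty figures quoted above, makes the asymmetry concrete:

```python
# Duty per litre on two high octane liquids, figures as quoted above.
wine_duty_low, wine_duty_high = 3.12, 7.09  # £/litre, depending on strength
petrol_duty = 0.53                          # £/litre

low_ratio = wine_duty_low / petrol_duty
high_ratio = wine_duty_high / petrol_duty
print(f"wine carries roughly {low_ratio:.0f}x to {high_ratio:.0f}x "
      f"the duty of petrol, litre for litre")
```

So, litre for litre, we tax wine at somewhere between six and thirteen times the rate we tax petrol.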

Our attitude to different types of high octane liquids has clearly been nuts in all kinds of ways for a long time. But it is just part of our political frostbite at the moment: we allow our living organisations and institutions to remain frozen in time because we have always done things that way, regardless of the living tissue we are killing in the process. From the endless cycle of public inquiries and ignored recommendations to our use of economics to rationalise things we have already decided to do to batting on with traditional exams: it seems we are just going to do what we are going to do. And freezing fuel duty now looks like it needs to be added to that list.

We can laugh at Trump for accepting an award of “undisputed champion of beautiful clean coal” by the Washington Coal Club and legislating that black is now white by revoking the Environmental Protection Agency’s scientific ruling from 2009 about the harms of climate change. But Trump does at least think he needs a reason to support the fossil fuel industry, even if he needs to make one up. We are just doing it because our politics has gangrene.

I have spent many days in rooms with groups of men (always men) anxious about their future income, where I advised them on how much to ask their companies for. Most of my clients as a scheme actuary were trustees of pension schemes of companies which had seen better days, and which were struggling to make the necessary payments to secure the benefits already promised, let alone those to come. One by one, those schemes stopped offering those future benefits and just concentrated on meeting the bill for benefits already promised. If an opportunity came to buy those benefits out with an insurance company (which normally cost quite a bit more than the kind of “technical provisions” target the Pensions Regulator would accept), I lobbied hard to get it to happen. In many cases, though, we were too late: the company went bust and we moved the scheme into the Pension Protection Fund instead. That was the life of a pensions actuary in the West Midlands in the noughties. I was often “Mr Good News” in those meetings, the ironic name for the man constantly moving the goalposts for how much money the scheme needed to meet those benefits bills. I saw my role as pushing the companies towards buy out if at all possible. None of the schemes I advised had a company behind them which could sustain ongoing pension costs long term. I would listen to the wishful thinking and the corporate optimism, smile and push for the “realistic” option of working towards buy out.

Then I went to work at a university, and found myself, for the first time since 2003, a member of an open defined benefit pension scheme. It was (and still is) a generous scheme, but was constantly complained about by the university lecturers who comprised most of its membership. I didn’t see any way that it was affordable for employers which seemed to struggle to employ enough lecturers, were very reluctant to award anything other than fixed term contracts, and had an almost feudal relationship with their PhD students and post docs. Staff went on strike about plans to close the scheme to future accrual and replace it with the most generous money purchase scheme I had ever seen. I demurred and wrote an article called Why I Won’t Strike. I watched in wonder when even actuarial lecturers at other universities enthusiastically supported the strike. However, over 10 years later, that scheme – the UK’s biggest – is still open. And I gained personally from continued active membership until 2024.

Now don’t get me wrong, I still think the UK university sector is wrong to maintain, uniquely amongst its peers, a defined benefit scheme. The funding requirement for it has been inflated by continued accrual over the last 8 years and therefore so has the risk it will spike at just the time when it is least affordable, a time which may soon be approaching with 45% of universities already reporting deficits. However, the strike demonstrated how important the pension scheme was to staff, something the constant grumbling before the strike had led university managers to doubt. And, once the decision had been made to keep the scheme open to future accrual, I had no more to add as an actuary. Other actuaries had the responsibility for advising on funding, in fact quite a lot of others as the UCU was getting its own actuarial advice alongside that which the USS was getting, but my involvement was now just that of a member, just one with a heightened awareness of the risks the employers were taking.

The reason I bring this up is because I detected something of the same position as my lonely one from the noughties amongst the group of actuaries involved in the latest joint report from the Institute and Faculty of Actuaries and the University of Exeter about the fight to maintain planetary climate solvency.

It very neatly sets out the problem: that the whole system of climate modelling and policy recommendations to date has almost certainly been underestimating how much warming is likely to result from a given increase in the level of carbon dioxide in the atmosphere. Therefore all the “carbon budgets” (the amounts we can emit before we hit particular temperature levels) have been assumed to be higher than they actually are, and estimates for when we exhaust them have given us longer than we actually have. This is due to the masking effect of particulate pollution in the air, which has resulted in around 0.5°C less warming than we would otherwise have had by now. However, efforts to remove sulphur from oil and coal fuels (themselves important for human health) have acted to reduce this aerosol cooling effect. The goalposts have moved.
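The mechanics can be illustrated with a toy budget calculation. Everything below is an illustrative assumption of mine, not a figure from the report: it uses the standard linear warming-per-cumulative-emissions framing (the “TCRE” approach behind IPCC-style budget estimates) with made-up numbers, and the function name is mine. The point is only that if warming per tonne emitted is higher than assumed, the remaining budget shrinks proportionately.

```python
# Toy illustration: how a higher climate sensitivity shrinks a carbon budget.
# Assumes warming scales linearly with cumulative emissions (TCRE framing).
# All numbers are illustrative, not taken from the IFoA/Exeter report.

def remaining_budget(target_c, warming_so_far_c, tcre_c_per_1000gt):
    """Remaining emissions (GtCO2) before a warming target is hit."""
    return (target_c - warming_so_far_c) / tcre_c_per_1000gt * 1000

# If each 1000 GtCO2 emitted causes 0.45C of warming, the budget to stay
# under 2C (with 1.3C already banked) is one thing...
print(remaining_budget(2.0, 1.3, 0.45))
# ...but if the true figure is nearer 0.6C per 1000 GtCO2, it is a quarter
# smaller, and every past budget estimate was too generous:
print(remaining_budget(2.0, 1.3, 0.60))
```

The same moved goalposts, in two lines of arithmetic: raise the warming per tonne and the years of headroom we thought we had evaporate.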

An additional reference I would add to the excellent references in the report is Hansen’s Seeing the Forest for the Trees, which concisely summarises all the evidence to suggest the generally accepted range for climate sensitivity is too low.

So far, so “Mr Good News”. And for those who say this is not something actuaries should be doing because they are not climate experts, this is exactly what actuaries have always done. We started the profession by advising on the intersection between money and mortality, despite not being experts in any of the conditions which affected either the buying power of money or the conditions which affected people’s mortality. We could however use statistics to indicate how things were likely to go in general, and early instances of governments wasting quite a lot of money without a steer from people who understood statistics got us that gig, and a succession of other related gigs over the years ahead.

The difficult bit is always deciding what course of action you want to encourage once you have done the analysis. This was much easier in pensions, as there was a regulatory framework to work to. It is much harder when, as in this case, it involves proposing changes in behaviour which are ingrained into our societies. If university lecturers can oppose something that is clearly not in the long term financial interests of their employers and push for something which makes their individual employers less secure, then how much more will the general public resist change when they can see no good reason for it?

And in this regard this feels like a report mostly focused on the finance industry. The analogies it makes with the 2008 financial crash, constant comparisons with the solvency regulatory regimes of insurers in particular and even the framing of the need to mitigate climate change in order to support economic growth are all couched in terms familiar to people working in the finance sector. This has, perhaps predictably, meant that the press coverage to date has mostly been concentrated in the pension, insurance and investment areas:

However, in the case of the 2008 crash, the causes could be addressed by restricting practices amongst the financial institutions which had just been bailed out and were therefore in no position to argue. Many of those restrictions have been loosened since, and I think many amongst the general public would question whether the decision to bail out the banks and impose austerity on everyone else is really a model to follow for other crises.

The next stage will therefore need to involve breaking out of the finance sector to communicate the message more widely, perhaps focusing on the first point in the proposed Recovery Plan: developing a different mindset. As the report says:

This challenge demands a shift in perspective, recognising that humanity is not separate from nature but embedded in it, reliant on it and, furthermore, now required to actively steward the Earth system.
To maintain Planetary Solvency, we need to put in place mechanisms to ensure our social, economic, and political systems respect the planet’s biophysical limits, thus preserving or restoring sufficient natural capital for future generations to continue receiving ecosystem services…

…The prevailing economic system is a risk driver and requires reform, as economic dependency on nature is unrecognised in dominant economic theory which incorrectly assumes that natural capital is substitutable by manufactured capital. A particular barrier to climate action has been lobbying from incumbents and misinformation which has contributed to slower than required policy implementation.

By which I assume they mean this type of lobbying:

And this is where it gets very difficult, because actuaries really do not have anything to add at this point. We are just citizens with no particular expertise about how to proceed, just a heightened awareness of the dangers we are facing if we don’t act.

But we can also, as the report does, point out that we still have agency:

Although this is daunting, it means we have agency – we can choose to manage human activity to minimise the risk of societal disruption from the loss of critical support services from nature.

This point chimes with something else I have been reading recently (and which I will be writing more about in the coming weeks): Samuel Miller McDonald’s Progress. As he says “never before have so many lives, human and otherwise, depended on the decisions of human beings in this moment of history”. You may argue the toss on that with me, which is fine, but, in view of the other things you may be scrolling through either side of reading this, how about this for a paragraph putting the whole question of when to change how we do things in context:

We are caught in a difficult trap. If everything that is familiar is torn down and all the structures that govern our day-to-day disintegrated, we risk terrible disorder. We court famines and wars. We invite power vacuums to be filled by even more brutal psychopaths than those who haunt the halls of power now. But if we don’t, if we continue on the current path and simply follow inertia, there is a good chance that the outcome will be far worse than the disruption of upending everything today. Maintaining status-quo trajectories in carbon emissions, habitat destruction and pollution, there is a high likelihood of collapse in the existing structure anyway. It will just occur under far worse ecological conditions than if it were to happen sooner, in a more controlled way. At least, that is what all the best science suggests. To believe otherwise requires rejecting science and knowledge itself, which some find to be a worthwhile trade-off. But reality can only be denied for so long. Dream at night we may, the day will ensnare us anyway.

One thing I never did in one of those rooms full of anxious men was to stand up and loudly denounce the pensions system we were all working within. Actuaries do not behave like that generally. However we have a senior group of actuaries, with the endorsement of their profession, publishing a report that says things like this (bold emphasis added by me):

Planetary Solvency is threatened and a recovery plan is needed: a fundamental, policy-led change of direction, informed by realistic risk assessments that recognise our current market-led approach is failing, accompanied by an action plan that considers broad, radical and effective options.

This is not a normal situation. We should act accordingly.

Source: https://xkcd.com/2415/ licence at: https://creativecommons.org/licenses/by-nc/2.5/

Happy new year all! New year, new banner, courtesy of my brilliant daughter who presented me with a plausible 3-D model of my very primitive cartoon of a reverse-centaur over Christmas. And I thought I would kick off with a relatively uncontentious subject: examinations!

“Back to normal!” That was the cry throughout education when the pandemic had finally ended enough for us to start cramming students into rooms again. The universities had all leveraged themselves to the maximum, and perhaps beyond, to add to the built estate, so as to entice students in both the overseas and the uncapped domestic market to their campuses, and one by-product of this was that they had plenty of potential examination halls. So let’s get away from all of that electronic remote nonsense and get everyone in a room together where you can keep an eye on them and stop them cheating. This united the purists who yearned for the days of 10% of the cohort turning up for elite education via chalk and talk rather than the 50% we have today, senior management who needed to justify the size of the built estate, and politicians who kept referring to traditional exams in an exam hall as the “gold standard”.

So, in a time when students have access to information, tools, how to videos of everything imaginable, the entire output of the greatest minds of thousands of years of human history, as well as many of the less than great minds, in short anything which has ever caught anyone’s attention and been committed to some form of media: in this of all times, we want to sort the students into categories for the existing job market based on how they answer academic questions about what they can remember unaided about the content of their lecture courses and reading lists with a biro on a pad of paper perched precariously on a tiny wooden table surrounded by hundreds of other similar scribblers, for a set period of time as minders wander the floors like Victorian factory owners.

And for institutions that thought the technology we fast-tracked for education delivery and assessment in the pandemic would surely be part of education’s future? Or perhaps they just can’t afford to borrow half a billion or don’t have the land available to construct more cathedrals of glass and brick to house more examination halls? Simple! We just create the conditions for that gold standard examination right there in the student’s own bedroom or the company they work for!

There are 54 pages to the Institute and Faculty of Actuaries’ (IFoA’s) guidance for remotely invigilated candidates. It covers everything from the minimum specification of equipment you need, including the video camera to watch your every movement and the microphone to pick up every sound you make, to the proprietary spying software (called “Guardian Browser”) you will need to download onto your own computer, how to prove who you are to the system, what you are allowed to have in your bedroom with you and even how you need to sit for the duration of the exam (with a maximum of two 5 minute breaks) to ensure the system has sufficient visibility of you at all times:

These closed book remote arrangements replaced the previous open book online exams which most institutions operated during the pandemic. The reason given was that the exam results shot up so much that widespread cheating was suspected and the integrity of the qualifications was at risk. The IFoA’s latest assessment regulations can be found here.

The belief in examinations is very widespread. A couple of months ago I was discussing the teacher assessments which replaced them briefly during the pandemic with a secondary business studies teacher. He took great pride in the fact that he based his assessments solely on mock results, ie an assessment carried out before all of the syllabus had been covered and when students were unaware it would be the final assessment. But still in his mind more “objective” than any opinion he might have of his own students.

If a large language model can perform enormously better in an examination than your students can without it, what it actually demonstrates is that the traditional examination is woefully unprepared for the future. As Carlo Iacono puts it:

The machines learned from us.

They learned what we actually valued and it turned out to be different from what we said we valued.

We said we valued originality. We rewarded conformity to genre. We said we valued depth. We measured surface features. We said we valued critical thinking. We gave higher marks to confident assertion than to honest uncertainty.

So now the machines produce what the world trained them to produce: fluent, confident, passable output that fits the shapes we reward.

And we’re horrified. Not because they stole something from us. Because they showed us what the systems were selecting for all along.

The scandal isn’t that a model can imitate student writing. The scandal is that we built an educational and professional culture where imitation passes as competence, and then acted shocked when a machine learned to imitate faster.

We trained the incentives. We trained the rubrics. We trained the career ladders.

The pattern recognition which gets you through most formal examinations is just too cheap and easy to automate now. It is no longer a useful skill, even by proxy. It might as well be Hogwarts’ sorting hat for all the use it is in a post-scarcity education world. If the machines have worked out how to unlock the elaborate captcha system we have placed around our gold standard assessments, an arms race of security measures protecting a range of tests which look increasingly narrow compared to the capabilities which matter does not seem like the way to go.

What we are doing instead is identifying which students are prepared to put themselves through literally anything to get the qualification. Companies like students like that. They will make ideal reverse-centaurs. The description of life as a reverse-centaur even sounds like the experience of a proctored exam:

Like an Amazon delivery driver, who sits in a cabin surrounded by AI cameras, that monitor the driver’s eyes and take points off if the driver looks in a proscribed direction, and monitors the driver’s mouth because singing isn’t allowed on the job, and rats the driver out to the boss if they don’t make quota.

The driver is in that van because the van can’t drive itself and can’t get a parcel from the curb to your porch. The driver is a peripheral for a van, and the van drives the driver, at superhuman speed, demanding superhuman endurance. But the driver is human, so the van doesn’t just use the driver. The van uses the driver up.

Source: Cory Doctorow, Enshittification

And, even if you are OK with all of that, all of these privacy intrusions don’t even work to prevent cheating! The ACCA, the world’s largest accounting professional body, has just announced it is stopping all remote exams after giving up the arms race against the cheats, who were seemingly facilitated in some cases by their Big Four employers lying about what had gone on.

Actuarial exams started in 1850, only 2 years after the Institute of Actuaries was established (Dermot Grenham wrote about them recently here). So keen were actuaries to institute examinations that this pre-dated the establishment of the first examination boards by a few years: in 1856, the Society of Arts (the Society for the encouragement of Arts, Manufactures and Commerce, later the Royal Society of Arts); in 1857, the University of Oxford Delegacy of Local Examinations; and in 1858, the University of Cambridge Local Examinations Syndicate (UCLES). However it was the massive expansion of the middle classes, as the Industrial Revolution disrupted society in so many ways, that led to the need for a new sorting hat beyond the capacity of the oral examinations that had previously been the norm.

Now people seem to be lining up to drag everyone back into the examination hall. Any suggestion of a retreat from traditional exams is met by howls of outrage from people like Sam Leith at The Spectator about lack of “rigour”. However, in my view, they are wrong.

Yes of course you can isolate students from every intellectual aid they would normally use, as a centaur, to augment their performance, limit the sources they can access, force them to rely on their own memories entirely, and put them under significant time pressure. You will definitely reduce marks by doing that. So that has made it harder and therefore more rigorous and more objective, right?

Well according to the Merriam-Webster dictionary, rigorous is a synonym of rigid, strict or stringent. However, while all these words mean extremely severe or stern, rigorous implies the imposition of hardship and difficulty. So promoting exams above all as an exercise in rigour reveals their true nature as a kind of punishment beating in written form, for which the prize for undergoing it is whatever it qualifies you for. Suddenly the sorting hat looks relatively less arbitrary.

The problems of traditional exams are well known, but the most important ones in my view are that they measure a limited range of abilities and therefore are unlikely to show what students can really achieve. Harder does not mean more objective. It is like deciding who can act by throwing students out, one at a time, in front of a baying mob of, let’s say for argument, readers of The Spectator. Sure, some of the students might be able to calm the crowd, some may even be able to redirect their anger towards a different target. But are the people who can play Mark Antony for real necessarily the best all-round actors? And has someone who can only stand frozen on the spot under those circumstances really proved that they could never act well?

It also means that education ends a month or more before the exams, to allow the appropriate cramming, followed by engaging all of the teaching staff in the extended exercise of marking, checking and moderating what has been written in answer to academic questions about what the students can remember unaided about the content of their lecture courses and reading lists with a biro on a pad of paper perched precariously on a tiny wooden table surrounded by hundreds of other similar scribblers, for a set period of time as minders wander the floors like Victorian factory owners. But what if instead the assessment was part of the teaching process? What if students felt that their assessment had been a meaningful part of their educational experience? What if, instead of arguing the toss over whether they scored 68% or 70% on an assessment, students could see for themselves whether they had demonstrated mastery of their subject?

One model of assessment which is getting a lot of attention at the moment, and one I am a big fan of, having used it at the University of Leicester on some modules, is something called interactive oral assessment, where students meet with a lecturer or tutor, individually or in a small group, and answer questions about work they have already submitted. It is a highly demanding form of assessment, for both the students and the assessors, but it means the final assessment is done with the student present and, with careful probing from the assessors, who will obviously need to have done a close reading of the project work beforehand, you can be highly confident of the degree to which the student understands the work they have submitted. It also allows the student to submit a piece of work of more complexity and ambition than can be accommodated by a traditional exam. And it needn’t take any more time than the marking of a traditional exam if the interviews are carried out online. Something which all the technology we developed through the pandemic allows us to do, without the need for spyware.

There are other models which also assess the technological centaurs we wish our students to become, rather than the reverse-centaurs we are currently dooming too many to be. It is looking like it may be time to tell students to stop writing and put down their pens on the traditional exam. And perhaps the actuarial profession, who led us into the era of professional written examinations so enthusiastically 175 years ago, might now want to take the lead in navigating our way out of them?

New (left) and old (right) Naiku shrines during the 60th sengu at Ise Jingu, 1973, via Bock 1974

In his excellent new book, Breakneck, Dan Wang tells the story of the high-speed rail links which started to be constructed in 2008 between San Francisco and Los Angeles and between Beijing and Shanghai respectively. Both routes would be around 800 miles long when finished. The Beijing-Shanghai line opened in 2011 at a cost of $36 billion. To date, California has built only a small stretch of its line, as yet nowhere near either Los Angeles or San Francisco, and the latest estimate of the completed bill is $128 billion. Wang uses this, amongst other examples, to draw a distinction between the engineering state of China “building big at breakneck speed” and the lawyerly society of the United States “blocking everything it can, good and bad”.

Europe doesn’t get much of a mention, other than to be described as a “mausoleum”, which sounds rather JD Vance, and there is quite a lot about this book that I disagree with strongly, which I will return to. However there is also much to agree with, and none more so than when Wang talks about process knowledge.

Wang tells another story, of Ise Jingu in Japan. Every 20 years exact copies of Naiku, Geku, and 14 other shrines here are built on vacant adjacent sites, after which the old shrines are demolished. Altogether 65 buildings, bridges, fences, and other structures are rebuilt this way. They were first built in 690. In 2033, they will be rebuilt for the 63rd time. The structures are built each time with the original 7th-century techniques, which involve no nails, just dowels and wood joints. Staff have a 200-year tree-planting plan to ensure enough cypress trees are planted to make the surrounding forest self-sufficient. The 20-year intervals between rebuildings match the length of a generation, the older passing the techniques on to the younger.

This, rather like the oral tradition of folk stories and songs, which were passed on by each generation as contemporary narratives until they were all written down and fixed in time so that they quickly appeared old-fashioned thereafter, is an extreme example of process knowledge. What is being preserved is not the Trigger’s Broom of temples at Ise Jingu, but the practical knowledge of how to rebuild them as they were originally built.

Trigger’s Broom. Source: https://www.youtube.com/watch?v=BUl6PooveJE

Process knowledge is the know-how of your experienced workforce that cannot easily be written down. It can develop where such a workforce works closely with researchers and engineers to create feedback loops which can also accelerate innovation. Wang contrasts Shenzhen in China, where such a community exists, with Silicon Valley, where it doesn’t, forcing the United States to have such technological wonders as the iPhone manufactured in China.

What happens when you don’t have process knowledge? Well one example would be our nuclear industry, where a lack of experience with pressurised water reactors has slowed down the development of new power stations and required us to rely considerably on French expertise. There are many other technical skill shortages.

China has recognised the supreme importance of process knowledge as compared to the American concern with intellectual property (IP). IP can of course be bought and sold as a commodity and owned as capital, whereas process knowledge tends to rest within a skilled workforce.

This may then be the path to resilience for the skilled workers of the future in the face of the AI-ification of their professions. Companies are being sold AI systems for many things at the moment, some of which will clearly make too many errors, or require too much “human validation” (a lovely phrase a good friend of mine, actively involved in integrating AI systems into his manufacturing processes, used recently), to be deemed practical. For early career workers entering these fields, the demonstration of appropriate process knowledge, or the ability to develop it very quickly, may be the key to surviving the AI roller coaster they face over the next few years. Actionable skills and knowledge which allow them to manage such systems rather than being managed by them. To be a centaur rather than a reverse-centaur.

Not only will such skills make you less likely to lose your job to an AI system, they will also increase your value on the employment market: the harder these skills and knowledge are to acquire, the more valuable they are likely to be. But whereas in the past, in a more static market, merely passing your exams and learning coding might have been enough for an actuarial student, for instance, the dynamic situation which sees everything that can be written down disappearing into prompts in some AI system will leave such roles unprotected.

Instead it will be the knowledge about how people are likely to respond to what you say in a meeting or write in an email or report, and the skill to strategise around those things, knowing what to do when the rules run out, when situations are genuinely novel, i.e. putting yourself in someone else’s shoes and being prepared to make judgements. It will be the knowledge about what matters in a body of data, putting the pieces together in meaningful ways, and the skills to make that obvious to your audience. It will be the knowledge about what makes everyone in your team tick and the skills to use that knowledge to motivate them to do their best work. It will ultimately be about maintaining independent thought: the knowledge of why you are where you are and the skill to recognise what you can do for the people around you.

These have not always been seen as entry-level skills and knowledge for graduates, but they are increasingly going to need to be, as the requirement grows to plug you in further up an organisation, if at all, as that organisation pursues its diamond strategy or something similar. And alongside all this you will need a continuing professional self-development programme on steroids: fully understanding the systems you are working with as quickly as possible, then understanding them all over again when they get updated, demanding evidence and transparency, and maintaining appropriate uncertainty when certainty would be more comfortable for the people around you, so that you can manage these systems into the areas where they can actually add value and out of the areas where they can cause devastation. It will be more challenging than transmitting the knowledge to build a temple out of hay and wood 20 years into the future, and it will be continuous. Think of it as the Trigger’s Broom Process of Career Management if you like.

These will be essential roles for our economic future: to save these organisations from both themselves and their very expensive systems. It will be both enthralling and rewarding for those up to the challenge.

The warehouse at the end of Raiders of the Lost Ark

In the year when I was born, Malvina Reynolds recorded a song called Little Boxes when she was a year younger than I am now. If you haven’t heard it before, you can listen to it here. You might want to listen to it while you read the rest of this.

I remember the first time I felt panic during the pandemic. It was a couple of months in, and we had been working very hard: putting our teaching processes online, consulting widely about appropriate remote assessments and getting agreement from the Institute and Faculty of Actuaries (IFoA) for our suggested approach at Leicester, checking in with our students, some of whom had become very isolated as a result of lockdowns, and a million other things. I was just sitting at my kitchen table when suddenly I felt tears welling up and I was unable to speak without my voice breaking down. It happened at intervals after that, usually during a quiet moment when I, consciously or unconsciously, had a moment to reflect on the enormity of what was going on. I could never point to anything specific that triggered it, but I do know that it has been a permanent change in me, and that my emotions have been very much closer to the surface ever since. I felt something similar again this morning.

What is going on? Well I haven’t been able to answer that satisfactorily until now, but recently I read an article by David Runciman in the LRB from nine years ago, when Donald Trump was first elected POTUS. I am not sure that everything in the article has withstood the test of time, but in it Runciman makes the case for Trump being the result of the people wanting “Trump to shake up a system that they also expected to shield them from the recklessness of a man like Trump”. And this part looks prophetic:

[Trump is]…the bluntest of instruments, indiscriminately shaking the foundations with nothing to offer by way of support. Under these conditions, the likeliest response is for the grown-ups in the room to hunker down, waiting for the storm to pass. While they do, politics atrophies and necessary change is put off by the overriding imperative of avoiding systemic collapse. The understandable desire to keep the tanks off the streets and the cashpoints open gets in the way of tackling the long-term threats we face. Fake disruption followed by institutional paralysis, and all the while the real dangers continue to mount. Ultimately, that is how democracy ends.

And it suddenly hit me that this was something I had indeed taken for granted my whole life until the pandemic came along. The only thing that had ever looked like toppling society itself was the prospect of a nuclear war. Otherwise it seemed that our political system was hard to change and impossible to kill.

And then the pandemic came along, and we saw governments, national and local, digging mass graves and then filling them in again, and setting aside vast arenas for people to die in before quietly closing them again. Rationing of food and other essentials was left to the supermarkets to administer, as were the massive snaking socially-distanced queues around their car parks. Seemingly arbitrary sets of rules suddenly started appearing at intervals about how and when we were allowed to leave the house and what we were allowed to do when out, and also how many people we could have in our houses and where they were allowed to come from. Most businesses were shut and their employees put on the government’s payroll. We learned which of us were key workers and spent a lot of time worrying about how we could protect the NHS, whom we clapped every Thursday. It was hard to maintain the illusion that society still provided solid ground under our feet, particularly if we didn’t have jobs which could be moved online. Whoever you were, you had to look down at some point, and I think now that I was having my Wile E. Coyote moment.

The trouble is, once you have looked down, it is hard to put that back in a box. At least I thought so, although there seems to have been a lot of putting things in boxes going on over the last few years. The UK Covid-19 Inquiry has made itself available online via a YouTube channel, but you might have thought that a Today at the Inquiry slot on terrestrial TV would have been more appropriate, rather than just covering it when famous people are attending. What we do know is that Patrick Vallance, Chief Scientific Advisor throughout the pandemic, has said that another pandemic is “absolutely inevitable” and that “we are not ready yet” for such an eventuality. Instead we have been busily shutting that particular box.

The biggest box of course is climate change. We have created a really big box for that called the IPCC. As the climate conferences migrate to ever more unapologetic petro-states, protestors are criminalised and imprisoned and emissions continue to rise, the box for this is doing a lot of work.

And then there are all the NHS boxes. As Roy Lilley notes:

If inquiries worked, we’d have the safest healthcare system in the world. Instead, we have a system addicted to investigating itself and forgetting the answers.

But perhaps the days of the box are numbered. The box Keir Starmer constructed to contain the anger about grooming gangs, which the previous seven-year-long box had been unable to completely envelop, also now appears to be on the edge of collapse. And the Prime Minister himself was the one expressing outrage when a perfectly normal British box, versions of which had been giving authority to policing decisions since at least the Local Government (Review of Decisions) Act 2015 (although the original push to develop such systems stemmed from the Hillsborough and Heysel disasters of 1989 and 1985 respectively), suddenly didn’t make the decision he was obviously expecting. That box now appears to be heading for recycling if Reform UK come to power, which is, of course, rather difficult to do in Birmingham at the moment.

But what is the alternative to the boxes? At the moment it does not look like it involves confronting our problems any more directly. As Runciman reflected on the second Trump inauguration:

Poor Obama had to sit there on Monday and witness the mistaking of absolutism for principle and spectacle for politics. I don’t think Trump mistakes them – he doesn’t care enough to mind what passes for what. But the people in the audience who got up and applauded throughout his speech – as Biden and Harris and the Clintons and the Bushes remained glumly in their seats – have mistaken them. They think they will reap the rewards of what follows. But they will also pay the price.

David Allen Green’s recent post on BlueSky appears to summarise our position relative to that of the United States very well:

To Generation Z: a message of support from a Boomer

So you’ve worked your way through school and now university, developing the skills you were told would always be in high demand, credentialising yourself as a protection against the vagaries of the global economy. You may have serious doubts about ever being able to afford a house of your own, particularly if your area of work is very concentrated in London…

…and you resent the additional tax that your generation pays to support higher education:

Source: https://taxpolicy.org.uk/2023/09/24/70percent/

But you still had belief in being able to operate successfully within the graduate market.

A rational, functional graduate job market should be assessing your skills and competencies against the desired attributes of those currently performing the role and making selections accordingly. That is a system both companies and graduates can plan for.

It is very different from a Rush. The first phenomenon known as a Rush was the Californian Gold Rush of 1848-55, although the capitalist phenomenon of transforming an area to facilitate intensive production probably dates from sugar production in Madeira in the 15th century. There have been many Rushes since, all neatly captured by this Punch cartoon from 1849:

A Rush is a big deal. The Californian Gold Rush resulted in the creation of California, now the 5th largest economy in the world. But when it comes to employment, a Rush is not like an orderly jobs market. As Carlo Iacono describes in an excellent article on the characteristics of the current AI Rush:

The railway mania of the 1840s bankrupted thousands of investors and destroyed hundreds of companies. It also left Britain with a national rail network that powered a century of industrial dominance. The fibre-optic boom of the late 1990s wiped out about $5 trillion in market value across the broader dot-com crash. It also wired the world for the internet age.

A Rush is a difficult and unpredictable place to build a career, with as much riding on dumb luck as on any personal characteristics you might have. There is very little you can count on in a Rush. This one is even less predictable because, as Carlo also points out:

When the railway bubble burst in the 1840s, the steel tracks remained. When the fibre-optic bubble burst in 2001, the “dark fibre” buried in the ground was still there, ready to carry traffic for decades. These crashes were painful, but they left behind durable infrastructure that society could repurpose.

Whereas the investment in AI infrastructure which accounted for 40–60% of US real GDP growth in the first half of 2025 isn’t like that:

The core assets are GPUs with short economic half-lives: in practice, they’re depreciated over ~3–5 years, and architectures are turning over faster (Hopper to Blackwell in roughly two years). Data centres filled with current-generation chips aren’t valuable, salvageable infrastructure when the bubble bursts. They’re warehouses full of rapidly depreciating silicon.
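To make the arithmetic of those short economic half-lives concrete, here is a minimal straight-line depreciation sketch. The `book_value` helper and all the figures in it are illustrative assumptions of mine, not numbers from Carlo’s article:

```python
def book_value(cost: float, life_years: float, age_years: float) -> float:
    """Remaining book value under straight-line depreciation,
    floored at zero once the asset is fully written off."""
    remaining_fraction = max(0.0, 1.0 - age_years / life_years)
    return cost * remaining_fraction

# $100 of GPUs written off over 4 years, versus $100 of
# rail infrastructure written off over 40 years:
for age in (1, 2, 4):
    print(age, book_value(100, 4, age), book_value(100, 40, age))
```

Four years in, the GPU fleet is worth nothing on the books, while the rail track has barely begun to depreciate: that asymmetry between the two kinds of boom is the point being made here.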

So today’s graduates are certainly going to need resilience, but that is just what their future employers will be demanding of them anyway. They also need to build their own support structures to see them through the massive disruption which is coming whether or not the enormous bet on AI is successful. The battle to be centaurs, rather than reverse-centaurs, as I set out in my last post (or as Carlo Iacono describes beautifully in his discussion of the legacy of the Luddites here), requires these alliances: stop thinking of yourselves as being in competition with each other, and start thinking of yourselves as being in competition for resources with my generation.

I remember when I first realised my generation (late Boomer, just before Generation X) was now making the weather. I had just sat the 304 Pensions and Other Benefits actuarial exam in London (now SP4 – unsuccessfully, as it turned out), and nipped into a matinee of Sam Mendes’ American Beauty and watched the plastic bag scene. I was 37 at the time.

My feeling is that, despite our increasingly strident efforts, our generation is now deservedly losing power and is trying to hang on by making reverse-centaurs of your generation as a last-ditch attempt to remain in control. It is like the scene in another movie, Triangle of Sadness, where the elite are swept onto a desert island and expect the servant, the only one with survival skills in such an environment, to carry on being their servant.

Don’t fall for it. My advice to young professionals is pretty much the same as it was to actuarial students last year on the launch of chartered actuary status:

If you are planning to join a profession to make a positive difference in the world, and that is in my view the best reason to do so, then you are going to have to shake a few things up along the way.

Perhaps there is a type of business you think the world is crying out for but it doesn’t know it yet because it doesn’t exist. Start one.

Perhaps there is an obvious skill set to run alongside your professional one which most of your fellow professionals haven’t realised would turbo-charge the effectiveness of both. Acquire it.

Perhaps your company has a client no one has yet taken the time to understand, by putting themselves in their shoes and communicating in a way the client will properly understand and value. Be that person.

Or perhaps there are existing businesses who are struggling to manage their way in changing markets and need someone who can make sense of the data which is telling them this. Be that person.

All while remaining grounded in whichever community you have chosen for yourself. Be the member of your organisation or community who makes it better by being there.

None of these are reverse centaur positions. Don’t settle for anything less. This is your time.