This week I caught Covid for the third time, so naturally my thoughts have turned to how it all began.
There are a few Covid posts starting to turn up online as the 6th anniversary of it all rumbles around. The British Foreign Policy Group have helpfully published a timeline from which I have taken everything that happened before Boris Johnson locked us down for the first time:
So a lot had happened by 23 March. You will all have your favourite bits from the saga above; I think mine is 22 January, when Public Health England announced they had moved the risk level to the general public from very low to low.
I remember teaching a macroeconomics class on 12 March when we knew it was going to be the last session on campus. The penny hadn't dropped. Students were asking about how they would hand work in. We agreed it would have to be online. Some lecturers were talking about microwaving paper submissions to sterilise them. We had a little giggle about that. I had spoken to Stuart McDonald (now MBE) earlier that day, when we had reluctantly agreed to postpone his visit to campus to speak to the Leicester Actuarial Science Society (LASS). Stuart would of course become one of the actuarial stars of the pandemic for his work with the COVID-19 Actuaries Response Group. I had a similar conversation by email with Lord Willetts, who was Chancellor at the University of Leicester at the time and who was going to talk to LASS about his books The Pinch and A University Education. We talked of postponing rather than cancelling. The realisation that everything was changing for the foreseeable future was still not there.
It took a long time for the penny to drop for the Government as well. As this analysis of the establishment of the “Covid Disinformation Ecosystem” says:
January featured fear and disbelief, February proved covid couldn’t simply be ignored, March was when governments realised the hospitalisation rate could overwhelm healthcare.
And a Government that was slow to respond initially was very vulnerable to the groups which sprang up during 2020 and 2021. As the Counter Disinformation Project says:
And the main initial target for the UK section of the ecosystem was Boris Johnson, who was meeting privately with newspaper owners and editors. Enough doubt was put into Johnson's mind that he dithered and delayed when cases began to rise, leading to a private meeting with Heneghan, Gupta and Sweden's Anders Tegnell in September before he chose to ignore his scientific advisors' calls for a circuit breaker lockdown. In the run-up to the deadliest weeks of the pandemic the papers were calling for Johnson to "Save Christmas".
However I don’t want to focus on our collective inability to make decisions during crises this time. This time I want to focus on the impact of the pandemic on our mental health.
By coincidence, today the 386-page Module 3 report from the Covid Inquiry on The impact of the Covid-19 pandemic on the healthcare systems of the United Kingdom was published. The longer this Inquiry goes on, the more it appears to resemble a truth and reconciliation commission rather than something likely to improve the handling of future pandemics. It gets past transgressions on the record, but in a way designed to move us on rather than improve our preparedness and organisation. I certainly saw nothing in the summaries that I didn't already know. Module 3 has made 10 recommendations. The only one which mentions mental health at all is the last one, on Psychological and emotional support for healthcare workers.
Looking through the module titles, it would seem that this is unlikely to be rectified until Module 10 – Impact on society – reports, currently scheduled for the first half of 2027. I find this relegation of our collective trauma to the lowest priority astonishing.
Data on the prevalence of mental health difficulties is harder to assess. For children and young people, surveys in England have provided a time series since 2020 that suggests very strongly that mental ill health is indeed more prevalent now than it was before the start of the pandemic. A steady rise in the decade prior to 2020 seems to have been followed by a sharp rise, and numbers have stayed high ever since. We do not have the equivalent data for adults, meaning that a clear picture has yet to emerge, but there is persuasive evidence that levels of mental ill health have been rising over the last decade, and the pandemic has contributed to many of the risk factors people face.
Before concluding as follows:
Crucially, the pandemic exposed fault-lines in the nation’s mental health, and the stark inequalities faced every day by people living with mental illness. The public’s mental health was deteriorating in the years running up to the pandemic, and mental health services were struggling to deal with the consequences of many years of underfunding and austerity measures across public services. People with a mental illness were already dying 15-20 years sooner than the general population, and facing widespread hardship. The pandemic exacerbated these inequalities, creating new risks to people’s mental health and reducing access to support.
We now have the opportunity to learn from this experience and build a mentally healthier future. We can act now to boost the public’s mental health in the aftermath of the pandemic, protecting those who have experienced the worst effects and offering better support to groups that don’t yet have access to the right support. And we can incorporate mental health into preparations for future emergencies, so that responses are psychologically informed from day one.
They also made 10 recommendations, mostly for the NHS and the Department of Health and Social Care, but also covering education, communications and considerations for the upcoming (at the time) review of the Mental Health Act. Less than half of these recommendations have been addressed at all.
Now we are two years on from that report, what has changed?
Well, Roy Lilley has painted a rather dispiriting picture for us. He draws attention to Wes Streeting's announcement in the Health Service Journal on 12 March that the proportion of the NHS budget spent on mental healthcare would be cut for the third year in a row. Lilley lists how the demands on mental health services have mushroomed since before the pandemic:
Around two million people were in touch with mental health services in 2019, today it’s around three million;
Child and Adolescent Services: in 2019 around 500,000 referrals. Now around a million;
And only around 45% of referrals are accepted, meaning the true demand is even higher;
Talking therapies are up by 60%; and
Crisis team referrals and sectioning under the Mental Health Act are also up 60%.
And he summarises the problem like this:
The total economic cost of mental ill-health in England in 2022 was estimated ~£300bn a year when lost productivity, welfare and wider costs are factored in.
The total MH budget is about £16bn. Meaning, the NHS is spending roughly £1 trying to address an £18 national problem.
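A quick sanity check on Lilley's arithmetic, using his own round numbers rather than precise budget figures:

\[ \frac{\pounds 300\text{bn}}{\pounds 16\text{bn}} \approx 18.75 \]

so for every £1 of mental health budget there is somewhere between £18 and £19 of estimated annual economic cost. The exact ratio matters far less than the order of magnitude.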
It feels like we are still waiting for the penny to drop.
A week or so ago I referred to a "Thought Exercise" set in June 2028 "detailing the progression and fallout of the Global Intelligence Crisis" (ie science fiction), published on 23 February, which may have tanked the share price of IBM later that day. As I said then, the fall definitely happened, with IBM's share price falling 13%, its biggest fall since 2000. I said then that the likelihood of the scenario portrayed was difficult to assess, but that the speed of the total economic collapse it described felt unlikely if not impossible. I would like to expand on that.
The main reason that the scenario was hard to assess was that it was not based on data or evidence at all. That is unavoidable for speculative fiction talking about things that are not currently happening, but when describing an economy only two years away where most of the processes described should be discernible to some extent already, it is totally avoidable.
Ed Zitron has done an excellent line-by-line takedown of the Citrini piece here. Here is one page of that to give you a flavour:
However this lack of a link with anything tangible did not stop the financial markets panicking, which should give us pause when relying on the financial markets' valuation of projects, industries, government policies, etc.
Ed Zitron describes this kind of piece as analyslop: "when somebody writes a long, specious piece of writing with few facts or actual statements with the intention of it being read as thorough analysis". It can then get picked up by other commentators who take it as their starting point for further analysis, often making it hard to see that the starting point had few if any data points. Here is an example, from Carlo Iacono, looking at what would follow if just some of the Citrini pronouncements were true, with appendices detailing possible branching paths of outcomes, all generated by a large language model (LLM). And then people start studying the meta-analysis, and it starts getting taken even more seriously, and put into models, and pretty soon most of the analysis is being done on imagined risks rather than on ones which are already staring us in the face.
We have always had a problem keeping our society grounded in reality: think of the 2003 Iraq War, where we went to war on a false assessment of Iraq's possession of weapons of mass destruction; the 2008 financial crisis, where banks misunderstood the risks they were exposed to; and the last two and a half years, where we, for the most part, seem to have convinced ourselves we have not been facilitating a genocide in Gaza when we clearly have been. But this is only going to get worse with the AI systems which are being developed.
The rapid rise of artificial intelligence has served to dramatically increase the speed of information production while also eroding accuracy, making it difficult to differentiate between content that simply sounds confident and content that’s actually grounded in reality.
So where is AI currently? Well PwC’s global CEO survey from January this year had the following statement as the first bullet amongst its key findings:
Most CEOs say their companies aren’t yet seeing a financial return from investments in AI. Although close to a third (30%) report increased revenue from AI in the last 12 months and a quarter (26%) are seeing lower costs, more than half (56%) say they’ve realised neither revenue nor cost benefits.
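As a quick consistency check on those percentages (my own inclusion-exclusion arithmetic, on the assumption that all three figures refer to the same pool of respondents):

\[ P(R \cup C) = 100\% - 56\% = 44\%, \qquad P(R \cap C) = 30\% + 26\% - 44\% = 12\% \]

where R is reporting increased revenue and C is reporting lower costs. The numbers hang together only if around 12% of CEOs saw both benefits, so "close to a third" is already the generous reading of AI's financial impact to date.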
That's the reality. But the hype is much, much more entertaining. My favourite spoof video of the AI future currently is this one, about the time when all most of us are good for is riding bicycles to supply the ever increasing energy needs of AI systems (click view in browser if you can't see it):
And what about the financial journalists? The pieces describing our reaction to whatever is about to unfold economically have already been written. There are investor websites asking if the 2026 crash has already begun, while another recent article argues that "America has quietly become one of the world's most shock-resistant economies" (which seems unlikely to age well). What most financial journalists are more comfortable with are articles, written after the fact, about how the warnings were ignored.
And the professions? Well the current overview of my own profession is probably reasonably represented by this piece from the Society of Actuaries in the United States. Unfortunately for them, Daniel Susskind, who is mentioned in the article, is currently suggesting, as part of his Future of Work lecture series for Gresham College, that the key to the sudden development in AI, after the "AI Winter" when progress seemed slow, was that we abandoned trying to make machines which thought and acted like humans in favour of focusing on completing tasks in any way possible. Increasingly we are now automating tasks where we can't (or won't) articulate how we do them. From Deep Blue's victory over Kasparov in 1997, to Watson winning Jeopardy! in 2011, to systems trained on ImageNet beating humans at image recognition (although that is disputed), Susskind refers to this progress as the displacement of purists in favour of what he calls "The Pragmatic Revolution". Pragmatism in this sense appears to mean that we humans should just accept whatever consequences the people running these systems want. So, as his latest lecture "Work, out of reach" claims, moving into cities to find work is a strategy which is no longer going to work for low-skilled people:
He then shows this graphic demonstrating the lack of recovery of big coal mining areas in the UK:
Source: Left – Sheffield Hallam University map of coal mining areas; Right – % employment from Overman and Xu (2022)
And finally he cites the notorious Policy Exchange piece from 2007, Cities Unlimited, whose thesis was that there is apparently no realistic prospect of regenerating towns and cities outside London and the South East.
Susskind talks about three forms of technological unemployment:
skills-mismatch, where your skills are mismatched to the work available. Education and training have always been the answer to this in the past.
place-mismatch, where the jobs are not where you have built your life. Some believe the answer should always be the one proposed by Norman Tebbit, who memorably told everyone in 1981, “I grew up in the 30s with an unemployed father. He did not riot. He got on his bike and looked for work.”
identity-mismatch, where according to Susskind, people are prepared to stay out of work to protect their identity, citing US men who won't take "pink collar" work, China's "rotten tail" kids, Japan's seishain-or-nothing attitudes and India's Sarkari Naukri queues. Or perhaps they are just looking for work which is consistent with the idea of human dignity.
Susskind claims to have no answer to any of these as far as AI is concerned. They are, in his view, just the inevitable outcomes of his "Pragmatic Revolution". It is the unthinking pursuit of more and more growth funded by capital less and less tethered to any territory, principle or purpose, where any grit in the machinery, be it unions or protestors or, increasingly, the wrong sort of government, must be trampled underfoot, along with anything else which impedes the helter-skelter rush to more and more at greater and greater speed. It's like our whole economy is run by this guy (press the view in browser link if you can't see him) shouting "Ready, Aim, Fire!":
But unskilled people will not be the only collateral damage of these unguided weapons. Take markets for instance. These are where people are exposed to risks and rewards based on underlying conditions they only partially understand. Greed and fear may be their main motivations, but gossip and groupthink are their main communication channels. They don't need facts, particularly when so many of the facts are proprietary information not in the public domain. A plausible narrative will do. And plausible narratives are what LLMs will do for you in abundance.
And the more we reward people who can move fast, eg to spot an arbitrage opportunity, even at the risk of breaking things, rather than people who can make decisions which still look good decades from now, the more we are setting up the conditions for AI systems to be the go-to tool.
And put that together with an AI industry which desperately needs funding capital to keep arriving, ie one which is unbelievably highly motivated to push plausible narratives even when it knows they are not grounded in reality, and you have a recipe for market-generated chaos.
And then we have Trump's new war. Beware the people who are war-gaming the Middle East at the moment on a range of LLMs (just stop and think for a moment about the bloodless inhuman impulse behind carrying out such an exercise rather than, I don't know, talking to some actual people who live or have lived recently in and around the region). One of the worst offenders is Heavy Lifting banging on about what the three scenarios are for Operation Epic Fury. This is as bad as it sounds:
I tasked her [he is talking about Gemini Pro here] with doing a literature review on regime change (a term often used by the President but not a well-defined one), creating three scenarios of possible outcomes for which each was given a percentage probability, and a list of 20 items to examine for each scenario that covered political, economic, and cultural issues with a special focus on the political consequences in the U.S. and what this means for China, our biggest geopolitical rival.
But Gemini Pro wasn't the only one involved in this. Two other humans were: Tim Parker and Ron Portante, trainers at the gym I go to. (Just as a personal aside, Tim was my coach in hitting six plates [345 pounds] on the sled last Friday and I have a video to prove it!) I was talking about the piece and Ron raised the issue of linguistic and cultural diversity in Iran. Tim did some real time research for me on his phone while I was burning real calories under his strict tutelage. This made me think I needed a background section on Iran. When I got home, Gemini and I added it.
What, you mean you belatedly realised you might need to have done some actual research into Iran rather than just generic research on regime change? I stopped reading at that point.
Meanwhile King’s College London have been carrying out war games more systematically using AI. Professor Kenneth Payne from the Department of Defence Studies led the study, which looked at how LLMs would perform in simulated nuclear crises. As Professor Payne said:
Nuclear escalation was near-universal: 95% of games saw tactical nuclear use and 76% reached strategic nuclear threats. Claude and Gemini especially treated nuclear weapons as legitimate strategic options, not moral thresholds, typically discussing nuclear use in purely instrumental terms. GPT-5.2 was a partial exception, limiting strikes to military targets, avoiding population centers, or framing escalation as “controlled” and “one-time.” This suggests some internalised norm against unrestricted nuclear war, even if not the visceral taboo that has held among human decision-makers since 1945.
This is not a Pragmatic Revolution. These AI systems cannot replace humans thinking about the future we want for humans in any way which is worth having. What they can do, if we let them, is accelerate our worst impulses and move us further away from considered reflective decision making.
But we will continue to use AI systems in the military because, as it turns out, it is very useful for low-stakes admin. So although Lavender, the system used by the Israeli military to select targets in Gaza, made errors in 10% of cases and was therefore totally inappropriate to the task, there are lots of organisational and logistical tasks where it is much quicker than the alternative and 10% error rates do not matter so much.
There is clearly an issue with what we decide to use these systems for. We need to be able to regulate the decisions which are particularly consequential. However the only mechanism we seem to be considering for this at the moment is the human-in-the-loop model, like the humans spending around 20 seconds considering each target recommended by Lavender before authorising a bombing. I have written about these before in the context of early career professionals in the finance industry, where the prospect seemed miserable enough:
They will be paid a lot more. However, as Cory Doctorow describes here, the misery of being the human in the loop for an AI system designed to produce output where errors are hard to spot and therefore to stop (Doctorow calls them "reverse centaurs", ie the human has become the horse part) includes being the ready-made scapegoat (or "moral crumple zone" or "accountability sink") for when they are inevitably used to overreach what they are programmed for and produce something terrible.
However it seems obvious to me that, in the context of dropping actual bombs on actual people, there is an even more serious problem with this model. As Simon Pearson (anti-capitalist musings) puts it:
The “human in the loop” requirement exists in military doctrine because international humanitarian law demands an accountable human decision-maker for lethal force. The laws of armed conflict require proportionality assessments, precautionary measures, distinction between combatants and civilians. All of these obligations attach to a human commander. The system cannot fulfil them. So a human must be present, and their presence must constitute a decision, regardless of whether any genuine decision was made.
What the institution needs from the analyst is not judgment. It is a signature. The signature converts a machine output into a human act. And a human act is what the law recognises, whether or not any judgment occurred. When the strike kills children, the chain of accountability runs to the analyst who approved the target: not to the system that identified it, not to the company that built the system, not to the doctrine that compressed the review window to ten seconds.
But whether we want to make money from exploiting a short term anomaly in a market, make our fellow humans redundant, prosecute a war on another group of fellow humans or “win” a war of mutual nuclear destruction, we need to retain the capacity for real human reflection within the decision-making processes we use. Not just a human-in-the-loop nor just the elites of tech companies deciding how the systems will be configured behind commercially confidential walls. These processes need democratic accountability every bit as much as our parliaments, councils, institutions and voting systems do.
Something infuriatingly slow, inclusive and deliberative, giving recommendations which are then stress-tested for how they would perform on contact with reality, involving yet more people being serious and deliberative and taking their responsibilities more seriously than being a human-in-the-loop would ever allow. Our decision-making systems need more grit and less oil. AI is all oil.
The Actuary magazine recently had a debate about whether the underlying data or the story you weave around it is more important. I'm not sure there is always a clear distinction between the two, as Dan Davies rather neatly illustrates here, but my view is that, if a binary choice has to be made, it is always going to be the story. And there was a great example of this which popped up recently in the FT.
The FT article was 'Is university still worth it?' is the wrong question, by John Burn-Murdoch, with great graphs as usual. However, as is sometimes the case, I feel that a very different and more convincing story could be wrapped around the same datasets he is showing us.
The article’s thesis is as follows:
The graduate earnings premium, ie how much more on average graduates earn than non-graduates, has fallen only in the UK as the proportion going to university has risen. It has risen in other countries:
In the UK, we have had much weaker productivity growth than the other comparator countries, and also “the steady ramping up of the minimum wage has squeezed the earnings premium from the lower end too”:
We have also had a much smaller increase in the percentage of managerial and professional jobs than a different group of comparator countries (they haven't mentioned Germany before), meaning graduates are forced to take lower-salaried jobs elsewhere:
So the answer according to the FT? We should focus on economic growth rather than “tweaking” higher education intake and funding. Then graduate earnings would be higher, student loans could be more generous(!) and students would have more chance of getting a good job.
Well perhaps. But here’s a different framing of the same data that I find more persuasive.
Let’s start by addressing that point about the minimum wage. According to the House of Commons Library report on this, the UK’s minimum wage is broadly comparable to that of France and the Netherlands, although higher than Canada’s and much higher than that of the United States. The employers who are the FT’s constituency would obviously like us lower down this particular chart:
The main economic framing here is the progress myth of the UK's business community: economic growth. All problems can be solved if we can just get more economic growth. Apparently we need more inequality in pay between graduates and non-graduates, which we can get by generating more economic growth. This is honest of them at least, although I don't see much evidence that the economic growth they crave will go into skilled job creation rather than stock buybacks (according to Motley Fool, "Companies spent $249 billion on stock buybacks in Q3 2025, and $777 billion over the first three quarters of 2025.").
There are a lot of problems with framing every economic question with respect to economic growth, memorably illustrated by Zack Polanski of the Green Party in this recent video of under three minutes (I strongly recommend you watch it before you read on – click on the read in browser link if you can't see it):
Economic growth is increasingly without purpose, wasteful of energy and poorly distributed. It is chasing outputs, literally any outputs, whatever the cost to the environment, our health system, our education system, our social support systems and our communities. Looking at the framing above, you can see that economic growth as currently pursued will always treat anything which stops the concentration of wealth amongst the already wealthy as a problem, whether that is a higher national minimum wage or a totally made-up concept like a lower graduate earnings premium (in itself a framing which tries to make reducing inequality seem undesirable). Lack of productivity growth, itself a proxy for this kind of economic growth (because if you ask why we need more productivity the answer is always to get more economic growth), is usually directed as a criticism at "lazy" UK workers, rather than at under-investing and over-extracting UK business owners.
But what if, instead of economic growth, your progress myth was reducing inequality? Or growing equality within the economy?
Source: World Inequality Database wid.world
If you focused on inequality rather than economic growth, then you would find that inequality correlates with everything we say we don't want. Unlike economic growth, having equality as an aim actually has the advantage of an evidence base for the claim that it improves society:
If you focused on inequality, then you would be pleased that we have had an increase in our minimum wage. You would think that the same FT article's admission that UK graduates' skill levels are higher than those in the United States was more important than something called a graduate earnings premium.
Burn-Murdoch is right to say asking whether university is worth it is the wrong question.
However economic growth is the wrong answer.
And I thought I would probably be stopping there for this week. But then something odd happened. A “Thought Exercise” set in June 2028 “detailing the progression and fallout of the Global Intelligence Crisis” (ie science fiction), published on 23 February, may have tanked the share price of IBM later that day. The fall definitely happened, with IBM’s share price falling 13%, its biggest fall since 2000, alongside smaller falls in other tech stocks.
Investors have recently seized on social media rumours and incremental developments by small AI companies to justify further selling, with a widely circulated blog post by Citrini Research over the weekend describing how AI could hypothetically push the US unemployment rate above 10 per cent by 2028, proving the latest catalyst.
The likelihood of the scenario portrayed is difficult to assess, but the speed with which the total economic collapse subsequently unfolds, as described, feels unlikely if not impossible. However the fact that the markets are this jittery tells us something, I think. As Carlo Iacono puts it:
We are living through a period in which the gap between “plausible narrative” and “tradeable signal” has collapsed to nearly nothing. When a scenario feels real enough to model, and the underlying anxiety is already there waiting to be organised, fiction and forecast become functionally indistinguishable.
The data underlying the markets hasn’t changed, but the story has. I rest my case.
I have spent many days in rooms with groups of men (always men) anxious about their future income, where I advised them on how much to ask their companies for. Most of my clients as a scheme actuary were trustees of pension schemes of companies which had seen better days, and who were struggling to make the necessary payments to secure the benefits already promised, let alone those to come. One by one, those schemes stopped offering those future benefits and just concentrated on meeting the bill for benefits already promised. If an opportunity came to buy those benefits out with an insurance company (which normally cost quite a bit more than the kind of "technical provisions" target the Pensions Regulator would accept), I lobbied hard to make it happen. In many cases we were too late though: the company went bust and we moved the scheme into the Pension Protection Fund instead. That was the life of a pensions actuary in the West Midlands in the noughties. I was often "Mr Good News" in those meetings, the ironic nickname for the man constantly moving the goalposts for how much money the scheme needed to meet those benefit bills. I saw my role as pushing the companies towards buy out if at all possible. None of the schemes I advised had a company behind them which could sustain ongoing pension costs long term. I would listen to the wishful thinking and the corporate optimism, smile and push for the "realistic" option of working towards buy out.
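For readers who have not sat in those rooms, the reason buy out normally costs more than a technical provisions target is mostly the discount rate, and a back-of-envelope annuity calculation shows the size of the effect. The numbers below are illustrative ones I have chosen for this sketch, not any actual scheme's. Valuing a pension of £10,000 a year for 20 years with the standard annuity factor

\[ a_{\overline{20}|} = \frac{1 - (1+r)^{-20}}{r} \]

gives about £136,000 at a technical provisions discount rate of r = 4% (gilts plus some assumed outperformance), but about £156,000 at an insurer's more prudent r = 2.5%, roughly 15% more. Multiply that gap across a whole scheme's membership and you can see why "Mr Good News" was rarely welcome.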
Then I went to work at a university, and found myself, for the first time since 2003, a member of an open defined benefit pension scheme. It was (and still is) a generous scheme, but was constantly complained about by the university lecturers who comprised most of its membership. I didn’t see any way that it was affordable for employers which seemed to struggle to employ enough lecturers, were very reluctant to award anything other than fixed term contracts, and had an almost feudal relationship with their PhD students and post docs. Staff went on strike about plans to close the scheme to future accrual and replace it with the most generous money purchase scheme I had ever seen. I demurred and wrote an article called Why I Won’t Strike. I watched in wonder when even actuarial lecturers at other universities enthusiastically supported the strike. However, over 10 years later, that scheme – the UK’s biggest – is still open. And I gained personally from continued active membership until 2024.
Now don't get me wrong, I still think the UK university sector is wrong to maintain, uniquely amongst its peers, a defined benefit scheme. The funding requirement for it has been inflated by continued accrual over the last 8 years and therefore so has the risk it will spike at just the time when it is least affordable, a time which may soon be approaching with 45% of universities already reporting deficits. However the strike demonstrated how important the pension scheme was to staff, something the constant grumbling before the strike had led university managers to doubt. And, once the decision had been made to keep the scheme open to future accrual, I had no more to add as an actuary. Other actuaries had the responsibility for advising on funding, in fact quite a lot of others as the UCU was getting its own actuarial advice alongside that which the USS was getting, but my involvement was now just that of a member, if one with a heightened awareness of the risks the employers were taking.
The reason I bring this up is because I detected something of the same position as my lonely one from the noughties amongst the group of actuaries involved in the latest joint report from the Institute and Faculty of Actuaries and the University of Exeter about the fight to maintain planetary climate solvency.
It very neatly sets out the problem: the whole system of climate modelling and policy recommendations to date has almost certainly been underestimating how much warming is likely to result from a given increase in the level of carbon dioxide in the atmosphere. Therefore all the "carbon budgets" (the amounts we can emit before we hit particular temperature levels) have been assumed to be higher than they actually are, and estimates for when we exhaust them have given us longer than we actually have. This is due to the masking effect of particulate pollution in the air, which has resulted in around 0.5C less warming than we would otherwise have had by now. However, efforts to remove sulphur from oil and coal fuels (themselves important for human health) have acted to reduce this aerosol cooling effect. The goalposts have moved.
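To see why the masking moves the goalposts, here is a back-of-envelope illustration using the standard logarithmic approximation for CO2 warming and deliberately round numbers of my own choosing (the report's modelling is of course far more careful). If equilibrium warming follows

\[ \Delta T = S \log_2\left(\frac{C}{C_0}\right) \]

with pre-industrial concentration C_0 = 280 ppm, current C of around 420 ppm and observed warming of about 1.3C, then the apparent sensitivity per doubling of CO2 is S ≈ 1.3/log_2(1.5) ≈ 2.2C. Add back the roughly 0.5C of warming masked by aerosols and S ≈ 1.8/0.585 ≈ 3.1C. On the first reading we cross 1.5C at around 280 × 2^{1.5/2.2} ≈ 450 ppm; on the second at around 280 × 2^{1.5/3.1} ≈ 390 ppm, a concentration we have already passed. Same observations, much smaller carbon budget.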
An additional reference I would add to the excellent references in the report is Hansen’s Seeing the Forest for the Trees, which concisely summarises all the evidence to suggest the generally accepted range for climate sensitivity is too low.
So far, so "Mr Good News". And for those who say this is not something actuaries should be doing because they are not climate experts, this is exactly what actuaries have always done. We started the profession by advising on the intersection between money and mortality, despite not being experts in any of the factors which affected either the buying power of money or people's mortality. We could however use statistics to indicate how things were likely to go in general, and early instances of governments wasting quite a lot of money without a steer from people who understood statistics got us that gig, and a succession of other related gigs over the years ahead.
The difficult bit is always deciding what course of action you want to encourage once you have done the analysis. This was much easier in pensions, as there was a regulatory framework to work to. It is much harder when, as in this case, it involves proposing changes in behaviour which are ingrained into our societies. If university lecturers can fight for something that is clearly not in the long term financial interests of their employers, and which makes their individual employers less secure, then how much more will the general public resist change when they can see no good reason for it?
And in this regard this feels like a report mostly focused on the finance industry. The analogies it makes with the 2008 financial crash, constant comparisons with the solvency regulatory regimes of insurers in particular and even the framing of the need to mitigate climate change in order to support economic growth are all couched in terms familiar to people working in the finance sector. This has, perhaps predictably, meant that the press coverage to date has mostly been concentrated in the pension, insurance and investment areas:
However in the case of the 2008 crash, the causes could be addressed by restricting practices amongst the financial institutions which had just been bailed out and were therefore in no position to argue. Many of those restrictions have been loosened since, and I think many amongst the general public would question whether the decision to bail out the banks and impose austerity on everyone else is really a model to follow for other crises.
The next stage will therefore need to involve breaking out of the finance sector to communicate the message more widely, perhaps focusing on the first point in the proposed Recovery Plan: developing a different mindset. As the report says:
This challenge demands a shift in perspective, recognising that humanity is not separate from nature but embedded in it, reliant on it and, furthermore, now required to actively steward the Earth system. To maintain Planetary Solvency, we need to put in place mechanisms to ensure our social, economic, and political systems respect the planet’s biophysical limits, thus preserving or restoring sufficient natural capital for future generations to continue receiving ecosystem services…
…The prevailing economic system is a risk driver and requires reform, as economic dependency on nature is unrecognised in dominant economic theory which incorrectly assumes that natural capital is substitutable by manufactured capital. A particular barrier to climate action has been lobbying from incumbents and misinformation which has contributed to slower than required policy implementation.
By which I assume they mean this type of lobbying:
And this is where it gets very difficult, because actuaries really do not have anything to add at this point. We are just citizens with no particular expertise about how to proceed, only a heightened awareness of the dangers we are facing if we don't act.
But we can also, as the report does, point out that we still have agency:
Although this is daunting, it means we have agency – we can choose to manage human activity to minimise the risk of societal disruption from the loss of critical support services from nature.
This point chimes with something else I have been reading recently (and which I will be writing more about in the coming weeks): Samuel Miller McDonald’s Progress. As he says “never before have so many lives, human and otherwise, depended on the decisions of human beings in this moment of history”. You may argue the toss on that with me, which is fine, but, in view of the other things you may be scrolling through either side of reading this, how about this for a paragraph putting the whole question of when to change how we do things in context:
We are caught in a difficult trap. If everything that is familiar is torn down and all the structures that govern our day-to-day disintegrated, we risk terrible disorder. We court famines and wars. We invite power vacuums to be filled by even more brutal psychopaths than those who haunt the halls of power now. But if we don’t, if we continue on the current path and simply follow inertia, there is a good chance that the outcome will be far worse than the disruption of upending everything today. Maintaining status-quo trajectories in carbon emissions, habitat destruction and pollution, there is a high likelihood of collapse in the existing structure anyway. It will just occur under far worse ecological conditions than if it were to happen sooner, in a more controlled way. At least, that is what all the best science suggests. To believe otherwise requires rejecting science and knowledge itself, which some find to be a worthwhile trade-off. But reality can only be denied for so long. Dream at night we may, the day will ensnare us anyway.
One thing I never did in one of those rooms full of anxious men was to stand up and loudly denounce the pensions system we were all working within. Actuaries do not behave like that generally. However we have a senior group of actuaries, with the endorsement of their profession, publishing a report that says things like this (bold emphasis added by me):
Planetary Solvency is threatened and a recovery plan is needed: a fundamental, policy-led change of direction, informed by realistic risk assessments that recognise our current market-led approach is failing, accompanied by an action plan that considers broad, radical and effective options.
This is not a normal situation. We should act accordingly.
Happy new year all! New year, new banner, courtesy of my brilliant daughter who presented me with a plausible 3-D model of my very primitive cartoon of a reverse-centaur over Christmas. And I thought I would kick off with a relatively uncontentious subject: examinations!
"Back to normal!" That was the cry throughout education when the pandemic had finally ended enough for us to start cramming students into rooms again. The universities had all leveraged themselves to the maximum, and perhaps beyond, to add to the built estate, so as to entice students in both the overseas and the uncapped domestic market to their campuses, and one by-product of this was that they had plenty of potential examination halls. So let's get away from all of that electronic remote nonsense and get everyone in a room together where you can keep an eye on them and stop them cheating. This united the purists who yearned for the days when 10% of the cohort turned up for elite education via chalk and talk rather than the 50% we have today, senior managers who needed to justify the size of the built estate, and politicians who kept referring to traditional exams in an exam hall as the "gold standard".
So, in a time when students have access to information, tools, how-to videos of everything imaginable, the entire output of the greatest minds of thousands of years of human history, as well as many of the less than great minds, in short anything which has ever caught anyone's attention and been committed to some form of media: in this of all times, we want to sort the students into categories for the existing job market based on how they answer academic questions about what they can remember unaided about the content of their lecture courses and reading lists with a biro on a pad of paper perched precariously on a tiny wooden table surrounded by hundreds of other similar scribblers, for a set period of time as minders wander the floors like Victorian factory owners.
And for institutions that thought the technology we fast-tracked for education delivery and assessment in the pandemic would surely be part of education's future? Or perhaps they just can't afford to borrow half a billion or don't have the land available to construct more cathedrals of glass and brick to house more examination halls? Simple! We just create the conditions for that gold standard examination right there in the student's own bedroom or at the company they work for!
There are 54 pages to the Institute and Faculty of Actuaries' (IFoA's) guidance for remotely invigilated candidates. It covers everything from the minimum specification of equipment you need, including the video camera to watch your every movement and the microphone to pick up every sound you make, to the proprietary spying software (called "Guardian Browser") you will need to download onto your own computer, how to prove who you are to the system, what you are allowed to have in your bedroom with you and even how you need to sit for the duration of the exam (with a maximum of two 5-minute breaks) to ensure the system has sufficient visibility of you at all times:
These closed book remote arrangements replaced the previous open book online exams which most institutions operated during the pandemic. The reason given was that the exam results shot up so much that widespread cheating was suspected and the integrity of the qualifications was at risk. The IFoA’s latest assessment regulations can be found here.
The belief in examinations is very widespread. A couple of months ago I was discussing the teacher assessments which replaced them briefly during the pandemic with a secondary business studies teacher. He took great pride in the fact that he based his assessments solely on mock results, ie an assessment carried out before all of the syllabus had been covered and when students were unaware it would be the final assessment. But still in his mind more “objective” than any opinion he might have of his own students.
If a large language model can perform enormously better in an examination than your students can without it, what it actually demonstrates is that the traditional examination is woefully unprepared for the future. As Carlo Iacono puts it:
The machines learned from us.
They learned what we actually valued and it turned out to be different from what we said we valued.
We said we valued originality. We rewarded conformity to genre. We said we valued depth. We measured surface features. We said we valued critical thinking. We gave higher marks to confident assertion than to honest uncertainty.
So now the machines produce what the world trained them to produce: fluent, confident, passable output that fits the shapes we reward.
And we’re horrified. Not because they stole something from us. Because they showed us what the systems were selecting for all along.
The scandal isn’t that a model can imitate student writing. The scandal is that we built an educational and professional culture where imitation passes as competence, and then acted shocked when a machine learned to imitate faster.
We trained the incentives. We trained the rubrics. We trained the career ladders.
The pattern recognition which gets you through most formal examinations is just too cheap and easy to automate now. It is no longer a useful skill, even by proxy. It might as well be Hogwarts' sorting hat for all the use it is in a post-scarcity education world. If the machines have worked out how to unlock the elaborate captcha system we have placed around our gold standard assessments, an arms race of security measures protecting a range of tests which look increasingly narrow compared to the capabilities which matter does not seem like the way to go.
What instead we are doing is identifying which students are prepared to put themselves through literally anything to get the qualification. Companies like students like that. They will make ideal reverse-centaurs. The description of life as a reverse-centaur even sounds like the experience of a proctored exam:
Like an Amazon delivery driver, who sits in a cabin surrounded by AI cameras, that monitor the driver’s eyes and take points off if the driver looks in a proscribed direction, and monitors the driver’s mouth because singing isn’t allowed on the job, and rats the driver out to the boss if they don’t make quota.
The driver is in that van because the van can’t drive itself and can’t get a parcel from the curb to your porch. The driver is a peripheral for a van, and the van drives the driver, at superhuman speed, demanding superhuman endurance. But the driver is human, so the van doesn’t just use the driver. The van uses the driver up.
And, even if you are OK with all of that, all of these privacy intrusions don't even work to prevent cheating! The ACCA, the world's largest accounting professional body, has just announced it is stopping all remote exams after giving up the arms race against the cheats, seemingly facilitated in some cases by their Big Four employers lying about what had gone on.
Actuarial exams started in 1850, only 2 years after the Institute of Actuaries was established (Dermot Grenham wrote about them recently here). This pre-dated the establishment of the first examination boards by a few years (1856: the Society of Arts, ie the Society for the encouragement of Arts, Manufactures and Commerce, later the Royal Society of Arts; 1857: the University of Oxford Delegacy of Local Examinations; 1858: the University of Cambridge Local Examinations Syndicate (UCLES)), so keen were actuaries to institute examinations. However it was the massive expansion of the middle classes as the Industrial Revolution disrupted society in so many ways that led to the need for a new sorting hat beyond the capacity of the oral examinations that had previously been the norm.
Now people seem to be lining up to drag everyone back into the examination hall. Any suggestion of a retreat from traditional exams is met by howls of outrage from people like Sam Leith at The Spectator about lack of “rigour”. However, in my view, they are wrong.
Yes of course you can isolate students from every intellectual aid they would normally use, as centaurs, to augment their performance, limit the sources they can access, force them to rely on their own memories entirely, and put them under significant time pressure. You will definitely reduce marks by doing that. So that has made it harder and therefore more rigorous and more objective, right?
Well according to the Merriam-Webster dictionary, rigorous is a synonym of rigid, strict or stringent. However, while all these words mean extremely severe or stern, rigorous implies the imposition of hardship and difficulty. So promoting exams above all as an exercise in rigour reveals their true nature as a kind of punishment beating in written form, the prize for undergoing it being whatever it qualifies you for. Suddenly the sorting hat looks relatively less arbitrary.
The problems of traditional exams are well known, but the most important ones in my view are that they measure a limited range of abilities and therefore are unlikely to show what students can really achieve. Harder does not mean more objective. It is like deciding who can act by throwing students out, one at a time, in front of a baying mob of, let’s say for argument, readers of The Spectator. Sure, some of the students might be able to calm the crowd, some may even be able to redirect their anger towards a different target. But are the people who can play Mark Antony for real necessarily the best all-round actors? And has someone who can only stand frozen on the spot under those circumstances really proved that they could never act well?
It also means that education ends a month or more before the exams, to allow the appropriate cramming, followed by engaging all of the teaching staff in the extended exercise of marking, checking and moderating what has been written in answer to academic questions about what the students can remember unaided about the content of their lecture courses and reading lists with a biro on a pad of paper perched precariously on a tiny wooden table surrounded by hundreds of other similar scribblers, for a set period of time as minders wander the floors like Victorian factory owners. But what if instead the assessment was part of the teaching process? What if students felt that their assessment had been a meaningful part of their educational experience? What if, instead of arguing the toss over whether they scored 68% or 70% on an assessment, students could see for themselves whether they had demonstrated mastery of their subject?
One model of assessment which is getting a lot of attention at the moment, and one I am a big fan of, having used it at the University of Leicester on some modules, is interactive oral assessment, where students meet with a lecturer or tutor, individually or in a small group, and answer questions about work they have already submitted. It is a highly demanding form of assessment, for both the students and the assessors, but it means the final assessment is done with the student present and, with careful probing from the assessors, who will obviously need to have done a close reading of the project work beforehand, you can be highly confident of the degree to which the student understands the work they have submitted. It also allows the student to submit a piece of work of more complexity and ambition than can be accommodated by a traditional exam. And, set against the marking time of a traditional exam, it needn't take any more time if the interviews are carried out online: something which all the technology we developed through the pandemic allows us to do, without the need for spyware.
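As a rough sense-check on that timing claim (with purely illustrative numbers of my own, not figures from any institution): for a module of 120 students, interactive orals at 20 minutes per student come to

\[ 120 \times 20 \text{ minutes} = 40 \text{ hours} \]

of assessor time, whereas marking 120 scripts at 20 minutes each, plus, say, 10 minutes each of checking and moderation, comes to 60 hours, before any remark requests. And the 40 hours is contact time with students rather than time alone with a pile of scripts.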
There are other models which also assess the technological centaurs we wish our students to become rather than the reverse-centaurs we are currently dooming too many to become. It is looking like it may be time to start telling students to stop writing and to put down their pens on the traditional exam. And perhaps the actuarial profession, who led us into the era of professional written examinations so enthusiastically 175 years ago, might now want to take the lead in navigating our way out of them?
So this is my 42nd blog post of the year and the 8th where I have referenced Cory Doctorow. I thought it was more, to be honest, so influential has he been on my thinking, particularly as I have delved deeper into what, how and why the AI Rush is proceeding and what it means for the people exiting universities over the next few years.
Yesterday Cory published a reminder of his book reviews this year. He is an amazing book reviewer. There are 24 on the list this year, and I want to read every one of them on the strength of his reviews alone.
I would like to repay the compliment by reviewing his latest book: Enshittification (the other publication this year – Picks and Shovels – is also well worth your time by the way). I can't believe this wasn't the word of the year rather than rage bait, as it explains considerably more about the times we are living in.
I have been a fan of Doctorow for a couple of years now. I had had Walkaway sat on my shelves for a few years before I read it and was immediately enthralled by his tale of a post-scarcity future which had still somehow descended into an inter-generational power struggle hellscape. I moved on to the Little Brother books, now being reenacted by Trump with his ICE force in one major US city after another. I followed those up with The Lost Cause, where the teenagers try desperately to bridge the gap across the generations with MAGA people, with tragic results along the way but a grim determination at the end: "the surest way to lose is to stop running". From there I migrated to the Marty Hench thrillers, his non-fiction The Internet Con (which details the argument for interoperability, ie the ability of any platform to interact with another) and his short fiction (I loved Radicalised, not just for the grimly prophetic Radicalised novella in the collection, but also the gleeful insanity of Unauthorised Bread). I highly recommend them all.
I came to Enshittification after reading his Pluralistic blog most days for the last year and a half, so was initially disappointed to find very little new as I started working my way through it. However the first two parts – The Natural History and The Pathology – are a patient explanation of the concept of enshittification and how it operates, assuming no previous engagement with the term, all in one place.
Enshittification, as defined by Cory Doctorow, proceeds as follows:
First, platforms are good to their users.
Then they abuse their users to make things better for their business customers.
Next, they abuse those business customers to claw back all the value for themselves.
Finally, they have become a giant pile of shit.
So far, so familiar. But then I got to Part Three, explaining The Epidemiology of enshittification, and the book took off for me. The erosion of antitrust (what we would call competition) law since Carter. "Antitrust's Vietnam" (how Robert Bork described the 12 years IBM fought and outspent the US Department of Justice year after year defending their monopolisation case, until Reagan became President). How this led to an opening to develop the operating system for IBM when it entered the personal computer market. How this led to Microsoft, etc. Then how the death of competition also killed Big Tech regulation (regulating a competitive market, where competition acts against collusion, is much easier than regulating one with a small number of big players which absolutely will collude with each other).
And then we get to my favourite chapter of the book “Reverse-Centaurs and Chickenisation”. Any regular reader of this blog will already be familiar with what a reverse centaur is, although Cory has developed a snappy definition in the process of writing this book:
A reverse-centaur is a machine that uses a human to accomplish more than the machine could manage on its own.
And if that isn't chilling enough for you, the description of the practices of poultry packers, how they control the lives of the nominally self-employed chicken farmers of the US, and how these practices have now been exported to companies like Amazon and Arise and Uber, should certainly be. There was one rare moment of hilarity in a generally sordid tale of utter exploitation: the prankster who collected up the bottled piss of Amazon drivers who weren't allowed a loo break and resold it on Amazon's own platform as "a bitter lemon drink" called Release Energy. Amazon recategorised it as a beverage without asking for any documentation to prove it was fit to drink and then, when it was so successful it topped their sales chart, rang the prankster up to discuss using Amazon for shipping and fulfilment. My favourite bit, though, is when Cory gets on to the production of his own digital rights management (DRM) free audio versions of his own books.
The central point of the DRM issue is, as Cory puts it, “how perverse DMCA 1201 is”:
If I, as the author, narrator, and investor in an audiobook, allow Amazon to sell you that book and later want to provide you with a tool so you can take your book to a rival platform, I will be committing a felony punishable by a five-year prison sentence and a $500,000 fine.
To put this in perspective: If you were to simply locate this book on a pirate torrent site and download it without paying for it, your penalty under copyright law is substantially less punitive than the penalty I would face for helping you remove the audiobook I made from Amazon’s walled garden. What’s more, if you were to visit a truck stop and shoplift my audiobook on CD from a spinner rack, you would face a significantly lighter penalty for stealing a physical item than I would for providing you with the means to take a copyrighted work that I created and financed out of the Amazon ecosystem. Finally, if you were to hijack the truck that delivers that CD to the truck stop and steal an entire fifty-three-foot trailer full of audiobooks, you would likely face a shorter prison sentence than I would for helping you break the DRM on a title I own.
DMCA 1201 is the big brake on interoperability. It is the reason, if you have an HP printer, you have to pay $10,000 a gallon for ink or risk committing a criminal offence by “circumventing an access control” (which is the software HP have installed on their printers to stop you using anyone else’s printer cartridges). And it is the reason for the increasing insistence on computer chips in everything from toasters (see “Unauthorised Bread” for where this could lead) to wheelchairs – so that using them in ways the manufacturer and its shareholders disapprove of becomes illegal.
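If $10,000 a gallon sounds implausible, a rough back-of-envelope check shows how quickly small cartridges get you there. The cartridge price and ink volume below are my own illustrative assumptions, not HP’s actual figures:

```python
# Rough sanity check on the "$10,000 a gallon" headline figure.
# Both inputs are illustrative assumptions, not actual HP prices.
cartridge_price_usd = 13.0   # assumed price of a small branded ink cartridge
cartridge_volume_ml = 5.0    # assumed millilitres of ink it contains
ML_PER_US_GALLON = 3785.41

price_per_gallon = cartridge_price_usd / cartridge_volume_ml * ML_PER_US_GALLON
print(f"${price_per_gallon:,.0f} per gallon")  # roughly $9,800 per gallon
```

A few dollars for a few millilitres is all it takes; the headline figure is a consequence of the packaging, not an exaggeration.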
The last bastion against enshittification by Big Tech was the tech workers themselves. Then the US tech sector laid off 260,000 workers in 2023 and a further 100,000 in the first half of 2024.
In case you are feeling a little depressed (and hopefully very angry too) at this stage, Part Four is called The Cure. This details the four forces that can discipline Big Tech and how they can all be revived, namely:
Competition
Regulation
Interoperability
Tech worker power
As Cory concludes the book:
Martin Luther King Jr once said, “It may be true that the law cannot make a man love me, but it can stop him lynching me, and I think that’s pretty important, also.”
And it may be true that the law can’t force corporate sociopaths to conceive of you as a human being entitled to dignity and fair treatment, and not just an ambulatory wallet, a supply of gut bacteria for the immortal colony organism that is a limited liability corporation.
But it can make that exec fear you enough to treat you fairly and afford you dignity, even if he doesn’t think you deserve it.
And I think that’s pretty important.
I was reading Enshittification on the train journey back from Hereford after visiting the Hay Winter Weekend, where I had listened to, amongst others, the oh-I’m-totally-not-working-for-Meta-any-more-but-somehow-haven’t-got-a-single-critical-word-to-say-about-them former Deputy Prime Minister Nick Clegg. While I was on the train, a man across the aisle had taken the decision to conduct a conversation with first Google and then Apple on speakerphone. A particular highlight was him just shouting “no, no, no!” at Google‘s bot trying to give him options. He had already been to the Vodafone shop that morning and was on his way to an appointment which he couldn’t get at the Apple Store on New Street in Birmingham. He spotted the title of my book and, when I told him what enshittification meant and how it might make some sense of the predicament he found himself in, took a photo of the cover.
My feeling is that enshittification goes beyond Big Tech. It is the defining industrial battle of our times. We shouldn’t primarily worry about whether it is coming from the private or the public sector, as enshittification can happen in both places: from the hollowing out of justice to “paying more for medicines… at the exact moment we can’t afford to pay enough doctors to prescribe them” in the public sector, where we already reside within the Government’s walled garden; to all of the outrages mentioned above, and more, in the private sector.
The PFI local health hubs set out in last week’s Budget take us back to perhaps the ultimate enshittificatory contracts the Government ever entered into, certainly before the pandemic. The Government got locked into 40-year contracts, took all the risk, and all the profit was privatised. The turbo-charging of the original PFI came out of the Blair-Brown government’s mania for keeping capital spending off the balance sheet in defence of Gordon Brown’s “Golden Rule”, which has now been replaced by Rachel Reeves’ equally enshittifying fiscal rules. All the profits (or, increasingly, rents, as Doctorow discusses in the chapter on Varoufakis’ concept of Technofeudalism) from turning the offer to shit always seem to end up in the private sector. The battle is against enshittification both from private monopolies directly and, by proxy, via public ones.
Enshittification is, ultimately, a positive and empowering book which I strongly recommend you buy, avoiding Amazon if you can. We can have a better internet than this. We can strike a better deal with Big Tech over how we run our lives. But the surest way to lose is to stop running.
And next time a dead-eyed Amazon driver turns up at your door, be nice: they are probably having a worse day than you are.
A couple of weeks ago I wanted to find an article I had written about heat pumps to check something. So I Googled weknow0 and heat pump. This did give me the article I was after, from December 2022, but also an “AI overview” that I hadn’t requested. The above is what it told me.
Now this is inaccurate on a number of counts. Firstly, I have published 226 articles over the more than 12 years I have been writing on weknow0.co.uk, and I have only mentioned heat pumps in two of these. These articles did focus on the points mentioned in 3 of the 4 bullet points above, and in one of them I also set out how the market at the time (December 2022) was stacked against anyone acquiring a heat pump, a state of affairs which has thankfully improved considerably since. However, to claim that my blog “provides a consumer-focused perspective in the practicalities and challenges of domestic heat pump adoption in the UK” is clearly hilarious.
In fact anyone seeing that would assume I talked about little other than heat pumps, so I decided to do a search on something else that I talk about infrequently and see what I got (I searched “weknow0 science fiction”):
This seems a considerably better summary of the recent activity on the blog, although it is unrecognisable as the same blog summarised in response to the previous search.
Right at the end, it suggests a reason for the title of the blog which isn’t an unreasonable guess from a regular reader. But a guess it still is, and it does not appear to have processed the significant number of blog posts with variants of “we know zero” in the title to fine-tune its take.
So someone using the AI overview as a research tool would get a completely different view of what the blog was about depending upon which other word they used alongside weknow0. Perhaps that doesn’t matter too much to anyone other than me in this case, but it is part of a broader issue. It is not actually summarising the website it purports to be summarising.
Of course many of you will now be shouting at me that I need to give the system more focused prompts. There is now a whole area of expertise, lectured in and written about at considerable length, called “prompt engineering”. There are senior professionals who for years have rarely given their juniors the time of day, offering the tersest responses to completely reasonable queries about the barely intelligible instructions they have given for a piece of work, who are suddenly prepared to spend hours and hours on prompt engineering so that the Metal Mickey in their phone or laptop can give them responses closer to what they were actually looking for.
At this point, perhaps we should hear from Sundar Pichai, the Google CEO:
As part of Faisal Islam’s slightly gushing interview with Pichai, we learn that the AI overview on Google is “prone to errors” and needs to be used alongside such things as Google search. “Use them for what they are good at but don’t blindly trust them”, he says of the tools in which he admits to currently investing $90 billion a year. This is of course a problem, as one of the reasons people are reluctantly resorting to the AI overview is that the basic Google search has become so enshittified.
And that kind of echoes what Cory Doctorow has said about Google. Google need to maintain a narrative about growth. You will have picked this up if you watched the Pichai interview above, from the breathless stuff about “one of the most powerful men in the world” “perhaps being one of the easier things for AI to replicate one day” to:
You don’t want to constrain an economy based on energy. That will have consequences.
To the even more breathless stuff about us being 5 years from quantum computing being where generative AI is now.
The reason for all the growth talk, according to Doctorow, is that Google needs to keep growing to maintain a price-earnings ratio of 20 to 1, rather than the more typical 4 to 1 of a mature business. So it’s all about the share price. As Doctorow says:
Which is why Google is so desperately sweaty to maintain the narrative about its growth. That’s a difficult narrative to maintain, though. Google has 90% Search market-share, and nothing short of raising a billion humans to maturity and training them to be Google users (AKA “Google Classroom”) will produce any growth in its Search market-share. Google is so desperate to juice its search revenue that it actually made search worse on purpose so that you would have to run multiple searches (and see multiple rounds of ads) before you got the information you were seeking.
Investors have metabolized the story that AI will be a gigantic growth area, and so all the tech giants are in a battle to prove to investors that they will dominate AI as they dominated their own niches. You aren’t the target for AI, investors are: if they can be convinced that Google’s 90% Search market share will soon be joined by a 90% AI market share, they will continue to treat this decidedly tired and run-down company like a prize racehorse at the starting-gate.
This is why you are so often tricked into using AI, by accidentally grazing a part of your screen with a fingertip, summoning up a pestersome chatbot that requires six taps and ten seconds to banish: companies like Google have made their product teams’ bonuses contingent on getting normies to “use” AI and “use” is defined as “interact with AI for at least ten seconds.” Goodhart’s Law (“any metric becomes a target”) has turned every product you use into a trap for the unwary.
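To get a feel for what that growth narrative is worth in raw numbers, here is a minimal sketch. The earnings figure is an illustrative assumption of mine, not Alphabet’s actual accounts; the two price-earnings ratios are Doctorow’s:

```python
# What a growth-stock price-earnings ratio is worth versus a mature one.
# The earnings figure is an illustrative assumption, not Alphabet's accounts.
annual_earnings = 100e9  # assume $100bn of annual earnings

growth_pe = 20  # the P/E Doctorow says Google must defend
mature_pe = 4   # his figure for a typical mature business

as_growth_stock = annual_earnings * growth_pe  # $2.0tn valuation
as_mature_stock = annual_earnings * mature_pe  # $0.4tn valuation

narrative_premium = as_growth_stock - as_mature_stock
print(f"Value of the growth story alone: ${narrative_premium / 1e12:.1f}tn")
```

On those assumptions, four-fifths of the valuation rests on the growth story rather than on the business as it currently trades, which is why the story has to be defended at all costs.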
So here we are. AI isn’t meant for most of you; its results are “prone to errors” and need to be used alongside other corroborating material or “human validation”. It needs you to take a course in prompt engineering even if you never did the same to manage any of your human staff. It is primarily designed to persuade investors to keep the share price up at the levels the Board of Alphabet Inc have become accustomed to.
In my last post I referred to Dan Wang’s excellent new book, Breakneck, which I have now read at (for me) breakneck speed, finishing it in a week. It has made me realise how very little I knew about China.
Wang makes the point that China today is reminiscent of the US of a century ago. However he also makes the point that parts of the US were terrible to live in then: from racist segregation and lack of representation, to massive industrial pollution and insensitive planning decisions. As he says of the US:
The public soured on the idea of broad deference to US technocrats and engineers: urban planners (who were uprooting whole neighborhoods), defense officials (who were prosecuting the war in Vietnam), and industry regulators (who were cozying up to companies).
China meanwhile has a Politburo stuffed with engineers and is capable of making snap decisions without much regard to what people want. There is a sense of precarity about life there, with people treated as aggregates rather than as individuals. The country can take off in different directions very quickly and often does – there is a telling passage about the totally different life experiences of someone born in 1959 compared to someone born in 1949 (the worst year to be born in China, according to Wang) – and even the elites can be dealt with brutally if they fall out of line with the current direction of travel. But they have created some impressive infrastructure, something which the US now finds very difficult to do. Only around 10% of China’s GDP goes towards social spending, compared to 20% in the US and 30% amongst some European states, so there is no effective safety net. Think of the US portrayed in (as Christmas is fast approaching) “It’s a Wonderful Life” – a life that is hard to the point of brutality, with destitution only one mistake away. And there is a level of social control alien to the west, controlling where people can live and work, and bearing down very repressively on ethnoreligious minorities. And yet there is a feeling of progress and forward momentum which appears to be popular with most people in China.
As Wang notes at the end of his introduction:
“Breakneck” is the story of the Chinese state that yanked its people into modernity – an action rightfully envied by much of the world – using means that ran roughshod over many – an approach rightfully disdained by much of the world. It is also a reminder that the United States once knew the virtues of speed and ambitious construction.
The chapter on the one-child policy, which ran for 35 years, is particularly chilling (China announced its first population fall in 2023 and its population is projected to halve to 700 million by 2100), and now the pressure is on women to have more children again. There is also a chapter on how China dealt with Covid – Wang experienced this first-hand from Shanghai for 3 years – which perhaps made me understand why we wasted so much money in the UK on Track and Trace. You would need to be an engineering state to see it through successfully, and China ended up taking it too far in the end.
The economics of China is really interesting. As Wang notes:
China’s overbuilding has produced deep social, financial and environmental costs. The United States has no need to emulate it uncritically. But the Chinese experience does offer political lessons for America. China has shown that financial constraints are less binding than they are cracked up to be. As John Maynard Keynes said, “Anything we can actually do we can afford.” For an infrastructure-starved place like the United States, construction can generate long-run gains from higher economic activity that eventually surpass the immediate construction costs. And the experience of building big in underserved places is a means of redistribution that makes locals happy while satisfying fiscal conservatives who are normally skeptical of welfare payments.
This goes just as much for the UK, where pretty much everywhere outside London is infrastructure-starved (and, as Nicholas Shaxson and John Christensen show here in their written evidence to a UK Parliamentary Committee, even where infrastructure is built outside London, the financing of it sucks money away from the area where the infrastructure is being built and towards finance centres, predominantly in London), but there is also strong resistance from all the main parties to significant redistribution via the benefit system. This results in inequalities which even the FT feels moved to comment on and a map of multiple deprivation in England which looks like this:
The good news is that it doesn’t have to be this way in the UK: there are prominent examples of countries operating in a different way, eg China. The bad news is that China is not doing it because of economics. It is doing it because the state was set up to build big from the beginning. It is in its nature. The lesson of China is that it will keep doing the same things whatever the situation (eg trying to fix the population fall caused by one engineering solution with another engineering solution). Sometimes the world economy will reward its approach and sometimes it will punish it, but that will not be the primary driver of how China behaves. I think this may be true of the US, the EU states and the UK too.
Daniel Kahneman showed us in Thinking, Fast and Slow how most of our mental space is used to rationalise decisions we have already taken. One of the places where I part company with Wang is in his reverence for economists. He believes that the US should listen more to both engineers and economists to challenge the lawyerly society.
In the foreword to The Principles of Economics Course (1990) by Phillip Saunders and William Walstad, Paul Samuelson, who in 1970 became the first person from the US to win the Nobel Memorial Prize in Economic Sciences, wrote:
“Poets are the unacknowledged legislators of the World.” It was a poet who said that, exercising occupational license. Some sage, it may have been I, declared in similar vein: “I don’t care who writes a nation’s laws—or crafts its advanced treaties—if I can write its economic textbooks.” The first lick is the privileged one, impinging on the beginner’s tabula rasa at its most impressionable state.
My view would be that the economists are already in charge.
As a result, my fear is that economics is now used for rationalising decisions we have already made in many countries, including our own. We are going to do what we are going to do. The economics is just the fig leaf we use to rationalise what might otherwise appear unfair, cruel, divisive and hope-denying policies. The financial constraints are less binding than they are cracked up to be, but they are a convenient fiction for a government which otherwise lacks any guiding principles for spending and investment, and therefore fears that without them everyone would just ask for more resources and it would have no way of deciding between the claims.
New (left) and old (right) Naiku shrines during the 60th sengu at Ise Jingu, 1973, via Bock 1974
In his excellent new book, Breakneck, Dan Wang tells the story of the high-speed rail links which started to be constructed in 2008, one between San Francisco and Los Angeles and one between Beijing and Shanghai. Both routes would be around 800 miles long when finished. The Beijing-Shanghai line opened in 2011 at a cost of $36 billion. To date, California has built only a small stretch of its line, as yet nowhere near either Los Angeles or San Francisco, and the latest estimate of the final bill is $128 billion. Wang uses this, amongst other examples, to draw a distinction between the engineering state of China “building big at breakneck speed” and the lawyerly society of the United States “blocking everything it can, good and bad”.
Europe doesn’t get much of a mention, other than to be described as a “mausoleum”, which sounds rather JD Vance, and there is quite a lot about this book that I disagree with strongly, which I will return to. However, there is also much to agree with, and never more so than when Wang talks about process knowledge.
Wang tells another story, of Ise Jingu in Japan. Every 20 years, exact copies of Naiku, Geku, and 14 other shrines here are built on vacant adjacent sites, after which the old shrines are demolished. Altogether 65 buildings, bridges, fences, and other structures are rebuilt this way. They were first built in 690. In 2033, they will be rebuilt for the 63rd time. The structures are built each time with the original 7th-century techniques, which involve no nails, just dowels and wood joints. Staff have a 200-year tree planting plan to ensure enough cypress trees are planted to make the surrounding forest self-sufficient. The 20-year intervals between rebuildings are the length of a generation, the older passing on the techniques to the younger.
This, rather like the oral tradition of folk stories and songs, which were passed on by each generation as contemporary narratives until they were all written down and fixed in time so that they quickly appeared old-fashioned thereafter, is an extreme example of process knowledge. What is being preserved is not the Trigger’s Broom of temples at Ise Jingu, but the practical knowledge of how to rebuild them as they were originally built.
Process knowledge is the know-how of your experienced workforce that cannot easily be written down. It can develop where such a workforce works closely with researchers and engineers to create feedback loops which can also accelerate innovation. Wang contrasts Shenzhen in China, where such a community exists, with Silicon Valley, where it doesn’t, forcing the United States to have such technological wonders as the iPhone manufactured in China.
China has recognised the supreme importance of process knowledge as compared to the American concern with intellectual property (IP). IP can of course be bought and sold as a commodity and owned as capital, whereas process knowledge tends to rest within a skilled workforce.
This may then be the path to resilience for the skilled workers of the future in the face of the AI-ification of their professions. Companies are being sold AI systems for many things at the moment, some of which will clearly either not work with few enough errors, or will require so much “human validation” (a lovely phrase recently used by a good friend of mine who is actively involved in integrating AI systems into his manufacturing processes) that they are not deemed practical. For early career workers entering these fields, the demonstration of appropriate process knowledge, or the ability to develop it very quickly, may be the key to surviving the AI roller coaster they face over the next few years. Actionable skills and knowledge which allow them to manage such systems rather than being managed by them. To be a centaur rather than a reverse-centaur.
Not only will such skills make you less likely to lose your job to an AI system, they will also increase your value on the employment market: the harder these skills and knowledge are to acquire, the more valuable they are likely to be. But whereas in the past, in a more static market, merely passing your exams and learning coding might have been enough for an actuarial student, for instance, the dynamic situation which sees everything that can be written down disappearing into prompts in some AI system will leave such roles unprotected.
Instead it will be the knowledge about how people are likely to respond to what you say in a meeting or write in an email or report, and the skill to strategise around those things, knowing what to do when the rules run out, when situations are genuinely novel, ie putting yourself in someone else’s shoes and being prepared to make judgements. It will be the knowledge about what matters in a body of data, putting the pieces together in meaningful ways, and the skills to make that obvious to your audience. It will be the knowledge about what makes everyone in your team tick and the skills to use that knowledge to motivate them to do their best work. It will ultimately be about maintaining independent thought: the knowledge of why you are where you are and the skill to recognise what you can do for the people around you.
These have not always been seen as entry-level skills and knowledge for graduates, but they are increasingly going to need to be, as the requirement grows to plug you in further up an organisation, if at all, as that organisation pursues its diamond strategy or something similar. And alongside all this you will need a continuing professional self-development programme on steroids: fully understanding the systems you are working with as quickly as possible, and then understanding them all over again when they get updated; demanding evidence and transparency; and maintaining appropriate uncertainty when certainty would be more comfortable for the people around you, so that you can manage these systems into the areas where they can actually add value and out of the areas where they can cause devastation. It will be more challenging than transmitting the knowledge to build a temple out of hay and wood 20 years into the future, and it will be continuous. Think of it as the Trigger’s Broom Process of Career Management if you like.
These will be essential roles for our economic future: to save these organisations from both themselves and their very expensive systems. It will be both enthralling and rewarding for those up to the challenge.
I have been watching Daniel Susskind’s lectures on AI and the future of work this week: Automation Anxiety was delivered in September and The Economics of Work and Technology earlier this week. The next in the series, entitled Economics and Artificial Intelligence, is scheduled for 13 January. They are all free and I highly recommend them for the great range of source material they present.
In my view the most telling graph, which featured in both lectures, was this one:
Susskind extended the usual concept of the ratio between average college and university graduate salaries and those of school leavers by including the equivalent ratio of craftsmen’s wages to labourers’, which gives us data back to 1220. There are two big collapses in this ratio in the data: one following the Black Death (1346-1353), which may have killed 50% of Europe’s 14th-century population, and one during the Industrial Revolution (which slow singularity started around 1760 and then took us through the horrors of the First World War and the Great Depression before the graph finally picks up post Bretton Woods).
As Susskind shows, the profits from the Industrial Revolution were not going to workers:
This, from 2019, introduced the idea that the picture is now more complex than high-skilled and low-skilled workers: now there is a middle. And, as Autor has set out more recently, the middle is getting squeezed:
Key dynamics at play include:
Labor Share Decline: OECD data reveal a 3–5 percentage point drop in labor’s share of income in sectors most exposed to AI, a trend likely to accelerate as automation deepens.
Wage Polarization: The labor market is bifurcating. On one end, high-complexity “sense-making” roles; on the other, low-skill service jobs. The middle is squeezed, amplifying both political risk and regulatory scrutiny.
Productivity Paradox 2.0: Despite the promise of AI-driven efficiency, productivity gains remain elusive. The real challenge is not layering chatbots atop legacy processes, but re-architecting workflows from the ground up—a costly and complex endeavor.
For enterprise leaders, the implications are profound. AI is best understood not as a job destroyer, but as a “skill-lowering” platform. It enables internal labor arbitrage, shifting work toward judgment-intensive, context-rich tasks while automating the rest. The risk is not just technological—it is deeply human. Skill depreciation now sits alongside cyber and climate risk on the board agenda, demanding rigorous workforce-reskilling strategies and a keen eye on brand equity as a form of social license.
So, even if the overall number of jobs may not be reduced, the case being made is that the average skill level required to carry them out will be. As Susskind said, the Luddites may have been wrong about the spinning jenny replacing jobs, but it did replace and transform tasks, and its impact on workers was to reduce their pay, quality of work, status as craftsmen and economic power. This looks like the threat being made by employers once again, with real UK wages still only at the level they were at in 2008:
However this is where I part company with Susskind’s presentation, which has an implicit inevitability to it. The message is that these are economic forces we can’t fight against. When he discusses whether the substituting force (where AI replaces you) or the complementing force (where AI helps you to be more productive and increases the demand for your work) will be greater, it is almost as if we have no part to play in this. There is some cognitive dissonance when he quotes Blake, Engels, Marx and Ruskin about the horrors of living through such times, but on the whole it is presented as just a natural historical process that the whole of the profits from the massive increases in productivity of the Industrial Revolution should have ended up in the pockets of the fat guys in waistcoats:
Richard Arkwright, Sir Robert Peel, John Wilkinson and Josiah Wedgwood
I was recently at Cragside in Northumberland, where the arms inventor and dealer William Armstrong used the immense amount of money he made from selling big guns (as well as big cranes and the hydraulic mechanism which powers Tower Bridge) to deck out his house and grounds with the five artificial lakes required to power the world’s first hydro-electric lighting system. His 300 staff ran around, like good reverse-centaurs, trying to keep his various inventions, from passenger lifts to an automated spit roast, from breaking down, so that he could impress his long list of guests and potential clients at Cragside, from the Shah of Persia to the King of Siam and two future Prime Ministers of Japan. He made sure they were kept running around with a series of clock chimes throughout the day:
However, with some poetic irony, the “estate regulator” is what has since brought the entire mechanism crashing to a halt:
Which brings me to Wallace and Gromit. Wallace is the inventor, heedless of the impact of his inventions on those around him, and especially on his closest friend Gromit, who he regularly dumps whenever Gromit becomes inconvenient to his plans. Gromit just tries to keep everything working.
Wallace is a cheese-eating monster who cannot be assessed purely on the basis of his inventions. And neither can Armstrong, Arkwright, Peel, Wilkinson or Wedgwood. We are in the process of allowing a similar domination of our affairs by our new monsters:
Meta CEO Mark Zuckerberg beside Amazon CEO Jeff Bezos and his fiancée (now wife) Lauren, Google CEO Sundar Pichai and Elon Musk at President Trump’s 2nd Inauguration.
Around half an hour into his second lecture, Daniel Susskind started talking about pies. This is the GDP pie (Susskind has also written a recent book on Growth: A Reckoning, which argues that GDP growth can go on forever – my view would be closer to the critique here from Steve Keen) which, as Susskind says, increased by a factor of 113 in the UK between 1700 and 2000. But, as Steve Keen says:
The statistics strongly support Jevons’ perspective that energy—and specifically, energy from coal—caused rising living standards in the UK (see Figure 2). Coal, and not a hypothesised change in culture, propelled the rise in living standards that Susskind attributes to intangible ideas.
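As an aside, that factor of 113 sounds more dramatic than the underlying rate of change. A quick calculation (mine, not Susskind’s) shows what it implies as a compound annual growth rate:

```python
# What a 113-fold increase over 300 years implies as a compound annual rate.
growth_factor = 113   # Susskind's figure for UK GDP, 1700 to 2000
years = 300

annual_rate = growth_factor ** (1 / years) - 1
print(f"Implied compound annual growth: {annual_rate:.2%}")  # about 1.59%
```

Three centuries of compounding at well under 2% a year is all it takes to multiply the pie 113-fold.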
Susskind talks about the productivity effect, the bigger pie effect and the changing pie effect (ie changes to the types of work we do – think of the changes in the CPI basket of goods and services) as ways in which jobs are created by technological change. However, he has nothing to say about just giving less of the pie to the monsters. Instead, for Susskind, the AI Rush is all about clever people throwing 10 times the amount of money at AI as was directed at the Manhattan Project, and the heads of OpenAI, Anthropic and Google DeepMind stating that AI will replace humans in all economically useful tasks within 10 years, a claim which he says we should take seriously. Cory Doctorow, amongst others, disagrees. In his latest piece, When AI prophecy fails, he has this to say about why companies have reduced recruitment despite the underperformance of AI systems to date:
All this can feel improbable. Would bosses really fire workers on the promise of eventual AI replacements, leaving themselves with big bills for AI and falling revenues as the absence of those workers is felt?
The answer is a resounding yes. The AI industry has done such a good job of convincing bosses that AI can do their workers’ jobs that each boss for whom AI fails assumes that they’ve done something wrong. This is a familiar dynamic in con-jobs.
The Industrial Revolution had a distribution problem which gave birth to Chartism, Marxism, the Trades Union movement and the Labour Party in the UK alone. And all of that activity only very slowly chipped away at the wealth share of the top 10%:
However, the monsters of the Industrial Revolution did at least have solid proof that they could deliver what they promised. You don’t get more concrete a proof of concept than this, after all:
What the monsters of the AI Rush lack is anything tangible to support their increasingly ambitious assertions. Wallace may be full of shit. And the rest of us can all just play a Gromit-like support role until we find out one way or the other, or concentrate on what builds resilient communities instead.
Whether you think the claims for the potential of AI are exaggerated; or that the giant bet on it that the US stock market has made will end in an enormous depression; or that the energy demands of this developing technology will be its constraining force ultimately; or that we are all just making the world a colder place by prioritising systems, however capable, over people: take your pick as a reason to push back against the AI Rush. But my bet would be on the next 10 years not being dominated by breathless commentary on the exploits of Tech Bros.