Het Scheepvaartmuseum, Amsterdam, in the fog. Another museum which is well worth a visit

To be read to the accompaniment of Lindisfarne singing Fog on the Tyne, or possibly Kate Bush singing The Fog.

Reporting on AI is all over the place, in both meanings of that phrase. Some think it is very dangerous but that the people working on it should be trusted to police it themselves. Some are retreating from prediction but are instead trying to draw a coastline “knowing the interior is mostly fog”. Some are playing war games in the Arctic with different LLMs. But everyone seems fairly confident they have a hot take. I wonder.

The book I finished this weekend had a passage about a first experiment with a new substance which could shield against gravity. Mr Cavor, the rather unworldly scientist, is explaining to Mr Bedford, a man with no obvious talents other than to look for a quick buck where he can find one, what would have happened if his substance, Cavorite, had not got dislodged fairly quickly from where they had positioned it:

“You perceive,” he said, “it formed a sort of atmospheric fountain, a kind of chimney in the atmosphere. And if the Cavorite itself hadn’t been loose and so got sucked up the chimney, does it occur to you what
would have happened?”

I thought. “I suppose,” I said, “the air would be rushing up and up over that infernal piece of stuff now.”

“Precisely,” he said. “A huge fountain—”

“Spouting into space! Good heavens! Why, it would have squirted all the atmosphere of the earth away! It would have robbed the world of air! It would have been the death of all mankind! That little lump of stuff!”

“Not exactly into space,” said Cavor, “but as bad—practically. It would have whipped the air off the world as one peels a banana, and flung it thousands of miles. It would have dropped back again, of course—but on an asphyxiated world! From our point of view very little better than if it never came back!”

I stared. As yet I was too amazed to realise how all my expectations had been upset. “What do you mean to do now?” I asked.

“In the first place if I may borrow a garden trowel I will remove some of this earth with which I am encased, and then if I may avail myself of your domestic conveniences I will have a bath. This done, we will converse more at leisure. It will be wise, I think”—he laid a muddy hand on my arm—“if nothing were said of this affair beyond ourselves. I know I have caused great damage—probably even dwelling-houses may be ruined here and there upon the country-side. But on the other hand, I cannot possibly pay for the damage I have done, and if the real cause of this is published, it will lead only to heartburning and the obstruction of my work. One cannot foresee everything, you know, and I cannot consent for one moment to add the burden of practical considerations to my theorising…”

The extract is, of course, from HG Wells’ classic The First Men in the Moon, published in 1901.

In case you are in any doubt, Dario Amodei is our Mr Cavor here. I can just imagine his response to the first disaster attributed to AI research being prefaced by “one cannot foresee everything, you know…”. And there are more Mr Bedfords out there than you can shake a stick at, trying to sell you anything they can possibly attribute to AI just to keep the whole thing rolling along.

I am with the fog people. The FT seem to be too, with this pair of diagrams attached to this article.

First the US, where there are tentative signs of something they can possibly use as a proxy for productivity growth as a result of using AI:

Source: https://www.ft.com/content/d6fdc04f-85cf-4358-a686-298c3de0e25b

And this one for the UK, where there aren’t:

And so it was this foggy sensibility about AI which I took with me to the Bletchley Park Museum last weekend, site of the AI Safety Summit in November 2023 which drew in the US Vice President, Kamala Harris, European Commission President Ursula von der Leyen, Elon Musk, then UK Prime Minister Rishi Sunak, OpenAI’s Sam Altman, Meta’s Nick Clegg and Prof Yann LeCun, Meta’s chief AI scientist, amongst around 100 guests invited to suck their teeth about AI.

The thing that particularly struck me at Bletchley Park is that it demystified the emergence of the computer for me. The forerunner, which was the mechanisation using punch cards of the process of sorting the massive amounts of data the centre was receiving in war time, smacks of a group of people who had just run out of wall to spread their webs of cards and strings across. It was a crime investigation which had got out of hand.

A highlight for me was Alan Turing’s very prescient little note about AI, written in 1940 but anticipating the arguments which would be raging by 2026 (and how poignant that the man who probably did more than anyone to transform what we are able to do by punching a keyboard was chained to one that could only press hunks of metal against a strip of carbon onto a piece of paper):

There is also a hilarious secrecy pledge from the ancestors of the safety summit people, telling you all the ways in which you just need to shut up:

“There is an English proverb none the worse for being seven centuries old:” it thunders.

Wicked tongue breaketh bone,

Though the tongue itself hath none.

Words to live by, I’m sure we’d all agree.

What Bletchley Park was less good at was explaining how the Enigma code was cracked, despite an excellent collection of the hardware involved. For that, I recommend Simon Singh’s The Code Book.

Here was the world’s first “intelligence factory”, scaling up intelligence gathering and analysis as never before and by so doing also changing the way governments would interact with their populations, with just as many implications for our current times as the development of AI. This cluster of huts around a country house rebranded as GCHQ and moved to Cheltenham a few years after World War 2.

Path dependence is a term which describes a situation where past events or decisions constrain later events or decisions. Bletchley Park feels like the Museum of Path Dependence to me.

And the legacy of the safety summit? Well my “hot take” would be: when you are a little lost in the fog, it is generally advisable to slow down a bit and take steps to reduce your risk of breaking things. I wonder if I can get that on a bumper sticker.

In ordering #5, self-driving cars will happily drive you around, but if you tell them to drive to a car dealership, they just lock the doors and politely ask how long humans take to starve to death. Source: https://m.xkcd.com/1613/

To be read to the soundtrack of Bruce Springsteen singing Streets of Philadelphia.

My attention was drawn this week to an article called The Adolescence of Technology by Dario Amodei, co-founder of Anthropic (the creator of the large language model Claude, and a spin-off from OpenAI, which was co-founded by Elon Musk and heavily invested in by Microsoft, so very much part of the Magnificent 7 architecture). It is hard to overemphasise how much I disagree with everything Dario has written here, but the article is also useful: it is long, it covers a lot of ground, and it allows me to define my views in opposition to it.

The irritations start pretty much straight away. So Dario quotes from a science fiction classic (Carl Sagan’s Contact), but then follows this up under the heading of “Avoid doomerism” with this:

…but it’s my impression that during the peak of worries about AI risk in 2023–2024, some of the least sensible voices rose to the top, often through sensationalistic social media accounts. These voices used off-putting language reminiscent of religion or science fiction, and called for extreme actions without having the evidence that would justify them.

Notice the word “sensible” doing the heavy lifting there. Only science fiction endorsed by Dario will be considered. Dario wants us to consider the risks of AI in “a careful and well-considered manner”, which sounds reasonable, but then his third and final bullet under this (after “avoid doomerism” and “acknowledge uncertainty”) goes as follows:

Intervene as surgically as possible. Addressing the risks of AI will require a mix of voluntary actions taken by companies (and private third-party actors) and actions taken by governments that bind everyone. The voluntary actions—both taking them and encouraging other companies to follow suit—are a no-brainer for me. I firmly believe that government actions will also be required to some extent, but these interventions are different in character because they can potentially destroy economic value or coerce unwilling actors who are skeptical of these risks (and there is some chance they are right!).

So reflexively anti-regulation of his own industry, of course. And voluntary actions by corporations, an approach to solving problems which has repeatedly been demonstrated not to work, is apparently “a no-brainer”. It is also automatically assumed that government actions will destroy value. Only market solutions will be endorsed by Dario, pretty much until they have messed up so badly you are forced to bring governments in:

To be clear, I think there’s a decent chance we eventually reach a point where much more significant action is warranted, but that will depend on stronger evidence of imminent, concrete danger than we have today, as well as enough specificity about the danger to formulate rules that have a chance of addressing it. The most constructive thing we can do today is advocate for limited rules while we learn whether or not there is evidence to support stronger ones.

There is then the expected sales pitch about what he has seen within Anthropic about the relentless “increase in AI’s cognitive capabilities”. And then the man who warned about sensationalist science fiction is off:

I think the best way to get a handle on the risks of AI is to ask the following question: suppose a literal “country of geniuses” were to materialize somewhere in the world in ~2027. Imagine, say, 50 million people, all of whom are much more capable than any Nobel Prize winner, statesman, or technologist.

And the rest of the article is then off solving this imaginary problem in all its facets, rather than the wealth and power concentration problem that we actually have. The only legislation he seems to be in favour of is something called “transparency legislation”, legislation which of course Anthropic would help to write.

However, after suggesting everything from isolating China and using “AI to empower democracies to resist autocracies” to private philanthropy as the solutions to his imagined problems, Dario finally and reluctantly concludes government intervention might after all be necessary as follows:

…ultimately a macroeconomic problem this large will require government intervention. The natural policy response to an enormous economic pie coupled with high inequality (due to a lack of jobs, or poorly paid jobs, for many) is progressive taxation. The tax could be general or could be targeted against AI companies in particular. Obviously tax design is complicated, and there are many ways for it to go wrong. I don’t support poorly designed tax policies. I think the extreme levels of inequality predicted in this essay justify a more robust tax policy on basic moral grounds, but I can also make a pragmatic argument to the world’s billionaires that it’s in their interest to support a good version of it: if they don’t support a good version, they’ll inevitably get a bad version designed by a mob.

That, by the way, is what Dario thinks of democracy: “a bad version designed by a mob” rather than the “good version” that he and his fellow billionaires could come up with in their own self interest. The mask has really slipped by this point. And the following section, on “Economic concentration of power”, just demonstrates that he has no effective answers at all that he deems acceptable on this. It’s just an inevitability for him.

This is what Luke Kemp’s excellent Goliath’s Curse refers to as a “Silicon Goliath”. Goliaths are dominance hierarchies which spread by dominating the areas around them. They need three conditions (which Luke calls “Goliath fuel”): lootable resources (ie resources which can be easily stolen off someone else), caged land (ie land difficult to escape from) and monopolizable weapons (ie ones which require processes which can be developed to give one society an edge over another). We are all Goliath-dwellers in “The West” now, looting resources from other countries in unequal exchanges which impoverish the Global South, with weapons (eg nuclear weapons) available only to the elite few countries and operating within the cages of heavily-policed national boundaries. The Silicon Goliath which is developing will have data as its lootable resource, mass surveillance systems providing its cages and monopolizable weapons such as killer drones. The defences which people like Dario Amodei laughably imagine they have against the resultant killbot hellscapes, through things like Claude’s Constitution, are almost pitiful in their inadequacy.

Nate Hagens takes Dario’s claims for AI’s cognitive capabilities much more seriously than I do, and then considers the risks in a less adolescent way here. As he says:

And here’s what his essay has almost nothing about. Energy, water, materials, or ecological limits.

And also nowhere does Dario talk about the 99% of people who are just spectators in his world, other than to describe them as “the mob”. This is quite a blind spot, as Luke Kemp points out in his exhaustive study of the collapses of “Goliaths” over the last 5,000 years. “The extreme levels of inequality” predicted by Amodei in his essay are not just things we have to put up with, but the reasons the world he predicts is likely to be hugely unstable. Not created by AI, but accelerated by it. Kemp describes it as “diminishing returns on extraction”:

We see a pattern re-emerging across case studies. Societies grow more fragile over time and more prone to collapse. Threats that they had always faced such as invaders, disease and drought seem to take a heavier toll.

As societies grew bigger:

They still faced the underlying (and ongoing) problem of rising inequality creating societies where institutions were more extractive and power was more concentrated.

And eventually:

The result is more extractive institutions creating growing instability, internal conflict, a drain of resources away from government, state capture by private elites, and worse decision-making. Society – especially the state – becomes more fragile. Private elites tend to take a larger share of extractive benefits. The state, and many of the power structures it helps prop up, then usually falls apart once a shock hits: for Rome it was climate change, disease, and rebelling Germanic mercenaries; for China it was often floods, droughts, disease and horseback raiders; for the west African kingdoms it was invaders and a loss of trade; for the Maya it was drought and a loss of trade; and for the Bronze Age it was drought, a disruption of trade and an earthquake storm.

The only real answer to combatting existential risks in the hands of adolescents like the Tech Bros is more democracy: over control of decision-making, over control of resources, over control of the threat of violence and over control of information. We are a long way from achieving these within our own particular Goliath at the moment, and indeed there is no sign at all that our elites are interested in achieving them. The Magnificent 7 are propping up the US stock exchange. The promise of perpetual economic growth is the progress myth of our time and leaders who do not provide it will lose the “Mandate of Heaven” in just the same way as Chinese rulers did when they were unable to prevent floods and droughts. Adam Tooze sees the signs of the inner demons of our elites starting to detach them from reality in the latest disclosures from the Epstein files:

Are we, like [Larry] Summers, fantasizing about stabilizing our desires and needs in an inherently dangerous and uncertain world? Are we kidding ourselves?

But, without those controls in place, we would need a lot more than Dario’s Anthropic playing nicely to allow this particular adolescent to grow up. And this is where I am forced to take Nate Hagens’ assessment more seriously. Because if our rulers’ Mandates of Heaven are dependent on eternal economic growth on their watch and they, rightly, think that this is not possible in our current non-AI-enhanced world but, wrongly, think it is possible in a future AI-enhanced world, then that is the way they are going to demand we go. And, if the Larry Summers fantasists really are kidding themselves, it may be very hard to talk them out of it.

I have spent many days in rooms with groups of men (always men) anxious about their future income, where I advised them on how much to ask their companies for. Most of my clients as a scheme actuary were trustees of pension schemes of companies which had seen better days, and who were struggling to make the necessary payments to secure the benefits already promised, let alone those to come. One by one, those schemes stopped offering those future benefits and just concentrated on meeting the bill for benefits already promised. If an opportunity came to buy those benefits out with an insurance company (which normally cost quite a bit more than the kind of “technical provisions” target the Pensions Regulator would accept), I lobbied hard to get it to happen. In many cases, though, we were too late: the company went bust and we moved the scheme into the Pension Protection Fund instead. That was the life of a pensions actuary in the West Midlands in the noughties. I was often “Mr Good News” in those meetings, the ironic reference to the man constantly moving the goalposts for how much money the scheme needed to meet those benefits bills. I saw my role as pushing the companies towards buy out funding if at all possible. None of the schemes I advised had a company behind them which could sustain ongoing pension costs long term. I would listen to the wishful thinking and the corporate optimism, smile and push for the “realistic” option of working towards buy out.
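For readers outside pensions, the gap between a “technical provisions” target and a buy out cost comes down largely to the discount rate used to value the same promised payments. Here is a minimal sketch, with entirely invented figures (the cashflow, term and both rates are hypothetical), of why the insurer’s price normally comes out higher:

```python
# Illustrative only: why a buy out target exceeds a "technical
# provisions" target. All figures below are invented for the sketch.

def present_value(cashflow, years, rate):
    """Present value of a level annual cashflow paid for `years` years."""
    return sum(cashflow / (1 + rate) ** t for t in range(1, years + 1))

pension = 1_000_000   # total pensions payable each year (hypothetical)
duration = 25         # years of payments (hypothetical)

# Technical provisions discount at an assumed investment return;
# an insurer prices close to low-risk bond yields.
tp_rate = 0.05        # assumed scheme investment return
buyout_rate = 0.02    # assumed insurer pricing rate

tp_target = present_value(pension, duration, tp_rate)
buyout_target = present_value(pension, duration, buyout_rate)

print(f"Technical provisions: £{tp_target:,.0f}")
print(f"Buy out estimate:     £{buyout_target:,.0f}")
print(f"Buy out premium over TP: {buyout_target / tp_target - 1:.0%}")
```

The same promised benefits, valued on the insurer’s more cautious rate, come out well over a third more expensive here, which is the kind of gap the trustees in those rooms were being asked to fund.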

Then I went to work at a university, and found myself, for the first time since 2003, a member of an open defined benefit pension scheme. It was (and still is) a generous scheme, but was constantly complained about by the university lecturers who comprised most of its membership. I didn’t see any way that it was affordable for employers which seemed to struggle to employ enough lecturers, were very reluctant to award anything other than fixed term contracts, and had an almost feudal relationship with their PhD students and post docs. Staff went on strike about plans to close the scheme to future accrual and replace it with the most generous money purchase scheme I had ever seen. I demurred and wrote an article called Why I Won’t Strike. I watched in wonder when even actuarial lecturers at other universities enthusiastically supported the strike. However, over 10 years later, that scheme – the UK’s biggest – is still open. And I gained personally from continued active membership until 2024.

Now don’t get me wrong, I still think the UK university sector is wrong to maintain, uniquely amongst its peers, a defined benefit scheme. The funding requirement for it has been inflated by continued accrual over the last 8 years and therefore so has the risk it will spike at just the time when it is least affordable, a time which may soon be approaching with 45% of universities already reporting deficits. However, the strike demonstrated how important the pension scheme was to staff, something the constant grumbling before the strike had led university managers to doubt. And, once the decision had been made to keep the scheme open to future accrual, I had no more to add as an actuary. Other actuaries had the responsibility for advising on funding, in fact quite a lot of others as the UCU was getting its own actuarial advice alongside that which the USS was getting, but my involvement was now just that of a member, just one with a heightened awareness of the risks the employers were taking.

The reason I bring this up is because I detected something of the same position as my lonely one from the noughties amongst the group of actuaries involved in the latest joint report from the Institute and Faculty of Actuaries and the University of Exeter about the fight to maintain planetary climate solvency.

It very neatly sets out the problem, that the whole system of climate modelling and policy recommendations to date has almost certainly been underestimating how much warming is likely to result from a given increase in the level of carbon dioxide in the atmosphere. Therefore all the “carbon budgets” (the amounts we can emit before we hit particular temperature levels) have been assumed to be higher than they actually are and estimates for when we exhaust them have given us longer than we actually have. This is due to the masking effects of particulate pollution in the air, which has resulted in around 0.5C less warming than we would otherwise have had by now. However, efforts to remove sulphur from oil and coal fuels (efforts which are themselves important for human health) have acted to reduce this aerosol cooling effect. The goalposts have moved.
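The arithmetic behind a shrinking carbon budget is simple enough to sketch. The figures below are illustrative only and not taken from the IFoA report: I have assumed warming of roughly 0.45C per 1,000 GtCO2 emitted, current global emissions of about 40 GtCO2 a year, warming to date of about 1.3C, and masked aerosol warming of 0.5C as above.

```python
# Back-of-envelope sketch of how aerosol "masking" shrinks a carbon
# budget. Figures are illustrative, not taken from the IFoA report.

TCRE = 0.45 / 1000     # degrees C of warming per GtCO2 emitted (approximate)
EMISSIONS = 40         # global emissions, GtCO2 per year (approximate)

def years_remaining(target, warming_so_far, unmasked=0.0):
    """Years of current emissions before `target` degrees C is reached.

    `unmasked` is warming currently hidden by aerosol cooling that
    re-emerges as the air gets cleaner."""
    budget = (target - warming_so_far - unmasked) / TCRE  # GtCO2 left
    return max(budget, 0) / EMISSIONS

# Conventional accounting vs accounting that adds back masked warming
print(f"1.5C target, conventional:  {years_remaining(1.5, 1.3):.1f} years")
print(f"1.5C target, 0.5C unmasked: {years_remaining(1.5, 1.3, 0.5):.1f} years")
print(f"2.0C target, conventional:  {years_remaining(2.0, 1.3):.1f} years")
print(f"2.0C target, 0.5C unmasked: {years_remaining(2.0, 1.3, 0.5):.1f} years")
```

On these rough numbers, unmasking half a degree of aerosol cooling wipes out the 1.5C budget entirely and cuts the 2C budget from decades to about a decade, which is the goalpost-moving the report describes.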

An additional reference I would add to the excellent references in the report is Hansen’s Seeing the Forest for the Trees, which concisely summarises all the evidence to suggest the generally accepted range for climate sensitivity is too low.

So far, so “Mr Good News”. And for those who say this is not something actuaries should be doing because they are not climate experts, this is exactly what actuaries have always done. We started the profession by advising on the intersection between money and mortality, despite not being experts in any of the conditions which affected either the buying power of money or the conditions which affected people’s mortality. We could however use statistics to indicate how things were likely to go in general, and early instances of governments wasting quite a lot of money without a steer from people who understood statistics got us that gig, and a succession of other related gigs over the years ahead.
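That original intersection of money and mortality is easy to illustrate. Here is a minimal sketch, with invented mortality rates, of the expected present value of a life annuity, the calculation that effectively founded the profession:

```python
# A minimal sketch of the original actuarial calculation: the expected
# present value of a life annuity, combining interest (money) and
# survival probabilities (mortality). The mortality rates are invented.

def annuity_epv(qx, rate):
    """EPV of 1 per year paid at each year end while alive.

    `qx` is a list of one-year death probabilities from the current age."""
    epv, survival = 0.0, 1.0
    for t, q in enumerate(qx, start=1):
        survival *= 1 - q           # probability of being alive at time t
        epv += survival / (1 + rate) ** t
    return epv

# Toy mortality: death probability rising 10% a year from retirement
qx = [0.01 * 1.1 ** t for t in range(40)]
print(f"EPV of £1 p.a. at 4% interest: £{annuity_epv(qx, 0.04):.2f}")
```

Neither input needs to be expert knowledge of medicine or of markets: the statistics do the work, which is exactly the position the Planetary Solvency authors are in with climate.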

The difficult bit is always deciding what course of action you want to encourage once you have done the analysis. This was much easier in pensions, as there was a regulatory framework to work to. It is much harder when, as in this case, it involves proposing changes in behaviour which are ingrained into our societies. If university lecturers can oppose something that is clearly not in the long term financial interests of their employers and push for something which makes their individual employers less secure, then how much more will the general public resist change when they can see no good reason for it?

And in this regard this feels like a report mostly focused on the finance industry. The analogies it makes with the 2008 financial crash, constant comparisons with the solvency regulatory regimes of insurers in particular and even the framing of the need to mitigate climate change in order to support economic growth are all couched in terms familiar to people working in the finance sector. This has, perhaps predictably, meant that the press coverage to date has mostly been concentrated in the pension, insurance and investment areas:

However in the case of the 2008 crash, the causes could be addressed by restricting practices amongst the financial institutions which had just been bailed out and were therefore in no position to argue. Many of those restrictions have been loosened since, and I think many amongst the general public would question whether the decision to bail out the banks and impose austerity on everyone else is really a model to follow for other crises.

The next stage will therefore need to involve breaking out of the finance sector to communicate the message more widely, perhaps focusing on the first point in the proposed Recovery Plan: developing a different mindset. As the report says:

This challenge demands a shift in perspective, recognising that humanity is not separate from nature but embedded in it, reliant on it and, furthermore, now required to actively steward the Earth system.
To maintain Planetary Solvency, we need to put in place mechanisms to ensure our social, economic, and political systems respect the planet’s biophysical limits, thus preserving or restoring sufficient natural capital for future generations to continue receiving ecosystem services…

…The prevailing economic system is a risk driver and requires reform, as economic dependency on nature is unrecognised in dominant economic theory which incorrectly assumes that natural capital is substitutable by manufactured capital. A particular barrier to climate action has been lobbying from incumbents and misinformation which has contributed to slower than required policy implementation.

By which I assume they mean this type of lobbying:

And this is where it gets very difficult, because actuaries really do not have anything to add at this point. We are just citizens with no particular expertise about how to proceed, just a heightened awareness of the dangers we are facing if we don’t act.

But we can also, as the report does, point out that we still have agency:

Although this is daunting, it means we have agency – we can choose to manage human activity to minimise the risk of societal disruption from the loss of critical support services from nature.

This point chimes with something else I have been reading recently (and which I will be writing more about in the coming weeks): Samuel Miller McDonald’s Progress. As he says “never before have so many lives, human and otherwise, depended on the decisions of human beings in this moment of history”. You may argue the toss on that with me, which is fine, but, in view of the other things you may be scrolling through either side of reading this, how about this for a paragraph putting the whole question of when to change how we do things in context:

We are caught in a difficult trap. If everything that is familiar is torn down and all the structures that govern our day-to-day disintegrated, we risk terrible disorder. We court famines and wars. We invite power vacuums to be filled by even more brutal psychopaths than those who haunt the halls of power now. But if we don’t, if we continue on the current path and simply follow inertia, there is a good chance that the outcome will be far worse than the disruption of upending everything today. Maintaining status-quo trajectories in carbon emissions, habitat destruction and pollution, there is a high likelihood of collapse in the existing structure anyway. It will just occur under far worse ecological conditions than if it were to happen sooner, in a more controlled way. At least, that is what all the best science suggests. To believe otherwise requires rejecting science and knowledge itself, which some find to be a worthwhile trade-off. But reality can only be denied for so long. Dream at night we may, the day will ensnare us anyway.

One thing I never did in one of those rooms full of anxious men was to stand up and loudly denounce the pensions system we were all working within. Actuaries do not behave like that generally. However we have a senior group of actuaries, with the endorsement of their profession, publishing a report that says things like this (bold emphasis added by me):

Planetary Solvency is threatened and a recovery plan is needed: a fundamental, policy-led change of direction, informed by realistic risk assessments that recognise our current market-led approach is failing, accompanied by an action plan that considers broad, radical and effective options.

This is not a normal situation. We should act accordingly.

Source: https://xkcd.com/2415/ licence at: https://creativecommons.org/licenses/by-nc/2.5/

Happy new year all! New year, new banner, courtesy of my brilliant daughter who presented me with a plausible 3-D model of my very primitive cartoon of a reverse-centaur over Christmas. And I thought I would kick off with a relatively uncontentious subject: examinations!

“Back to normal!” That was the cry throughout education when the pandemic had finally ended enough for us to start cramming students into rooms again. The universities had all leveraged themselves to the maximum, and perhaps beyond, to add to the built estate, so as to entice students in both the overseas and the uncapped domestic market to their campuses, and one by-product of this was they had plenty of potential examination halls. So let’s get away from all of that electronic remote nonsense and get everyone in a room together where you can keep an eye on them and stop them cheating. This united the purists who yearned for the days of 10% of the cohort turning up for elite education via chalk and talk rather than the 50% we have today, the senior management who needed to justify the size of the built estate, and the politicians who kept referring to traditional exams in an exam hall as the “gold standard”.

So, in a time when students have access to information, tools, how to videos of everything imaginable, the entire output of the greatest minds of thousands of years of human history, as well as many of the less than great minds, in short anything which has ever caught anyone’s attention and been committed to some form of media: in this of all times, we want to sort the students into categories for the existing job market based on how they answer academic questions about what they can remember unaided about the content of their lecture courses and reading lists with a biro on a pad of paper perched precariously on a tiny wooden table surrounded by hundreds of other similar scribblers, for a set period of time as minders wander the floors like Victorian factory owners.

And for institutions that thought the technology we fast-tracked for education delivery and assessment in the pandemic would surely be part of education’s future? Or perhaps they just can’t afford to borrow half a billion or have the land available to construct more cathedrals of glass and brick to house more examination halls? Simple! We just create the conditions for that gold standard examination right there in the student’s own bedroom or the company they work for!

There are 54 pages to the Institute and Faculty of Actuaries’ (IFoA’s) guidance for remotely invigilated candidates. It covers everything from the minimum specification of equipment you need, including the video camera to watch your every movement and the microphone to pick up every sound you make, to the proprietary spying software (called “Guardian Browser”) you will need to download onto your own computer, how to prove who you are to the system, what you are allowed to have in your bedroom with you and even how you need to sit for the duration of the exam (with a maximum of two 5 minute breaks) to ensure the system has sufficient visibility of you at all times:

These closed book remote arrangements replaced the previous open book online exams which most institutions operated during the pandemic. The reason given was that the exam results shot up so much that widespread cheating was suspected and the integrity of the qualifications was at risk. The IFoA’s latest assessment regulations can be found here.

The belief in examinations is very widespread. A couple of months ago I was discussing the teacher assessments which replaced them briefly during the pandemic with a secondary business studies teacher. He took great pride in the fact that he based his assessments solely on mock results, ie an assessment carried out before all of the syllabus had been covered and when students were unaware it would be the final assessment. But still in his mind more “objective” than any opinion he might have of his own students.

If a large language model can perform enormously better in an examination than your students can without it, what it actually demonstrates is that the traditional examination is woefully unprepared for the future. As Carlo Iacono puts it:

The machines learned from us.

They learned what we actually valued and it turned out to be different from what we said we valued.

We said we valued originality. We rewarded conformity to genre. We said we valued depth. We measured surface features. We said we valued critical thinking. We gave higher marks to confident assertion than to honest uncertainty.

So now the machines produce what the world trained them to produce: fluent, confident, passable output that fits the shapes we reward.

And we’re horrified. Not because they stole something from us. Because they showed us what the systems were selecting for all along.

The scandal isn’t that a model can imitate student writing. The scandal is that we built an educational and professional culture where imitation passes as competence, and then acted shocked when a machine learned to imitate faster.

We trained the incentives. We trained the rubrics. We trained the career ladders.

The pattern recognition which gets you through most formal examinations is just too cheap and easy to automate now. It is no longer a useful skill, even by proxy. It might as well be Hogwarts’ sorting hat for all the use it is in a post scarcity education world. If the machines have worked out how to unlock the elaborate captcha system we have placed around our gold standard assessments, an arms race of security measures protecting a range of tests which look increasingly narrow compared to the capabilities which matter does not seem like the way to go.

What instead we are doing is identifying which students are prepared to put themselves through literally anything to get the qualification. Companies like students like that. They will make ideal reverse-centaurs. The description of life as a reverse-centaur even sounds like the experience of a proctored exam:

Like an Amazon delivery driver, who sits in a cabin surrounded by AI cameras, that monitor the driver’s eyes and take points off if the driver looks in a proscribed direction, and monitors the driver’s mouth because singing isn’t allowed on the job, and rats the driver out to the boss if they don’t make quota.

The driver is in that van because the van can’t drive itself and can’t get a parcel from the curb to your porch. The driver is a peripheral for a van, and the van drives the driver, at superhuman speed, demanding superhuman endurance. But the driver is human, so the van doesn’t just use the driver. The van uses the driver up.

Source: Cory Doctorow, Enshittification

And, even if you are OK with all of that, these privacy intrusions don’t even work to prevent cheating! The ACCA, the world’s largest accounting professional body, has just announced it is stopping all remote exams, giving up the arms race against the cheats, seemingly facilitated in some cases by their Big Four employers lying about what had gone on.

Actuarial exams started in 1850, only 2 years after the Institute of Actuaries was established (Dermot Grenham wrote about them recently here). So keen were actuaries to institute examinations that this pre-dated the establishment of the first examination boards by a few years: in 1856, the Society of Arts (the Society for the encouragement of Arts, Manufactures and Commerce, later the Royal Society of Arts); in 1857, the University of Oxford Delegacy of Local Examinations; and in 1858, the University of Cambridge Local Examinations Syndicate (UCLES). However it was the massive expansion of the middle classes, as the Industrial Revolution disrupted society in so many ways, that led to the need for a new sorting hat beyond the capacity of the oral examinations that had previously been the norm.

Now people seem to be lining up to drag everyone back into the examination hall. Any suggestion of a retreat from traditional exams is met by howls of outrage from people like Sam Leith at The Spectator about lack of “rigour”. However, in my view, they are wrong.

Yes of course you can isolate students from every intellectual aid they would normally use, as a centaur, to augment their performance, limit the sources they can access, force them to rely on their own memories entirely, and put them under significant time pressure. You will definitely reduce marks by doing that. So that has made it harder and therefore more rigorous and more objective, right?

Well, according to the Merriam-Webster dictionary, rigorous is a synonym of rigid, strict or stringent. However, while all these words mean extremely severe or stern, rigorous implies the imposition of hardship and difficulty. So promoting exams above all as an exercise in rigour reveals their true nature as a kind of punishment beating in written form, for which the prize for undergoing it is whatever it qualifies you for. Suddenly the sorting hat looks relatively less arbitrary.

The problems of traditional exams are well known, but the most important, in my view, is that they measure a limited range of abilities and are therefore unlikely to show what students can really achieve. Harder does not mean more objective. It is like deciding who can act by throwing students out, one at a time, in front of a baying mob of, let’s say for argument, readers of The Spectator. Sure, some of the students might be able to calm the crowd, some may even be able to redirect their anger towards a different target. But are the people who can play Mark Antony for real necessarily the best all-round actors? And has someone who can only stand frozen on the spot under those circumstances really proved that they could never act well?

It also means that education ends a month or more before the exams, to allow the appropriate cramming. Then all of the teaching staff are engaged in the extended exercise of marking, checking and moderating what has been written in answer to academic questions about what the students can remember unaided about the content of their lecture courses and reading lists, with a biro on a pad of paper perched precariously on a tiny wooden table, surrounded by hundreds of other similar scribblers, for a set period of time, as minders wander the floors like Victorian factory owners. But what if instead the assessment was part of the teaching process? What if students felt that their assessment had been a meaningful part of their educational experience? What if, instead of arguing the toss over whether they scored 68% or 70% on an assessment, students could see for themselves whether they had demonstrated mastery of their subject?

One model of assessment which is getting a lot of attention at the moment, and one I am a big fan of having used it at the University of Leicester on some modules, is interactive oral assessment, where students meet with a lecturer or tutor, individually or in a small group, and answer questions about work they have already submitted. It is a highly demanding form of assessment, for both the students and the assessors. But because the final assessment is done with the student present, with careful probing from the assessors, who will obviously need to have done a close reading of the project work beforehand, you can be highly confident of the degree to which the student understands the work they have submitted. It also allows the student to submit a piece of work of more complexity and ambition than can be accommodated by a traditional exam. And, if the interviews are carried out online, it needn’t take any more time than the marking of a traditional exam. That is something all the technology we developed through the pandemic allows us to do, without the need for spyware.

There are other models which also assess the technological centaurs we wish our students to become, rather than the reverse-centaurs we are currently dooming too many of them to be. It is looking like it may be time to tell students to stop writing and put down their pens on the traditional exam. And perhaps the actuarial profession, which led us into the era of professional written examinations so enthusiastically 175 years ago, might now want to take the lead in navigating our way out of them?

So this is my 42nd blog post of the year and the 8th where I have referenced Cory Doctorow. I thought it was more, to be honest, so influential has he been on my thinking, particularly as I have delved deeper into what, how and why the AI Rush is proceeding and what it means for the people exiting universities over the next few years.

Yesterday Cory published a reminder of his book reviews this year. He is an amazing book reviewer. There are 24 on the list this year, and I want to read every one of them on the strength of his reviews alone.

I would like to repay the compliment by reviewing his latest book: Enshittification (the other publication this year – Picks and Shovels – is also well worth your time by the way). Can’t believe this wasn’t the word of the year rather than rage bait, as it explains considerably more about the times we are living in.

I have been a fan of Doctorow for a couple of years now. I had had Walkaway sat on my shelves for a few years before I read it and was immediately enthralled by his tale of a post scarcity future which had still somehow descended into an inter-generational power struggle hellscape. I moved on to the Little Brother books, now being reenacted by Trump with his ICE force in one major US city after another. I followed those up with The Lost Cause, where the teenagers try desperately to bridge the gap across the generations with MAGA people, with tragic results along the way but a grim determination at the end: “the surest way to lose is to stop running”. From there I migrated to the Marty Hench thrillers, his non-fiction The Internet Con (which details the argument for interoperability, ie the ability of any platform to interact with another) and his short fiction (I loved Radicalised, not just for the grimly prophetic Radicalised novella in the collection, but also the gleeful insanity of Unauthorised Bread). I highly recommend them all.

I came to Enshittification after reading his Pluralistic blog most days for the last year and a half, so was initially disappointed to find very little new as I started working my way through it. However, the first two parts – The Natural History and The Pathology – are a patient explanation of the concept of enshittification and how it operates, assuming no previous engagement with the term, all in one place.

Enshittification, as defined by Cory Doctorow, proceeds as follows:

  1. First, platforms are good to their users.
  2. Then they abuse their users to make things better for their business customers.
  3. Next, they abuse those business customers to claw back all the value for themselves.
  4. Finally, they have become a giant pile of shit.

So far, so familiar. But then I got to Part Three, explaining The Epidemiology of enshittification, and the book took off for me. The erosion of antitrust (what we would call competition) law since Carter. “Antitrust’s Vietnam” (how Robert Bork described the 12 years IBM fought and outspent the US Department of Justice year after year defending their monopolisation case), which ended when Reagan became President. How this led to an opening to develop the operating system for IBM when it entered the personal computer market, and hence to Microsoft, etc. Then how the death of competition also killed Big Tech regulation (regulating a competitive market which acts against collusion is much easier than regulating one with a small number of big players which absolutely will collude with each other).

And then we get to my favourite chapter of the book “Reverse-Centaurs and Chickenisation”. Any regular reader of this blog will already be familiar with what a reverse centaur is, although Cory has developed a snappy definition in the process of writing this book:

A reverse-centaur is a machine that uses a human to accomplish more than the machine could manage on its own.

And if that isn’t chilling enough for you, the description of the practices of poultry packers, how they control the lives of the nominally self-employed chicken farmers of the US, and how these practices have now been exported to companies like Amazon and Arise and Uber, should certainly be. One rare moment of hilarity in a generally sordid tale of utter exploitation: the prankster who collected up the bottled piss of the Amazon drivers who weren’t allowed a loo break and resold it on Amazon’s own platform as “a bitter lemon drink” called Release Energy. Amazon recategorised it as a beverage without asking for any documentation to prove it was fit to drink and then, when it was so successful it topped their sales chart, rang the prankster up to discuss using Amazon for shipping and fulfillment. My favourite bit is when Cory gets on to the production of his own digital rights management (DRM)-free audio versions of his own books.

The central point of the DRM issue is, as Cory puts it, “how perverse DMCA 1201 is”:

If I, as the author, narrator, and investor in an audiobook, allow Amazon to sell you that book and later want to provide you with a tool so you can take your book to a rival platform, I will be committing a felony punishable by a five-year prison sentence and a $500,000 fine.

To put this in perspective: If you were to simply locate this book on a pirate torrent site and download it without paying for it, your penalty under copyright law is substantially less punitive than the penalty I would face for helping you remove the audiobook I made from Amazon’s walled garden. What’s more, if you were to visit a truck stop and shoplift my audiobook on CD from a spinner rack, you would face a significantly lighter penalty for stealing a physical item than I would for providing you with the means to take a copyrighted work that I created and financed out of the Amazon ecosystem. Finally, if you were to hijack the truck that delivers that CD to the truck stop and steal an entire fifty-three-foot trailer full of audiobooks, you would likely face a shorter prison sentence than I would for helping you break the DRM on a title I own.

DMCA 1201 is the big brake on interoperability. It is the reason, if you have an HP printer, you have to pay $10,000 a gallon for ink or risk committing a criminal offence by “circumventing an access control” (which is the software HP have installed on their printers to stop you using anyone else’s printer cartridges). And the reason for the increasing insistence on computer chips in everything from toasters (see “Unauthorised Bread” for where this could lead) to wheelchairs – so that using them in ways the manufacturer and its shareholders disapprove of becomes illegal.

The one last bastion against enshittification by Big Tech was the tech workers themselves. Then the US tech sector laid off 260,000 workers in 2023 and a further 100,000 in the first half of 2024.

In case you are feeling a little depressed (and hopefully very angry too) at this stage, Part 4 is called The Cure. This details the four forces that can discipline Big Tech and how they can all be revived, namely:

  1. Competition
  2. Regulation
  3. Interoperability
  4. Tech worker power

As Cory concludes the book:

Martin Luther King Jr once said, “It may be true that the law cannot make a man love me, but it can stop him lynching me, and I think that’s pretty important, also.”

And it may be true that the law can’t force corporate sociopaths to conceive of you as a human being entitled to dignity and fair treatment, and not just an ambulatory wallet, a supply of gut bacteria for the immortal colony organism that is a limited liability corporation.

But it can make that exec fear you enough to treat you fairly and afford you dignity, even if he doesn’t think you deserve it.

And I think that’s pretty important.

I was reading Enshittification on the train journey back from Hereford after visiting the Hay Winter Weekend, where I had listened to, amongst others, the oh-I’m-totally-not-working-for-Meta-any-more-but-somehow-haven’t-got-a-single-critical-word-to-say-about-them former Deputy Prime Minister Nick Clegg. While I was on the train, a man across the aisle had taken the decision to conduct a conversation with first Google and then Apple on speakerphone. A particular highlight was him just shouting “no, no, no!” at Google’s bot trying to give him options. He had already been to the Vodafone shop that morning and was on his way to an appointment which he couldn’t get at the Apple Store on New Street in Birmingham. He spotted the title of my book and, when I told him what enshittification meant, and how it might make some sense out of the predicament he found himself in, took a photo of the cover.

My feeling is that enshittification goes beyond Big Tech. It is the defining industrial battle of our times. We shouldn’t primarily worry about whether it is coming from the private or the public sector, as enshittification can happen in both places: from hollowing out justice to “paying more for medicines… at the exact moment we can’t afford to pay enough doctors to prescribe them” in the public sector, where we already reside within the Government’s walled garden, to all of the outrages mentioned above and more in the private sector.

The PFI local health hubs set out in last week’s budget take us back to perhaps the ultimate enshittificatory contracts the Government ever entered into, certainly before the pandemic. The Government got locked into 40-year contracts, took all the risk, and all the profit was privatised. The turbo-charging of the original PFI came out of the Blair-Brown government’s mania for keeping capital spending off the balance sheet in defence of Gordon Brown’s “Golden Rule”, which has now been replaced by Rachel Reeves’ equally enshittifying fiscal rules. All the profits (or, increasingly, rents, as Doctorow discusses in the chapter on Varoufakis’ concept of Technofeudalism) from turning the offer to shit always seem to end up in the private sector. The battle is against enshittification from both private monopolies and, by proxy, public ones.

Enshittification is, ultimately, a positive and empowering book which I strongly recommend you buy, avoiding Amazon if you can. We can have a better internet than this. We can strike a better deal with Big Tech over how we run our lives. But the surest way to lose is to stop running.

And next time a dead-eyed Amazon driver turns up at your door, be nice: they are probably having a worse day than you are.

New (left) and old (right) Naiku shrines during the 60th sengu at Ise Jingu, 1973, via Bock 1974

In his excellent new book, Breakneck, Dan Wang tells the story of the high-speed rail links which started to be constructed in 2008 between San Francisco and Los Angeles and between Beijing and Shanghai respectively. Both routes would be around 800 miles long when finished. The Beijing-Shanghai line opened in 2011 at a cost of $36 billion. To date, California has built only a small stretch of their line, as yet nowhere near either Los Angeles or San Francisco, and the latest estimate of the completed bill is $128 billion. Wang uses this, amongst other examples, to draw a distinction between the engineering state of China “building big at breakneck speed” and the lawyerly society of the United States “blocking everything it can, good and bad”.

Europe doesn’t get much of a mention, other than to be described as a “mausoleum”, which sounds rather JD Vance, and there is quite a lot about this book that I disagree with strongly, which I will return to. However there is also much to agree with in this book, never more so than when Wang talks about process knowledge.

Wang tells another story, of Ise Jingu in Japan. Every 20 years exact copies of Naiku, Geku, and 14 other shrines here are built on vacant adjacent sites, after which the old shrines are demolished. Altogether 65 buildings, bridges, fences, and other structures are rebuilt this way. They were first built in 690. In 2033, they will be rebuilt for the 63rd time. The structures are built each time with the original 7th century techniques which involve no nails, just dowels and wood joints. Staff have a 200 year tree planting plan to ensure enough cypress trees are planted to make the surrounding forest self-sufficient. The 20 year intervals between rebuilding are the length of the generations, the older passing on the techniques to the younger.

This, rather like the oral tradition of folk stories and songs, which were passed on by each generation as contemporary narratives until they were all written down and fixed in time so that they quickly appeared old-fashioned thereafter, is an extreme example of process knowledge. What is being preserved is not the Trigger’s Broom of temples at Ise Jingu, but the practical knowledge of how to rebuild them as they were originally built.

Trigger’s Broom. Source: https://www.youtube.com/watch?v=BUl6PooveJE

Process knowledge is the know-how of your experienced workforce that cannot easily be written down. It can develop where such a workforce works closely with researchers and engineers to create feedback loops which can also accelerate innovation. Wang contrasts Shenzhen in China, where such a community exists, with Silicon Valley, where it doesn’t, forcing the United States to have such technological wonders as the iPhone manufactured in China.

What happens when you don’t have process knowledge? Well, one example would be our nuclear industry, where lack of experience of pressurised water reactors has slowed down the development of new power stations and required us to rely considerably on French expertise. There are many other technical skill shortages.

China has recognised the supreme importance of process knowledge as compared to the American concern with intellectual property (IP). IP can of course be bought and sold as a commodity and owned as capital, whereas process knowledge tends to rest within a skilled workforce.

This may then be the path to resilience for the skilled workers of the future in the face of the AI-ification of their professions. Companies are being sold AI systems for many things at the moment, some of which will clearly not work: they make too many errors, or need so much “human validation” (a lovely phrase a good friend of mine, actively involved in integrating AI systems into his manufacturing processes, used recently) to be deemed practical. For early career workers entering these fields the demonstration of appropriate process knowledge, or the ability to develop it very quickly, may be the key to surviving the AI roller coaster they face over the next few years: actionable skills and knowledge which allow them to manage such systems rather than being managed by them. To be a centaur rather than a reverse-centaur.

Not only will such skills make you less likely to lose your job to an AI system, they will also increase your value on the employment market: the harder these skills and knowledge are to acquire, the more valuable they are likely to be. But whereas in the past, in a more static market, merely passing your exams and learning coding might have been enough for an actuarial student, for instance, the dynamic situation which sees everything that can be written down disappearing into prompts in some AI system will leave such roles unprotected.

Instead it will be the knowledge about how people are likely to respond to what you say in a meeting or write in an email or report, and the skill to strategise around those things, knowing what to do when the rules run out, when situations are genuinely novel, ie putting yourself in someone else’s shoes and being prepared to make judgements. It will be the knowledge about what matters in a body of data, putting the pieces together in meaningful ways, and the skills to make that obvious to your audience. It will be the knowledge about what makes everyone in your team tick and the skills to use that knowledge to motivate them to do their best work. It will ultimately be about maintaining independent thought: the knowledge of why you are where you are and the skill to recognise what you can do for the people around you.

These have not always been seen as entry level skills and knowledge for graduates, but they are increasingly going to need to be, as the requirement grows to plug you in further up an organisation, if at all, as that organisation pursues its diamond strategy or something similar. And alongside all this you will need a continuing professional self-development programme on steroids: fully understanding the systems you are working with as quickly as possible, and then understanding them all over again when they get updated; demanding evidence and transparency; and maintaining appropriate uncertainty when certainty would be more comfortable for the people around you, so that you can manage these systems into the areas where they can actually add value and out of the areas where they can cause devastation. It will be more challenging than transmitting the knowledge to build a temple out of wood and thatch 20 years into the future, and it will be continuous. Think of it as the Trigger’s Broom Process of Career Management if you like.

These will be essential roles for our economic future: to save these organisations from both themselves and their very expensive systems. It will be both enthralling and rewarding for those up to the challenge.

Wallace & Gromit: Vengeance Most Fowl models on display in Bristol. This file is licensed under the Creative Commons Attribution-Share Alike 4.0 International license.

I have been watching Daniel Susskind’s lectures on AI and the future of work this week: Automation Anxiety was delivered in September and The Economics of Work and Technology earlier this week. The next in the series, entitled Economics and Artificial Intelligence, is scheduled for 13 January. They are all free and I highly recommend them for the great range of source material presented.

In my view the most telling graph, which featured in both lectures, was this one:

Original Source: Daniel Susskind A World Without Work

Susskind extended the usual concept of the ratio between average college and university graduate salaries and those of school leavers to include the equivalent ratio of craftsmen’s to labourers’ wages, which gives us data back to 1220. There are two big collapses in this ratio in the data: that following the Black Death (1346-1353), which may have killed 50% of Europe’s 14th century population, and that following the Industrial Revolution (a slow singularity which started around 1760 and then took us through the horrors of the First World War and the Great Depression before the graph finally picks up post Bretton Woods).
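For concreteness, the quantity on that graph is just the ratio of average skilled pay to average unskilled pay. A minimal sketch in Python, with invented figures purely for illustration (these are not Susskind’s data):

```python
# Skill premium: ratio of mean skilled wage to mean unskilled wage.
# Susskind's series splices the craftsman/labourer premium (back to 1220)
# onto the modern graduate/school-leaver premium.

def skill_premium(skilled_wages, unskilled_wages):
    """Ratio of the mean skilled wage to the mean unskilled wage."""
    mean_skilled = sum(skilled_wages) / len(skilled_wages)
    mean_unskilled = sum(unskilled_wages) / len(unskilled_wages)
    return mean_skilled / mean_unskilled

# Hypothetical daily wages in pence, before and after the Black Death:
before = skill_premium([6.0, 7.0, 8.0], [3.0, 3.5, 4.0])
after = skill_premium([6.5, 7.0, 7.5], [5.0, 5.5, 6.0])
print(before, after)  # the premium collapses as unskilled labour becomes scarce
```

The point of the toy numbers is only to show the mechanism: when a plague (or a technology) changes the relative scarcity of unskilled labour, the ratio moves even if skilled pay barely changes.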

As Susskind shows, the profits from the Industrial Revolution were not going to workers:

Source: The Technology Trap, Carl Benedikt Frey

So how is the AI Rush comparing? Well Susskind shared another graph:

Source: David Autor Work of the Past, Work of the future

This, from 2019, introduced the idea that the picture is now more complex than high-skilled and low-skilled workers, now there is a middle. And, as Autor has set out more recently, the middle is getting squeezed:

Key dynamics at play include:

  • Labor Share Decline: OECD data reveal a 3–5 percentage point drop in labor’s share of income in sectors most exposed to AI, a trend likely to accelerate as automation deepens.
  • Wage Polarization: The labor market is bifurcating. On one end, high-complexity “sense-making” roles; on the other, low-skill service jobs. The middle is squeezed, amplifying both political risk and regulatory scrutiny.
  • Productivity Paradox 2.0: Despite the promise of AI-driven efficiency, productivity gains remain elusive. The real challenge is not layering chatbots atop legacy processes, but re-architecting workflows from the ground up—a costly and complex endeavor.

For enterprise leaders, the implications are profound. AI is best understood not as a job destroyer, but as a “skill-lowering” platform. It enables internal labor arbitrage, shifting work toward judgment-intensive, context-rich tasks while automating the rest. The risk is not just technological—it is deeply human. Skill depreciation now sits alongside cyber and climate risk on the board agenda, demanding rigorous workforce-reskilling strategies and a keen eye on brand equity as a form of social license.

So, even if the overall number of jobs may not be reduced, the case being made is that the average skill level required to carry them out will be. As Susskind said, the Luddites may have been wrong about the spinning jenny replacing jobs, but it did replace and transform tasks, and its impact on workers was to reduce their pay, quality of work, status as craftsmen and economic power. This looks like the threat being made by employers once again, with real UK wages still only at the level they were at in 2008:

However this is where I part company with Susskind’s presentation, which has an implicit inevitability to it. The message is that these are economic forces we can’t fight against. When he discusses whether the substituting force (where AI replaces you) or the complementing force (where AI helps you to be more productive and increases the demand for your work) will be greater, it is almost as if we have no part to play in this. There is some cognitive dissonance when he quotes Blake, Engels, Marx and Ruskin about the horrors of living through such times, but on the whole it is presented as just a natural historical process that the whole of the profits from the massive increases in productivity of the Industrial Revolution should have ended up in the pockets of the fat guys in waistcoats:

Richard Arkwright, Sir Robert Peel, John Wilkinson and Josiah Wedgwood

I was recently at Cragside in Northumberland, where the arms inventor and dealer William Armstrong used the immense amount of money he made from selling big guns (as well as big cranes and the hydraulic mechanism which powers Tower Bridge) to deck out his house and grounds with the five artificial lakes required to power the world’s first hydro-electric lighting system. His 300 staff ran around, like good reverse-centaurs, trying to keep his various inventions, from passenger lifts to an automated spit roast, from breaking down, so that he could impress his long list of guests and potential clients at Cragside, from the Shah of Persia to the King of Siam and two future Prime Ministers of Japan. He made sure they were kept running around with a series of clock chimes throughout the day:

However, with some poetic irony, the “estate regulator” is what has since brought the entire mechanism crashing to a halt:

Which brings me to Wallace and Gromit. Wallace is the inventor, heedless of the impact of his inventions on those around him, and especially on his closest friend Gromit, whom he regularly dumps whenever Gromit becomes inconvenient to his plans. Gromit just tries to keep everything working.

Wallace is a cheese-eating monster who cannot be assessed purely on the basis of his inventions. And neither can Armstrong, Arkwright, Peel, Wilkinson or Wedgwood. We are in the process of allowing a similar domination of our affairs by our new monsters:

Meta CEO Mark Zuckerberg beside Amazon CEO Jeff Bezos and his fiancée (now wife) Lauren, Google CEO Sundar Pichai and Elon Musk at President Trump’s 2nd Inauguration.

Around half an hour into his second lecture, Daniel Susskind started talking about pies. This is the GDP pie (Susskind has also written a recent book on Growth: A Reckoning, which argues that GDP growth can go on forever – my view would be closer to the critique here from Steve Keen) which, as Susskind says, increased by a factor of 113 in the UK between 1700 and 2000. But, as Steve Keen says:

The statistics strongly support Jevons’ perspective that energy—and specifically, energy from coal—caused rising living standards in the UK (see Figure 2). Coal, and not a hypothesised change in culture, propelled the rise in living standards that Susskind attributes to intangible ideas.

Source: https://www.themintmagazine.com/growth-some-inconvenient-truths/

Susskind talks about the productivity effect, the bigger pie effect and the changing pie effect (ie changes to the types of work we do – think of the changes in the CPI basket of goods and services) as ways in which jobs are created by technological change. However, he has nothing to say about just giving less of the pie to the monsters. Instead, for Susskind the AI Rush is all about clever people throwing ten times the amount of money at AI that was directed at the Manhattan Project, and the heads of OpenAI, Anthropic and Google DeepMind stating that AI will replace humans in all economically useful tasks within 10 years – a claim which he says we should take seriously. Cory Doctorow, amongst others, disagrees. In his latest piece, When AI prophecy fails, he has this to say about why companies have reduced recruitment despite the underperformance of AI systems to date:

All this can feel improbable. Would bosses really fire workers on the promise of eventual AI replacements, leaving themselves with big bills for AI and falling revenues as the absence of those workers is felt?

The answer is a resounding yes. The AI industry has done such a good job of convincing bosses that AI can do their workers’ jobs that each boss for whom AI fails assumes that they’ve done something wrong. This is a familiar dynamic in con-jobs.

The Industrial Revolution had a distribution problem which gave birth to Chartism, Marxism, the Trades Union movement and the Labour Party in the UK alone. And all of that activity only very slowly chipped away at the wealth share of the top 10%:

Source: https://equalitytrust.org.uk/scale-economic-inequality-uk/

However, the monsters of the Industrial Revolution did at least have solid proof that they could deliver what they promised. You don’t get a more concrete proof of concept than this, after all:

View on the Thames and the opening Tower Bridge, London, from the terraces at Wapping High Street, at sunset in July 2013, Bert Seghers. This file is made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication.

The AI Rush has a similar distribution problem, but it is also the first industrial revolution since the global finance industry decoupled from the global real economy. So the wealth share of the Top 10% isn’t going back up fast enough? No problem. Just redistribute the money at the top even further up:

What the monsters of the AI Rush lack is anything tangible to support their increasingly ambitious assertions. Wallace may be full of shit. And the rest of us can all just play a Gromit-like support role until we find out one way or the other or concentrate on what builds resilient communities instead.

Whether you think the claims for the potential of AI are exaggerated; or that the giant bet on it that the US stock market has made will end in an enormous depression; or that the energy demands of this developing technology will ultimately be its constraining force; or that we are all just making the world a colder place by prioritising systems, however capable, over people: take your pick as a reason to push back against the AI Rush. But my bet would be on the next 10 years not being dominated by breathless commentary on the exploits of Tech Bros.

The warehouse at the end of Raiders of the Lost Ark

In the year when I was born, Malvina Reynolds recorded a song called Little Boxes when she was a year younger than I am now. If you haven’t heard it before, you can listen to it here. You might want to listen to it while you read the rest of this.

I remember the first time I felt panic during the pandemic. It was a couple of months in, and we had been working very hard: putting our teaching processes online, consulting widely about appropriate remote assessments and getting agreement from the Institute and Faculty of Actuaries (IFoA) for our suggested approach at Leicester, checking in with our students, some of whom had become very isolated as a result of lockdowns, and a million other things. I was just sitting at my kitchen table and suddenly I felt tears welling up and I was unable to speak without my voice breaking down. It happened at intervals after that, usually during a quiet moment when I, consciously or unconsciously, had a moment to reflect on the enormity of what was going on. I could never point to anything specific that triggered it, but I do know that it has been a permanent change in me, and that my emotions have been very much closer to the surface ever since. I felt something similar again this morning.

What is going on? Well, I hadn’t been able to answer that satisfactorily until now, but recently I read an article by David Runciman in the LRB from nine years ago, when Donald Trump was first elected POTUS. I am not sure that everything in the article has withstood the test of time, but in it Runciman makes the case for Trump being the result of the people wanting “Trump to shake up a system that they also expected to shield them from the recklessness of a man like Trump”. And this part looks prophetic:

[Trump is]…the bluntest of instruments, indiscriminately shaking the foundations with nothing to offer by way of support. Under these conditions, the likeliest response is for the grown-ups in the room to hunker down, waiting for the storm to pass. While they do, politics atrophies and necessary change is put off by the overriding imperative of avoiding systemic collapse. The understandable desire to keep the tanks off the streets and the cashpoints open gets in the way of tackling the long-term threats we face. Fake disruption followed by institutional paralysis, and all the while the real dangers continue to mount. Ultimately, that is how democracy ends.

And it suddenly hit me that this was something I had indeed taken for granted my whole life until the pandemic came along. The only thing that had ever looked like toppling society itself was the prospect of a nuclear war. Otherwise it seemed that our political system was hard to change and impossible to kill.

And then the pandemic came along and we saw governments, national and local, digging mass graves and then filling them in again, and setting aside vast arenas for people to die in before quietly closing them again. Rationing of food and other essentials was left to the supermarkets to administer, as were the massive snaking socially-distanced queues around their car parks. Seemingly arbitrary sets of rules suddenly started appearing at intervals about how and when we were allowed to leave the house and what we were allowed to do when out, and also how many people we could have in our houses and where they were allowed to come from. Most businesses were shut and their employees put on the government’s payroll. We learned which of us were key workers and spent a lot of time worrying about how we could protect the NHS, which we clapped for every Thursday. It was hard to maintain the illusion that society still provided solid ground under our feet, particularly for those of us without jobs which could be moved online. Whoever you were, you had to look down at some point, and I think now that I was having my Wile E. Coyote moment.

The trouble is, once you have looked down, it is hard to put that back in a box. At least I thought so, although there seems to have been a lot of putting things in boxes going on over the last few years. The UK Covid-19 Inquiry has made itself available online via a YouTube channel, but you might have thought that a Today at the Inquiry slot on terrestrial TV would have been more appropriate, not just covering it when famous people are attending. What we do know is that Patrick Vallance, Chief Scientific Advisor throughout the pandemic, has said that another pandemic is “absolutely inevitable” and that “we are not ready yet” for such an eventuality. Instead we have been busily shutting that particular box.

The biggest box of course is climate change. We have created a really big box for that called the IPCC. As the climate conferences migrate to ever more unapologetic petro-states, protestors are criminalised and imprisoned and emissions continue to rise, the box for this is doing a lot of work.

And then there are all the NHS boxes. As Roy Lilley notes:

If inquiries worked, we’d have the safest healthcare system in the world. Instead, we have a system addicted to investigating itself and forgetting the answers.

But perhaps the days of the box are numbered. The box Keir Starmer constructed to contain the anger about grooming gangs, which the previous seven-year-long box had been unable to completely envelop, now appears to be on the edge of collapse. And the Prime Minister himself was the one expressing outrage when a perfectly normal British box – versions of which had been giving authority to policing decisions since at least the Local Government (Review of Decisions) Act 2015, although the original push to develop such systems stemmed from the Hillsborough and Heysel disasters in 1989 and 1985 respectively – suddenly didn’t make the decision he was obviously expecting. That box now appears to be heading for recycling if Reform UK come to power, which is, of course, rather difficult to do in Birmingham at the moment.

But what is the alternative to the boxes? At the moment it does not look like it involves confronting our problems any more directly. As Runciman reflected on the second Trump inauguration:

Poor Obama had to sit there on Monday and witness the mistaking of absolutism for principle and spectacle for politics. I don’t think Trump mistakes them – he doesn’t care enough to mind what passes for what. But the people in the audience who got up and applauded throughout his speech – as Biden and Harris and the Clintons and the Bushes remained glumly in their seats – have mistaken them. They think they will reap the rewards of what follows. But they will also pay the price.

David Allen Green’s recent post on BlueSky appears to summarise our position relative to that of the United States very well:

In 2017, I was rather excitedly reporting about ideas which were new to me at the time regarding how technology or, as Richard and Daniel Susskind referred to it in The Future of the Professions, “increasingly capable machines” were going to affect professional work. I concluded that piece as follows:

The actuarial profession and the higher education sector therefore need each other. We need to develop actuaries of the future coming into your firms to have:

  • great team working skills
  • highly developed presentation skills, both in writing and in speech
  • strong IT skills
  • clarity about why they are there and the desire to use their skills to solve problems

All within a system which is possible to regulate in a meaningful way. Developing such people for the actuarial profession will need to be a priority in the next few years.

While all of those things are clearly still needed, it is becoming increasingly clear to me now that they will not be enough to secure a job as industry leaders double down.

Source: https://www.ft.com/content/99b6acb7-a079-4f57-a7bd-8317c1fbb728

And perhaps even worse than the threat of not getting a job immediately following graduation is the threat of becoming a reverse-centaur. As Cory Doctorow explains the term:

A centaur is a human being who is assisted by a machine that does some onerous task (like transcribing 40 hours of podcasts). A reverse-centaur is a machine that is assisted by a human being, who is expected to work at the machine’s pace.

We have known about reverse-centaurs since at least Charlie Chaplin’s Modern Times in 1936.

By Charlie Chaplin – YouTube, Public Domain, https://commons.wikimedia.org/w/index.php?curid=68516472

Think Amazon driver or worker in a fulfillment centre, sure, but now also think of highly competitive and well-paid, but still ultimately human-in-the-loop, roles responsible for AI systems designed to produce output where errors are hard to spot and therefore to stop. In the latter kind of role you are the human scapegoat: in the phrasing of Dan Davies an “accountability sink”, or in that of Madeleine Clare Elish a “moral crumple zone”, all rolled into one. This is not where you want to be as an early career professional.

So how to avoid this outcome? Obviously, if you have options other than roles where a reverse-centaur situation is unavoidable, you should take them. Questions to ask at interview to identify whether a role is irretrievably reverse-centauresque would be of the following sort:

  1. How big a team would I be working in? (This might not identify a reverse-centaur role on its own: you might be one of a bank of reverse-centaurs all working in parallel and identified “as a team” while in reality having little interaction with each other).
  2. What would a typical day be in the role? This should smoke it out unless the smokescreen they put up obscures it. If you don’t understand the first answer, follow up to get specifics.
  3. Who would I report to? Get to meet them if possible. Establish whether they are a technical expert in the field you will be working in. If they aren’t, that means you are!
  4. Speak to someone who has previously held the role if possible. Although bear in mind that, if it is a true reverse-centaur role and their progress to an actual centaur role is contingent on you taking this one, they may not be completely forthcoming about all of the details.

If you have been successful in a highly competitive recruitment process, you may have a little bit of leverage before you sign the contract, so if there are aspects which you think still need clarifying, then that is the time to do so. If you recognise some reverse-centauresque elements from your questioning above, but you think the company may be amenable, then negotiate. Once you are in, you will understand a lot more about the nature of the role of course, but without threatening to leave (which is as damaging to you as an early career professional as it is to them) you may have limited negotiation options at that stage.

In order to do this successfully, self knowledge will be key. It is that point from 2017:

  • clarity about why they are there and the desire to use their skills to solve problems

To that word skills I would now add “capabilities” in the sense used in a wonderful essay on this subject by Carlo Iacono called Teach Judgement, Not Prompts.

You still need the skills. So, for example, if you are going into roles where AI systems are producing code, you need sufficiently good coding skills yourself to write a program that checks the code written by the AI system. If the AI system is producing communications, your own communication skills need to go beyond producing work that communicates effectively to an audience, to the next level where you understand what it is about your own communication that achieves that: what is necessary, what is unnecessary, what gets in the way of effective communication – ie all of the things that the AI system is likely to get wrong. Then you have a template against which to assess the output from an AI system, and for designing better prompts.
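To make the coding case concrete, here is a minimal sketch of what checking AI-written code might look like in practice. All the names here (`ai_generated_sort`, `check_sort`, the test cases) are hypothetical stand-ins, not anything from a real AI tool: the point is simply that your checks encode the guarantees *you* know the code must satisfy, independently of how it was produced.

```python
# A minimal sketch of checking AI-written code with your own tests.
# `ai_generated_sort` is a hypothetical stand-in for a function an AI
# system produced; `check_sort` encodes the properties we require of it.

def ai_generated_sort(items):
    """Stand-in for the AI-produced code under review."""
    return sorted(items)

def check_sort(fn):
    """Run the candidate function against known cases and required
    properties; return a list of failure messages (empty = passed)."""
    failures = []
    cases = [[], [1], [3, 1, 2], [2, 2, 1], [-5, 0, 5]]
    for case in cases:
        result = fn(list(case))
        # Property 1: output must be in sorted order.
        if result != sorted(case):
            failures.append(f"wrong order for {case}: {result}")
        # Property 2: output must contain exactly the input elements.
        if sorted(result) != sorted(case):
            failures.append(f"elements changed for {case}: {result}")
    return failures

print(check_sort(ai_generated_sort))  # [] when every check passes
```

The design point is that the checker tests properties (ordering, preservation of elements) rather than mirroring the implementation, so it remains a useful independent check however the AI-generated code happens to be written.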

However, specific skills and tools come and go, so you need to develop something more durable alongside them. Carlo has set out four “capabilities” as follows:

  1. Epistemic rigour, which is being very disciplined about challenging what we actually know in any given situation. You need to be able to spot when AI output is over-confident given the evidence, or when a correlation is presented as causation – what my tutors used to refer to as “hand waving”.
  2. Synthesis is about integrating different perspectives into an overall understanding. Making connections between seemingly unrelated areas is something AI systems are generally less good at than analysis.
  3. Judgement is knowing what to do in a new situation, beyond obvious precedent. You get to develop judgement by making decisions under uncertainty, receiving feedback, and refining your internal models.
  4. Cognitive sovereignty is all about maintaining your independence of thought when considering AI-generated content. Knowing when to accept AI outputs and when not to.

All of these capabilities can be developed with reflective practice, getting feedback and refining your approach. As Carlo says:

These capabilities don’t just help someone work with AI. They make someone worth augmenting in the first place.

In other words, if you can demonstrate these capabilities, companies who are themselves dealing with huge uncertainty about how much value they are getting from their AI systems and what they can safely be used for will find you an attractive and reassuring hire. Then you will be the centaur, using the increasingly capable systems to improve your own and their productivity while remaining in overall control of the process, rather than a reverse-centaur, for whom none of that is true.

One sure sign that you are straying into reverse-centaur territory is when a disproportionate amount of your time is spent on pattern recognition (eg basing an email, piece of coding or valuation report on an earlier one dealing with a similar problem). That approach was always predicated on being able to interact with a more experienced human who understood what was involved in the task at some peer review stage. But it falls apart when there is no human to discuss the earlier piece of work with, because the human no longer works there, or because a human didn’t produce the earlier piece of work. The fake-it-until-you-make-it approach is not going to work in environments like these, where you are more likely to fake it until you break it. And pattern recognition is something an AI system will always be able to do much better and faster than you.

Instead, question everything using the capabilities you have developed. If you are going to be put into potentially compromising situations in terms of the responsibilities you are implicitly taking on, the decisions needing to be made and the limitations of the available knowledge and assumptions on which those decisions will need to be based, then this needs to be made explicit, to yourself and the people you are working with. Clarity will help the company which is trying to use these new tools in a responsible way as much as it helps you. Learning is going to be happening for them as much as it is for you here in this new landscape.

And if the company doesn’t want to have these discussions or allow you to hamper the “efficiency” of their processes by trying to regulate them effectively? Then you should leave as soon as you possibly can professionally and certainly before you become their moral crumple zone. No job is worth the loss of your professional reputation at the start of your career – these are the risks companies used to protect their senior people of the future from, and companies that are not doing this are clearly not thinking about the future at all. Which is likely to mean that they won’t have one.

To return to Cory Doctorow:

Science fiction’s superpower isn’t thinking up new technologies – it’s thinking up new social arrangements for technology. What the gadget does is nowhere near as important as who the gadget does it for and who it does it to.

You are going to have to be the generation who works these things out first for these new AI tools. And you will be reshaping the industrial landscape for future generations by doing so.

And the job of the university and further education sectors will increasingly be to equip you with both the skills and the capabilities to manage this process, whatever your course title.

Trump mentions in BBC News US & Canada top feed around 4.30pm today. Out of 12 stories, 8 mention Trump by name in the headline https://www.bbc.co.uk/news/world/us_and_canada

You will all have seen the work mug staple: “The Difficult We Do Immediately. The Impossible Takes a Little Longer”. The quotation in the title, originally attributed to Charles Alexandre de Calonne, the Finance Minister for Louis XVI, in response to a request for money from his Queen, Marie Antoinette, appeared in a collection from 1794 – a year after Louis and Marie Antoinette (but not Charles, who survived another nine years) died on the guillotine, and five since George Washington had been inaugurated as the first President of the United States. It seems as if the seemingly impossible may need to be attempted once again.

So let’s start by expanding on the problem which I brought up in my last post. The problem goes much wider than Donald Trump. He is assembling a court of loyalists around him, in the style of a mob boss, which, as has been observed by others, has been the prelude to fascism in the past. As Jason Stanley, Professor of Philosophy at Yale and author of Erasing History: how fascists rewrite the past to control the future, puts it: “the United States is your enemy”. There is also considerable circumstantial evidence to suggest that Trump is considered an agent of influence by Putin’s regime in Russia.

The difficulty of what I am about to suggest is also the reason why it is so urgent: our relationship with the United States (the one whose special nature we keep needing successive US Presidents to reassure us of) is positively symbiotic. George Monbiot lists some of our vulnerabilities here:

  1. Through the “Five Eyes” partnership, the UK automatically shares signals intelligence, human intelligence and defence intelligence with the US government. The two governments, with other western nations, run a wide range of joint intelligence programmes, such as Prism, Echelon, Tempora and XKeyscore. The US National Security Agency (NSA) uses the UK agency GCHQ as a subcontractor.
  2. Depending on whose definitions you accept, the US has either 11 or 13 military bases and listening stations in the UK. They include RAF Lakenheath in Suffolk, from which it deploys F-35 jets; RAF Menwith Hill in North Yorkshire, which carries out military espionage and operational support for the NSA in the US; RAF Croughton, part-operated by the CIA, which allegedly used the base to spy on Angela Merkel among many others; and RAF Fylingdales, part of the US Space Surveillance Network. If the US now sides with Russia against the UK and Europe, these could just as well be Russian bases and listening stations.
  3. Then we come to our weapon systems… among the crucial components of our defence are F-35 stealth jets, designed and patented in the US.
  4. Many of our weapons systems might be dependent on US CPUs and other digital technologies, or on US systems such as Starlink, owned by Musk, or GPS, owned by the US Space Force. Which of our weapons systems could achieve battle-readiness without US involvement and consent? Which could be remotely disabled by the US military?
  5. Then there is our independent nuclear deterrent, which is “neither British nor independent” according to Professor Norman Dombey, Emeritus Professor of Physics and Astronomy at the University of Sussex.

Then there is the sheer cost of rearming, with Europe, to the extent necessary in the absence of the United States’ support: estimates suggest 3.5% rather than 2.5% of GDP will be required, meaning the UK Government, with its WCAIWCDI approach described here, will need to find something in addition to the foreign aid budget to ransack. I will be talking more about defence spending in a future post.

It is small wonder that some commentators, such as Arthur Snell, former Assistant Director for Counter-Terrorism at the Foreign and Commonwealth Office, conclude that disentangling ourselves from the United States may be impossible. And that is just considering defence and security considerations.

On the economy the symbiosis is just as evident. First of all there is the sizeable proportion of our imports and exports of both goods and services which are with the United States. As recently as June 2023, we were trying hard to develop these further with something called the Atlantic Declaration. Although, as a recent speech by Megan Greene of the Bank of England’s Monetary Policy Committee shows, our trade with the US as a proportion of the total has remained remarkably stable since at least 2000.

Source: ONS and Bank calculations. Trade weights for each trading partner are calculated as the sum of bilateral exports and imports as a share of total UK trade. Data is annual and in current prices. EU refers to the EU27. Latest data point is 2023

Culturally, the United States is embedded in our laptops and mobile phones, our television programmes and movies, and our social media. Its concerns have permeated our language and our politics. A reasonable proportion of our political and financial elite have been to their universities and theirs to ours. Many of our employers have US parents: just in the actuarial world, two of the three biggest consultancies (Aon and Willis Towers Watson) are described as British-American firms, with the other one (Mercer) headquartered in New York. It has Apple. It has Amazon. It has Google. It has Meta and, of course, X.

And perhaps the greatest entanglement of our two countries is political, to the extent that we routinely send our politicians to each other’s countries to support election campaigns and our media breathlessly report every in and out of the US Presidential elections. We are lucky if a French or German one is mentioned more than a couple of weeks before it takes place. Whether it is the language thing (we are still VERY resistant to learning other languages) or the post-imperial thing (feeling like we have a special understanding of the problems the United States faces as a self-appointed global police force) or the degree of financialisation of our economy or some other reason, it is very hard to avoid a sense of being conjoined with the United States of America.

But it is precisely because our relationship is so close in so many important areas that we are particularly vulnerable to US pressure – the harder it will be to disentangle ourselves, the more urgent it is that we do.

As David Allen Green puts it this week, the US is currently undergoing a diplomatic revolution. Originally applied to France’s realignment of all of its alliances away from Prussia and towards Austria, which ultimately led to the work mug motto at the start of this piece, the US appears to be realigning itself towards Russia and away from the UK and the EU. As Green goes on to say:

Other countries would now be prudent to regulate their affairs so as to minimise or eliminate their dependency on the United States – it is no longer a question of waiting out until the next United States elections.

And other political systems would be wise to limit what can be done within their own constitutions by executive order, and to strengthen the roles of the legislature and the judiciary (and also of internal independent legal advice within government).

The last seems key to me. We cannot, particularly now we are outside the EU, afford for our main ally to be capable of being so capricious. This applies whether or not the US is allowed to, and does, elect a President in 2028 who is respectful of its institutions and constitution. We always felt Americans were very respectful of their constitution because they never stopped talking about it, but it turns out to have been a thin veneer with little meaning. Much like our discussion of sovereignty in the UK.

The first thing we need to do is to stop obsessing about what John Mulaney memorably referred to as a “horse in a hospital” in 2019. Despite the fact that that was five years ago, and that we have now seen the horse in the hospital before, many have been turned off news coverage altogether by the anxiety caused by the constant media narration of what Trump and Musk have done next each day. The dangers of treating the Trump and Musk chaos as a TV show are potentially existential in the US, but grave for us in the UK too.

While we may have deep sympathy for the people in the US and other countries caught up in the chaos, our priority has to be to get our own house in order. Otherwise we won’t be any help to anyone.

My priorities would be the ones I set out in October 2022, only now with much greater urgency.

  1. We can’t have a party with only 20% of the popular vote (34% of a 60% turnout) winning an absolute majority of 174 seats. We need proportional representation, so that every vote counts equally, and perhaps we might then get somewhere near the 82.5% turnout of Germany’s last election.
  2. Reform media ownership and promote plurality in support of a more democratic and accountable media system. The Media Reform Coalition has produced a manifesto for a people’s media which I support: it includes proposals for an Independent Media Commons – with participatory newsrooms, community radio stations, digital innovators and cultural producers, supported by democratically-controlled public resources to tell the stories of all the UK’s communities. As we know, our social media is controlled by Meta (with Facebook, WhatsApp and Instagram, each with more than 2 billion active users) and by Google with YouTube, also with more than 2 billion active users. X still has over half a billion, despite what Musk has done with it. In newspapers, 90% of daily circulation is controlled by three firms: News UK, Daily Mail Group and Reach plc (which has most of the local titles you’ve ever heard of, including the Birmingham Mail and Birmingham Live, as well as The Daily Express and the Daily Star).
  3. Reform election finance. Recommendations for doing this were provided in the July 2021 report by the Committee on Standards in Public Life. An eye-watering amount of money was spent in the US Presidential Election this time: the Democrats spent $1.8 billion and the Republicans $1.4 billion, with $2.6 billion and $1.7 billion respectively being spent by the two parties on the Senate and House races. In the UK, paradoxically, the relatively small amount of money donated to parties means that they are potentially more vulnerable to well organised lobbying operations. This is why the offer of $100 million by Musk to Reform led to calls to restrict foreign political donations to profits generated within the UK.

This way we would be more resilient to the many ways that the current chaotic United States establishment can reach into our own politics and governance, and start to develop policies with broad support which can reduce our dependency on the United States.