The rear view mirror isn’t going to help us any more

Source: Wikimedia Commons: Shattered right-hand side mirror on a 5-series BMW in Durham, North Carolina by Ildar Sagdejev

I would like to start this week’s post with a quote from Carlo Iacono, from a Substack piece he did a couple of weeks ago called The Questions Nobody Is Funding:

What is a human being for? What do we owe the future? What remains worth the difficulty of learning?

These are not questions you will find in the OECD’s AI Literacy Framework. They are not addressed in the World Economic Forum’s Education 4.0 agenda. They do not appear in the competency matrices cascading through national education systems. Instead, we get learning objectives and assessment criteria. Employability outcomes and digital capabilities. The language of preparation, as if the future were already decided and our job were simply to ready people for it.

I think this articulates well the central challenge of AI for education. Whether you think this is the beginning of a future where augmented humans move into a different type of existence to any we have known before; or you believe very little will be left behind in the rubble from the inevitable burst of the AI bubble when it comes and will be, at least temporarily, forgotten in the most devastating stock market crash and depression for a century; or you hold both these beliefs at the same time; or you are somewhere in between, it is difficult to see how the orderly world of competency matrices, learning objectives, assessment criteria, employability outcomes and digital capabilities can easily survive the period of technological, cultural, economic and political disruption which we appear to have entered. Looking in the rear view mirror and trying to extrapolate what you see into the future is not going to work for us any more.

Whether you think, like Cory Doctorow, in his recent speech at the University of Washington called The Reverse Centaur’s Guide to Criticizing AI, that:

AI is the asbestos in the walls of our technological society, stuffed there with wild abandon by a finance sector and tech monopolists run amok. We will be excavating it for a generation or more.

Or you think, as Henry Farrell has suggested in another article called Large Language Models As The Tales That Are Sung:

Technologies such as LLMs are neither going to transcend humanity as the holdouts on one side still hope, nor disappear, as other holdouts might like. We’re going to have to figure out ways to talk about them better and more clearly.

We are certainly going to have to figure out ways to talk about LLMs and other forms of AI more clearly, so that the decisions we need to make about how to accommodate them into society can be made with the maximum level of participation and consensus. And this seems to be the key for me with respect to education too. We do need people graduating from our education system understanding clearly what LLMs can and cannot do, which is a tricky path to navigate at the moment as a lot of money is being concentrated on persuading you that they can do pretty much anything. One example here is a writers’ room of four LLMs, asked to critique each other by pushing the output from one into the prompts for the others, reminiscent of The Human Centipede. Which immediately reminded me of this take from later in that Cory Doctorow speech:

And I’ll never forget when one writer turned to me and said, “You know, you prompt an LLM exactly the same way an exec gives shitty notes to a writers’ room. You know: ‘Make me ET, except it’s about a dog, and put a love interest in there, and a car chase in the second act.’ The difference is, you say that to a writers’ room and they all make fun of you and call you a fucking idiot suit. But you say it to an LLM and it will cheerfully shit out a terrible script that conforms exactly to that spec (you know, Air Bud).”
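For the curious, the shape of that four-LLM “writers’ room” can be sketched in a few lines of Python. The model calls here are stubbed out – the example names no particular models or API, so the stubs and all the names below are my own invention – but the loop is the point: each model’s output gets pushed into the prompts of the others, round after round.

```python
# A sketch of the four-LLM "writers' room" pipeline described above.
# Real model calls are stubbed out (no actual API is specified); each
# stub would in practice be a call to a different hosted model.

def make_stub_model(name):
    """Stand-in for a hosted LLM; returns a canned 'critique'."""
    def model(prompt):
        return f"[{name}] notes on: {prompt[:30]}"
    return model

def writers_room(brief, models, rounds=2):
    """Each round, every model critiques the current draft, and the
    critiques are folded back into the next prompt -- the 'Human
    Centipede' step."""
    draft = brief
    for _ in range(rounds):
        critiques = [m(draft) for m in models]
        draft = brief + " // " + " // ".join(critiques)
    return draft

room = [make_stub_model(n) for n in ("A", "B", "C", "D")]
script = writers_room("ET, but it's about a dog, with a car chase", room)
```

Run it and you get exactly what Doctorow’s writer predicted: a draft that conforms precisely to the spec, with four layers of machine-generated notes bolted on and nobody in the room to call the idea out.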

So, back to Carlo’s little questions:

What is a human being for?

A lofty question certainly, and not one I am going to tackle in a blog post. But perhaps I can say a bit about what a human being is not for. This is the key to Henry Farrell’s piece, which is his take on the humanist critique of AI. We are presumably primarily designing the future for humans. All humans. Not just Tech Bros. And the design needs to bear that in mind. For example, a human being is not, in my opinion, for this (from the Cory Doctorow link):

Like an Amazon delivery driver, who sits in a cabin surrounded by AI cameras, that monitor the driver’s eyes and take points off if the driver looks in a proscribed direction, and monitors the driver’s mouth because singing isn’t allowed on the job, and rats the driver out to the boss if they don’t make quota.

The driver is in that van because the van can’t drive itself and can’t get a parcel from the curb to your porch. The driver is a peripheral for a van, and the van drives the driver, at superhuman speed, demanding superhuman endurance. But the driver is human, so the van doesn’t just use the driver. The van uses the driver up.

The first task of the education establishment, I think, is to attempt to protect the graduate from becoming the reverse-centaur described above, whether a delivery driver, a coder (where, additionally, the human-in-the-loop becomes the accountability sink for everything the AI gets wrong) or a radiologist. This will often be resisted by the employers whose needs you are currently very sensitive to as educators (many of whom are senior enough to get to use the new technologies as a centaur rather than be used by them as a reverse-centaur, tend to struggle to put themselves in anyone else’s shoes and, frankly, can’t see what all the fuss is about) but, remember, the cosy world of employability outcomes is over. The employers are not sticking to the implicit agreement to employ your graduates if you delivered the outcomes, and therefore neither should you. Your responsibility in education is to the students, not their potential future employers, now that their interests no longer appear to be aligned.

What do we owe the future?

This depends on what you mean by “the future” of course. If it is some technological dystopia of diminished opportunities for most (even for making friends as seemingly envisioned by some of the top Tech Bros), then nothing at all. But if it is the future which is going to support your children and their children, you obviously owe it a lot. But what do you owe it? What is owed is often converted into money by the political right, and used to justify not running up public debt in the present so as not to “impoverish” future generations. What that approach generally achieves is to impoverish both the current and future generations.

But if you think of owing resources, institutions and infrastructure to the next generation, then that is a responsibility that we should take seriously. And part of that is to produce an educated generation, equipped with the tools, systems, institutions and infrastructure they will need. The education institutions must take steps to make sure they survive in a relevant way, embedded in systems which support individuals and proselytising the value of education for all. They must ensure that their graduates understand and have facility with the essential tools they will need, have developed the ability to learn new skills as they need them, and can recognise when that is. This is about developing individuals who leave no longer dependent on the institutions, able to work things out for themselves rather than requiring never-ending education inside an institution.

What remains worth the difficulty of learning?

The skills already mentioned will be the core ones for everyone, and these will need to be hammered out in terms everyone can understand. But in the world of post-scarcity education, which is here but which we have not yet fully embraced, the rest will be up to us. A large part of the education of the future will need to be about equipping us all to understand what we now have access to and when and how to access it. We will all have different things we are interested in, or end up involved with and needing to be educated about. It will be up to each of us to decide which things are worth the difficulty of learning, but to make those decisions we will need education that can support the development of judgement.

For education institutions, the question will be what is not worth the difficulty of learning? Credentialising based on now relatively meaningless assessment methods will not cut it. This is where the confrontation with employers and politicians is likely to come. Essential skills and their related knowledge will be better developed and assessed via more open-ended project work and online assessment of it to check understanding. These will need to become the norm, with written examinations becoming less and less prevalent. Not because of fear of cheating and plagiarism, but because an outcome which can be replicated that easily by AI is not worth assessing in the first place.

As William Gibson apparently said at some point in 1992:

“The future has arrived — it’s just not evenly distributed yet.”

The future of education will be the distribution problem.

So this is my 42nd blog post of the year and the 8th where I have referenced Cory Doctorow. Thought it was more to be honest, so influential has he been on my thought, particularly as I have delved deeper into what, how and why the AI Rush is proceeding and what it means for the people exiting universities over the next few years.

Yesterday Cory published a reminder of his book reviews this year. He is an amazing book reviewer. There are 24 on the list this year, and I want to read every one of them on the strength of his reviews alone.

I would like to repay the compliment by reviewing his latest book: Enshittification (the other publication this year – Picks and Shovels – is also well worth your time by the way). Can’t believe this wasn’t the word of the year rather than rage bait, as it explains considerably more about the times we are living in.

I have been a fan of Doctorow for a couple of years now. I had had Walkaway sat on my shelves for a few years before I read it and was immediately enthralled by his tale of a post-scarcity future which had still somehow descended into an inter-generational power struggle hellscape. I moved on to the Little Brother books, now being reenacted by Trump with his ICE force in one major US city after another. Followed those up with The Lost Cause, where the teenagers try desperately to bridge the gap across the generations with MAGA people, with tragic results along the way but a grim determination at the end: “the surest way to lose is to stop running”. From there I migrated to the Marty Hench thrillers, his non-fiction The Internet Con (which details the argument for interoperability, ie the ability of any platform to interact with another) and his short fiction (I loved Radicalised, not just for the grimly prophetic Radicalised novella in the collection, but also the gleeful insanity of Unauthorised Bread). I highly recommend them all.

I came to Enshittification after reading his Pluralistic blog most days for the last year and a half, so was initially disappointed to find very little new as I started working my way through it. However, the first two parts – The Natural History and The Pathology – turn out to be a patient explanation of the concept of enshittification and how it operates, assuming no previous engagement with the term, all in one place.

Enshittification, as defined by Cory Doctorow, proceeds as follows:

  1. First, platforms are good to their users.
  2. Then they abuse their users to make things better for their business customers.
  3. Next, they abuse those business customers to claw back all the value for themselves.
  4. Finally, they have become a giant pile of shit.

So far, so familiar. But then I got to Part Three, explaining The Epidemiology of enshittification, and the book took off for me. The erosion of antitrust (what we would call competition) law since Carter. “Antitrust’s Vietnam” (how Robert Bork described the 12 years during which IBM fought and outspent the US Department of Justice year after year, defending its monopolisation case until Reagan became President). How this led to an opening to develop the operating system for IBM when it entered the personal computer market. How this led to Microsoft, etc. Then how the death of competition also killed Big Tech regulation (regulating a competitive market which acts against collusion is much easier than regulating one with a small number of big players which absolutely will collude with each other).

And then we get to my favourite chapter of the book “Reverse-Centaurs and Chickenisation”. Any regular reader of this blog will already be familiar with what a reverse centaur is, although Cory has developed a snappy definition in the process of writing this book:

A reverse-centaur is a machine that uses a human to accomplish more than the machine could manage on its own.

And if that isn’t chilling enough for you, the description of the practices of poultry packers and how they control the lives of the nominally self-employed chicken farmers of the US, and how these practices have now been exported to companies like Amazon and Arise and Uber, should certainly be. A rare moment of hilarity in a generally sordid tale of utter exploitation: the prankster who collected up the bottled piss of the Amazon drivers who weren’t allowed a loo break and resold it on Amazon‘s own platform as “a bitter lemon drink” called Release Energy. Amazon recategorised it as a beverage without asking for any documentation to prove it was fit to drink and then, when it was so successful it topped their sales chart, rang the prankster up to discuss using Amazon for shipping and fulfillment. My favourite bit, though, is when Cory gets on to the production of digital rights management (DRM)-free audio versions of his own books.

The central point of the DRM issue is, as Cory puts it, “how perverse DMCA 1201 is”:

If I, as the author, narrator, and investor in an audiobook, allow Amazon to sell you that book and later want to provide you with a tool so you can take your book to a rival platform, I will be committing a felony punishable by a five-year prison sentence and a $500,000 fine.

To put this in perspective: If you were to simply locate this book on a pirate torrent site and download it without paying for it, your penalty under copyright law is substantially less punitive than the penalty I would face for helping you remove the audiobook I made from Amazon’s walled garden. What’s more, if you were to visit a truck stop and shoplift my audiobook on CD from a spinner rack, you would face a significantly lighter penalty for stealing a physical item than I would for providing you with the means to take a copyrighted work that I created and financed out of the Amazon ecosystem. Finally, if you were to hijack the truck that delivers that CD to the truck stop and steal an entire fifty-three-foot trailer full of audiobooks, you would likely face a shorter prison sentence than I would for helping you break the DRM on a title I own.

DMCA 1201 is the big brake on interoperability. It is the reason, if you have an HP printer, you have to pay $10,000 a gallon for ink or risk committing a criminal offence by “circumventing an access control” (which is the software HP have installed on their printers to stop you using anyone else’s printer cartridges). And the reason for the increasing insistence on computer chips in everything from toasters (see “Unauthorised Bread” for where this could lead) to wheelchairs – so that using them in ways the manufacturer and its shareholders disapprove of becomes illegal.

The one last bastion against enshittification by Big Tech was the tech workers themselves. Then the US tech sector laid off 260,000 workers in 2023 and a further 100,000 in the first half of 2024.

In case you are feeling a little depressed (and hopefully very angry too) at this stage, Part 4 is called The Cure. This details the four forces that can discipline Big Tech and how they can all be revived, namely:

  1. Competition
  2. Regulation
  3. Interoperability
  4. Tech worker power

As Cory concludes the book:

Martin Luther King Jr once said, “It may be true that the law cannot make a man love me, but it can stop him lynching me, and I think that’s pretty important, also.”

And it may be true that the law can’t force corporate sociopaths to conceive of you as a human being entitled to dignity and fair treatment, and not just an ambulatory wallet, a supply of gut bacteria for the immortal colony organism that is a limited liability corporation.

But it can make that exec fear you enough to treat you fairly and afford you dignity, even if he doesn’t think you deserve it.

And I think that’s pretty important.

I was reading Enshittification on the train journey back from Hereford after visiting the Hay Winter Weekend, where I had listened to, amongst others, the oh-I’m-totally-not-working-for-Meta-any-more-but-somehow-haven’t-got-a-single-critical-word-to-say-about-them former Deputy Prime Minister Nick Clegg. While I was on the train, a man across the aisle had taken the decision to conduct a conversation with first Google and then Apple on speakerphone. A particular highlight was him just shouting “no, no, no!” at Google‘s bot trying to give him options. He had already been to the Vodafone shop that morning and was on his way to an appointment which he couldn’t get at the Apple Store on New Street in Birmingham. He spotted the title of my book and, when I told him what enshittification meant, and how it might make some sense out of the predicament he found himself in, took a photo of the cover.

My feeling is that enshittification goes beyond Big Tech. It is the defining industrial battle of our times. We shouldn’t primarily worry about whether it is coming from the private or the public sector, as enshittification can happen in both places: from hollowing out justice to “paying more for medicines… at the exact moment we can’t afford to pay enough doctors to prescribe them” in the public sector, where we already reside within the Government’s walled garden, to all of the outrages mentioned above and more in the private sector.

The PFI local health hubs set out in last week’s budget take us back to perhaps the ultimate enshittificatory contracts the Government ever entered into, certainly before the pandemic. The Government got locked into 40-year contracts, took all the risk, and all the profit was privatised. The turbo-charging of the original PFI came out of the Blair-Brown government’s mania for keeping capital spending off the balance sheet in defence of Gordon Brown’s “Golden Rule”, which has now been replaced by Rachel Reeves’ equally enshittifying fiscal rules. All the profits (or, increasingly, rents, as Doctorow discusses in the chapter on Varoufakis’ concept of Technofeudalism) from turning the offer to shit always seem to end up in the private sector. The battle is against enshittification from both private monopolies and, by proxy, public ones.

Enshittification is, ultimately, a positive and empowering book which I strongly recommend you buy, avoiding Amazon if you can. We can have a better internet than this. We can strike a better deal with Big Tech over how we run our lives. But the surest way to lose is to stop running.

And next time a dead-eyed Amazon driver turns up at your door, be nice; they are probably having a worse day than you are.

On 20 November, the UK Covid-19 Inquiry published its second report and recommendations following its investigation into ‘Core decision-making and political governance’. The following day these were the headlines:

This contrasts with the Inquiry’s first report and recommendations following its investigation into the UK’s ‘Resilience and preparedness (Module 1)’ on Thursday 18 July 2024. Then the following day’s headlines looked like this:

Whereas the first report had recommended a radical simplification of the civil emergency preparedness and resilience systems, including:

  • A new approach to risk assessment;
  • A new UK-wide approach to the development of strategy, which learns lessons from the past;
  • Better systems of data collection and sharing in advance of future pandemics;
  • Holding a UK-wide pandemic response exercise at least every three years and publishing the outcome; and
  • The creation of a single, independent statutory body responsible for whole system preparedness and response.

The second report on the other hand merely reran the pandemic, pointing out where we went wrong on:

  • The emergence of Covid-19;
  • The first UK-wide lockdown;
  • Exiting the first lockdown;
  • The second wave; and
  • The vaccination rollout and Delta and Omicron variants.

And crucially who to blame for it. Its recommendations were far less specific and actionable in my view than those from the first report. And yet it got all the headlines, with glowering images of Baroness Hallett and pictures of Boris Johnson with head bowed.

The first report dealt with what we could do better next time and was virtually ignored (only The Daily Mirror and The Independent carried “They failed us all” headlines about the Covid Inquiry first report). The second dealt with who to blame and it dominated the headlines. I think this neatly encapsulates what is wrong with us as a country and why we never seem to be able to learn from our own past mistakes or the examples of other countries.

This is not about defending Boris Johnson or any of his ministers. It is about realising that they are much less important than our own ability to sort out our problems and study any evidence we can to help us do that.

The NHS suffers from the same problem, as Roy Lilley has described here: too many inquiries and most of their recommendations ignored. Again and again and again. We choose to focus on the minor and irrelevant at the expense of the major and important. Again and again and again. As Lilley says:

Until we make it OK for people to say… I made a mistake… we will forever be trapped in a Kafka world of inquiries coming to the same conclusions…

…If inquiries worked, we’d have the safest healthcare system in the world. 

Instead, we have a system addicted to investigating itself and forgetting the answers.

It is part of a pattern repeated yesterday, focusing on the micro when our problems are macro. Rachel Reeves increased taxes by £26 billion in yesterday’s budget, which was much less than the £40 billion in her first budget, and yet still led to the BBC reporting “Reeves chooses to tax big and spend big” and the FT leading with “Rachel Reeves’ Budget raises UK tax take to all-time high“, and with this graph:

This is hilariously at odds with the message of what it was reporting last week:

The latter was obviously an attempt to head off a wealth tax, which appears to have been largely successful. Our averageness when it comes to tax, though, is supported by this graph using OECD data from Tax Policy Associates:

Our position in the middle of the pack will be little affected by what happened yesterday. And that and all the chatter about the OBR leaking it all an hour in advance rather drowned out the fact that there was relatively little additional spending (around £12 billion overall, a quarter of which was on the welcome removal of the two-child limit). The main point was to increase our “fiscal headroom” to £22 billion, ie the amount the Government can spend before they breach their own fiscal rules.

It looks like we are going to do what we are going to do, with fiscal headroom management masquerading as economic policy, and otherwise just sit around waiting for the next disaster. Which we will then have a big inquiry about to tell us that we weren’t remotely prepared for it. Which we will then ignore…and so it continues. Again and again and again.

A couple of weeks ago I wanted to find an article I had written about heat pumps to check something. So I Googled weknow0 and heat pump. This did give me the article, from December 2022, I was after, but also an “AI overview” that I hadn’t requested. The above is what it told me.

Now this is inaccurate on a number of counts. Firstly, I have published 226 articles over the more than 12 years I have been writing on weknow0.co.uk and I have only mentioned heat pumps in two of these. These articles did focus on the points mentioned in 3 of the 4 bullet points above and in one of them I also set out how the market at the time (December 2022) was stacked against anyone acquiring a heat pump, a state of affairs which has thankfully improved considerably since. However to claim that my blog “provides a consumer-focused perspective in the practicalities and challenges of domestic heat pump adoption in the UK” is clearly hilarious.

In fact anyone seeing that would assume I talked about little other than heat pumps, so I decided to do a search on something else that I talk about infrequently and see what I got (I searched “weknow0 science fiction”):

This seems a considerably better summary of the recent activity on the blog, which is also unrecognisable as the blog summarised in response to the previous search.

Right at the end, it suggests a reason for the title of the blog which isn’t an unreasonable guess from a regular reader. But guess it still is, and it does not appear to have processed the significant number of blog posts with variants of we know zero in the title to fine tune its take.

So someone using the AI overview as a research tool would get a completely different view of what the blog was about depending upon which other word they used alongside weknow0. Perhaps that doesn’t matter too much to anyone other than me in this case, but it is part of a broader issue. It is not summarising the website it is suggesting it is summarising.

Of course many of you will now be shouting at me that I need to give the system more focused prompts. There is now a whole area of expertise, lectured in and written about at considerable length, called “prompt engineering”. There are senior professionals who have rarely given their juniors the time of day for years – offering the tersest responses to completely reasonable queries about the barely intelligible instructions they have handed down for a piece of work – who are suddenly prepared to spend hours and hours on prompt engineering so that the Metal Mickey in their phone or laptop can give them responses closer to what they were actually looking for.

At this point, perhaps we should hear from Sundar Pichai, the Google CEO:

https://www.bbc.co.uk/iplayer/episode/m002mgk1/the-interview-decisionmakers-sundar-pichai-running-the-google-empire

As part of Faisal Islam’s slightly gushing interview with Pichai, we learn that the AI overview on Google is “prone to errors” and needs to be used alongside such things as Google search. “Use them for what they are good at but don’t blindly trust them” he says of his tools which he admits to currently investing $90 billion a year in. This is of course a problem, as one of the reasons people are reluctantly resorting to the AI overview is because the basic Google search has become so enshittified.

And that kind of echoes what Cory Doctorow has said about Google. Google needs to maintain a narrative about growth. You will have picked this up if you watched the Pichai interview above, from the breathless stuff about “one of the most powerful men in the world” “perhaps being one of the easier things for AI to replicate one day” to:

You don’t want to constrain an economy based on energy. That will have consequences.

To the even more breathless stuff about us being 5 years from quantum computing being where generative AI is now.

The reason for all the growth talk, according to Doctorow, is that Google needs to be growing for it to be able to maintain a price-earnings ratio of 20 to 1, rather than the more typical 4 to 1 of a mature business. So it’s all about the share price. As Doctorow says:

Which is why Google is so desperately sweaty to maintain the narrative about its growth. That’s a difficult narrative to maintain, though. Google has 90% Search market-share, and nothing short of raising a billion humans to maturity and training them to be Google users (AKA “Google Classroom”) will produce any growth in its Search market-share. Google is so desperate to juice its search revenue that it actually made search worse on purpose so that you would have to run multiple searches (and see multiple rounds of ads) before you got the information you were seeking.

Investors have metabolized the story that AI will be a gigantic growth area, and so all the tech giants are in a battle to prove to investors that they will dominate AI as they dominated their own niches. You aren’t the target for AI, investors are: if they can be convinced that Google’s 90% Search market share will soon be joined by a 90% AI market share, they will continue to treat this decidedly tired and run-down company like a prize racehorse at the starting-gate.

This is why you are so often tricked into using AI, by accidentally grazing a part of your screen with a fingertip, summoning up a pestersome chatbot that requires six taps and ten seconds to banish: companies like Google have made their product teams’ bonuses contingent on getting normies to “use” AI and “use” is defined as “interact with AI for at least ten seconds.” Goodhart’s Law (“any metric becomes a target”) has turned every product you use into a trap for the unwary.

So here we are. AI isn’t meant for most of you, its results are “prone to errors” and need to be used alongside other corroborating material or “human validation”. It needs you to take a course in prompt engineering even if you never did the same to manage any of your human staff. It is primarily designed to persuade investors to keep the share price up to the levels the Board of Alphabet Inc have become accustomed to.

In my last post I referred to Dan Wang’s excellent new book, Breakneck, which I have now read at (for me) breakneck speed, finishing it in a week. It has made me realise how very little I knew about China.

Wang makes the point that China today is reminiscent of the US of a century ago. However he also makes the point that parts of the US were terrible to live in then: from racist segregation and lack of representation, to massive industrial pollution and insensitive planning decisions. As he says of the US:

The public soured on the idea of broad deference to US technocrats and engineers: urban planners (who were uprooting whole neighborhoods), defense officials (who were prosecuting the war in Vietnam), and industry regulators (who were cozying up to companies).

China meanwhile has a Politburo stuffed with engineers and is capable of making snap decisions without much regard to what people want. There is a sense of precarity about life there, with people treated as aggregates rather than as individuals. The country can take off in different directions very quickly and often does – there is a telling passage about the totally different life experiences of someone born in 1959 compared to someone born in 1949 (the worst year to be born in China according to Wang) – and even the elites can be dealt with brutally if they fall out of line with the current direction of travel. But they have created some impressive infrastructure, something which has become problematic for the US. Only around 10% of its GDP goes towards social spending, compared to 20% in the US and 30% amongst some European states, so there is no effective safety net. Think of the US portrayed in (as Christmas is fast approaching) “It’s a Wonderful Life” – a life that is hard to the point of brutality with destitution only one mistake away. And there is a level of social control alien to the west, controlling where people can live and work and very repressive of ethnoreligious minorities. And yet there is a feeling of progress and forward momentum which appears to be popular with most people in China.

As Wang notes at the end of his introduction:

“Breakneck” is the story of the Chinese state that yanked its people into modernity – an action rightfully envied by much of the world – using means that ran roughshod over many – an approach rightfully disdained by much of the world. It is also a reminder that the United States once knew the virtues of speed and ambitious construction.

The chapter on the one child policy, which ran for 35 years, is particularly chilling (China announced its first population fall in 2023 and its population is projected to halve to 700 million by 2100), and now the pressure is on women to have more children again. There is also a chapter on how China dealt with Covid – Wang experienced this first hand in Shanghai for 3 years – which helped me understand why we wasted so much money in the UK on Test and Trace: you would need to be an engineering state to see such a system through successfully, and even China ended up taking it too far in the end.

The economics of China is really interesting. As Wang notes:

China’s overbuilding has produced deep social, financial and environmental costs. The United States has no need to emulate it uncritically. But the Chinese experience does offer political lessons for America. China has shown that financial constraints are less binding than they are cracked up to be. As John Maynard Keynes said, “Anything we can actually do we can afford.” For an infrastructure-starved place like the United States, construction can generate long-run gains from higher economic activity that eventually surpass the immediate construction costs. And the experience of building big in underserved places is a means of redistribution that makes locals happy while satisfying fiscal conservatives who are normally skeptical of welfare payments.

This goes just as much for the UK, where pretty much everywhere outside London is infrastructure-starved. (And, as Nicholas Shaxson and John Christensen show here in their written evidence to a UK Parliamentary Committee, even where infrastructure is built outside London, the financing of it sucks money away from the area where the infrastructure is being built and towards finance centres, predominantly in London.) Yet there is strong resistance from all the main parties to significant redistribution via the benefit system. The result is inequalities which even the FT feels moved to comment on, and a map of multiple deprivation in England which looks like this:

The good news is that it doesn’t have to be this way in the UK: there are prominent examples of countries operating in a different way, eg China. The bad news is that China is not doing it because of the economics. It is doing it because the state was set up to build big from the beginning. It is in its nature. The lesson of China is that it will keep doing the same things whatever the situation (eg trying to fix the population fall caused by one engineering solution with another engineering solution). Sometimes the world economy will reward this approach and sometimes it will punish it, but that will not be the primary driver of how China behaves. I think this may be true of the US, the EU states and the UK too.

Daniel Kahneman showed us in Thinking, Fast and Slow how most of our mental space is used to rationalise decisions we have already taken. One of the places where I part company with Wang is in his reverence for economists: he believes that the US should listen more to both engineers and economists to challenge the lawyerly society.

In the foreword to The Principles of Economics Course (1990), edited by Phillip Saunders and William Walstad, Paul Samuelson, in 1970 the first American to win the Nobel Memorial Prize in Economic Sciences, wrote:

“Poets are the unacknowledged legislators of the World.” It was a poet who said that, exercising occupational license. Some sage, it may have been I, declared in similar vein: “I don’t care who writes a nation’s laws—or crafts its advanced treaties—if I can write its economic textbooks.” The first lick is the privileged one, impinging on the beginner’s tabula rasa at its most impressionable state.

My view would be that the economists are already in charge.

As a result, my fear is that economics is now used to rationalise decisions we have already made, in many countries including our own. We are going to do what we are going to do; the economics is just the fig leaf we use to rationalise policies which might otherwise appear unfair, cruel, divisive and hope-denying. The financial constraints are less binding than they are cracked up to be, but they are a convenient fiction for a government which otherwise lacks any guiding principles for spending and investment, and which therefore fears that without that fiction everyone would just ask for more resources, with no way of deciding between them.

New (left) and old (right) Naiku shrines during the 60th sengu at Ise Jingu, 1973, via Bock 1974

In his excellent new book, Breakneck, Dan Wang tells the story of the high-speed rail links which started construction in 2008 between San Francisco and Los Angeles and between Beijing and Shanghai. Both routes would be around 800 miles long when finished. The Beijing–Shanghai line opened in 2011 at a cost of $36 billion. To date, California has built only a small stretch of its line, as yet nowhere near either Los Angeles or San Francisco, and the latest estimate of the completed bill is $128 billion. Wang uses this, amongst other examples, to draw a distinction between the engineering state of China “building big at breakneck speed” and the lawyerly society of the United States “blocking everything it can, good and bad”.

Europe doesn’t get much of a mention, other than to be described as a “mausoleum”, which sounds rather JD Vance, and there is quite a lot in this book that I disagree with strongly, which I will return to. However there is also much to agree with, and never more so than when Wang talks about process knowledge.

Wang tells another story, of Ise Jingu in Japan. Every 20 years, exact copies of Naiku, Geku, and 14 other shrines here are built on vacant adjacent sites, after which the old shrines are demolished. Altogether 65 buildings, bridges, fences, and other structures are rebuilt this way. They were first built in 690; in 2033 they will be rebuilt for the 63rd time. The structures are built each time with the original 7th-century techniques, which involve no nails, just dowels and wood joints. Staff follow a 200-year tree-planting plan to ensure enough cypress trees are planted to make the surrounding forest self-sufficient. The 20-year intervals between rebuildings are the length of a generation, the older passing the techniques on to the younger.

This, rather like the oral tradition of folk stories and songs – passed on by each generation as contemporary narratives until they were written down, fixed in time, and quickly came to seem old-fashioned – is an extreme example of process knowledge. What is being preserved is not the Trigger’s Broom of temples at Ise Jingu, but the practical knowledge of how to rebuild them as they were originally built.

Trigger’s Broom. Source: https://www.youtube.com/watch?v=BUl6PooveJE

Process knowledge is the know-how of your experienced workforce that cannot easily be written down. It can develop where such a workforce works closely with researchers and engineers, creating feedback loops which can also accelerate innovation. Wang contrasts Shenzhen in China, where such a community exists, with Silicon Valley, where it doesn’t, forcing the United States to have such technological wonders as the iPhone manufactured in China.

What happens when you don’t have process knowledge? Well, one example would be our nuclear industry, where lack of experience with pressurised water reactors has slowed down the development of new power stations and required us to rely considerably on French expertise. There are many other technical skill shortages.

China has recognised the supreme importance of process knowledge as compared to the American concern with intellectual property (IP). IP can of course be bought and sold as a commodity and owned as capital, whereas process knowledge tends to rest within a skilled workforce.

This may then be the path to resilience for the skilled workers of the future in the face of the AI-ification of their professions. Companies are being sold AI systems for many things at the moment, some of which will clearly not work: either their error rates will be too high, or they will need so much “human validation” (a lovely phrase used recently by a good friend of mine who is actively involved in integrating AI systems into his manufacturing processes) that they are not deemed practical. For early career workers entering these fields, the demonstration of appropriate process knowledge, or the ability to develop it very quickly, may be the key to surviving the AI roller coaster they face over the next few years: actionable skills and knowledge which allow them to manage such systems rather than being managed by them. To be a centaur rather than a reverse-centaur.

Not only will such skills make you less likely to lose your job to an AI system, they will also increase your value on the employment market: the harder these skills and knowledge are to acquire, the more valuable they are likely to be. But whereas in the past, in a more static market, merely passing your exams and learning to code might have been enough for an actuarial student, for instance, the dynamic situation we are in, where everything that can be written down disappears into prompts in some AI system, will leave such roles unprotected.

Instead it will be the knowledge about how people are likely to respond to what you say in a meeting or write in an email or report, and the skill to strategise around those things, knowing what to do when the rules run out, when situations are genuinely novel, ie putting yourself in someone else’s shoes and being prepared to make judgements. It will be the knowledge about what matters in a body of data, putting the pieces together in meaningful ways, and the skills to make that obvious to your audience. It will be the knowledge about what makes everyone in your team tick and the skills to use that knowledge to motivate them to do their best work. It will ultimately be about maintaining independent thought: the knowledge of why you are where you are and the skill to recognise what you can do for the people around you.

These have not always been seen as entry-level skills and knowledge for graduates, but they are increasingly going to need to be, as organisations pursuing a diamond strategy or something similar look to plug you in further up, if at all. And alongside all this you will need a continuing professional self-development programme on steroids: fully understanding the systems you are working with as quickly as possible, then understanding them all over again when they get updated; demanding evidence and transparency; and maintaining appropriate uncertainty when certainty would be more comfortable for the people around you, so that you can manage these systems into the areas where they can actually add value and out of the areas where they can cause devastation. It will be more challenging than transmitting the knowledge to build a temple out of wood and thatch 20 years into the future, and it will be continuous. Think of it as the Trigger’s Broom Process of Career Management if you like.

These will be essential roles for our economic future: to save these organisations from both themselves and their very expensive systems. It will be both enthralling and rewarding for those up to the challenge.

Wallace & Gromit: Vengeance Most Fowl models on display in Bristol. This file is licensed under the Creative Commons Attribution-Share Alike 4.0 International license.

I have been watching Daniel Susskind’s lectures on AI and the future of work this week: Automation Anxiety was delivered in September and The Economics of Work and Technology earlier this week. The next in the series, entitled Economics and Artificial Intelligence, is scheduled for 13 January. They are all free and I highly recommend them for the great range of source material they present.

In my view the most telling graph, which featured in both lectures, was this one:

Original Source: Daniel Susskind A World Without Work

Susskind extended the usual measure – the ratio of average college and university graduate salaries to those of school leavers – back to 1220, by splicing it onto the equivalent ratio of craftsmen’s wages to labourers’. There are two big collapses in this ratio in the data: one following the Black Death (1346-1353), which may have killed 50% of Europe’s 14th-century population, and one following the Industrial Revolution (a slow singularity which started around 1760 and then took us through the horrors of the First World War and the Great Depression before the graph finally picks up post-Bretton Woods).

As Susskind shows, the profits from the Industrial Revolution were not going to workers:

Source: The Technology Trap, Carl Benedikt Frey

So how does the AI Rush compare? Well, Susskind shared another graph:

Source: David Autor Work of the Past, Work of the future

This, from 2019, introduced the idea that the picture is now more complex than high-skilled and low-skilled workers: now there is a middle. And, as Autor has set out more recently, the middle is getting squeezed:

Key dynamics at play include:

  • Labor Share Decline: OECD data reveal a 3–5 percentage point drop in labor’s share of income in sectors most exposed to AI, a trend likely to accelerate as automation deepens.
  • Wage Polarization: The labor market is bifurcating. On one end, high-complexity “sense-making” roles; on the other, low-skill service jobs. The middle is squeezed, amplifying both political risk and regulatory scrutiny.
  • Productivity Paradox 2.0: Despite the promise of AI-driven efficiency, productivity gains remain elusive. The real challenge is not layering chatbots atop legacy processes, but re-architecting workflows from the ground up—a costly and complex endeavor.

For enterprise leaders, the implications are profound. AI is best understood not as a job destroyer, but as a “skill-lowering” platform. It enables internal labor arbitrage, shifting work toward judgment-intensive, context-rich tasks while automating the rest. The risk is not just technological—it is deeply human. Skill depreciation now sits alongside cyber and climate risk on the board agenda, demanding rigorous workforce-reskilling strategies and a keen eye on brand equity as a form of social license.

So, even if the overall number of jobs may not be reduced, the case being made is that the average skill level required to carry them out will be. As Susskind said, the Luddites may have been wrong about the spinning jenny replacing jobs, but it did replace and transform tasks, and its impact on workers was to reduce their pay, their quality of work, their status as craftsmen and their economic power. This looks like the threat being made by employers once again, with real UK wages still only at the level they were at in 2008:

However this is where I part company with Susskind’s presentation, which has an implicit inevitability to it. The message is that these are economic forces we can’t fight against. When he discusses whether the substituting force (where AI replaces you) or the complementing force (where AI helps you to be more productive and increases the demand for your work) will be greater, it is almost as if we have no part to play in it. There is some cognitive dissonance when he quotes Blake, Engels, Marx and Ruskin on the horrors of living through such times, but on the whole it is presented as just a natural historical process that all of the profits from the massive increases in productivity of the Industrial Revolution should have ended up in the pockets of the fat guys in waistcoats:

Richard Arkwright, Sir Robert Peel, John Wilkinson and Josiah Wedgwood

I was recently at Cragside in Northumberland, where the arms inventor and dealer William Armstrong used the immense amount of money he made from selling big guns (as well as big cranes and the hydraulic mechanism which powers Tower Bridge) to deck out his house and grounds with the five artificial lakes required to power the world’s first hydro-electric lighting system. His 300 staff ran around, like good reverse-centaurs, trying to keep his various inventions, from passenger lifts to an automated spit roast, from breaking down, so that he could impress the long list of guests and potential clients he brought to Cragside, from the Shah of Persia to the King of Siam and two future Prime Ministers of Japan. He made sure the staff were kept running around with a series of clock chimes throughout the day:

However, with some poetic irony, the “estate regulator” is what has since brought the entire mechanism crashing to a halt:

Which brings me to Wallace and Gromit. Wallace is the inventor, heedless of the impact of his inventions on those around him, and especially on his closest friend Gromit, whom he regularly dumps whenever the dog becomes inconvenient to his plans. Gromit just tries to keep everything working.

Wallace is a cheese-eating monster who cannot be assessed purely on the basis of his inventions. And neither can Armstrong, Arkwright, Peel, Wilkinson or Wedgwood. We are in the process of allowing a similar domination of our affairs by our new monsters:

Meta CEO Mark Zuckerberg beside Amazon CEO Jeff Bezos and his fiancée (now wife) Lauren, Google CEO Sundar Pichai and Elon Musk at President Trump’s 2nd Inauguration.

Around half an hour into his second lecture, Daniel Susskind started talking about pies. This is the GDP pie (Susskind has also written a recent book, Growth: A Reckoning, which argues that GDP growth can go on forever – my view would be closer to Steve Keen’s critique here), which, as Susskind says, increased by a factor of 113 in the UK between 1700 and 2000. But, as Keen says:

The statistics strongly support Jevons’ perspective that energy—and specifically, energy from coal—caused rising living standards in the UK (see Figure 2). Coal, and not a hypothesised change in culture, propelled the rise in living standards that Susskind attributes to intangible ideas.

Source: https://www.themintmagazine.com/growth-some-inconvenient-truths/
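It is worth pausing on that headline number. A sanity check, using nothing more than compound-interest arithmetic, shows that a 113-fold increase over three centuries corresponds to a surprisingly modest annual growth rate:

```python
# A 113-fold increase in UK GDP between 1700 and 2000 implies a
# compound annual growth rate (CAGR) of only about 1.6%.

factor = 113
years = 2000 - 1700  # 300 years
cagr = factor ** (1 / years) - 1
print(f"Implied growth rate: {cagr:.2%} per year")
```

Three hundred years of compounding does most of the work: small annual gains, sustained long enough, produce the enormous pie Susskind describes.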

Susskind talks about the productivity effect, the bigger pie effect and the changing pie effect (ie changes to the types of work we do – think of the changes in the CPI basket of goods and services) as ways in which jobs are created by technological change. However he has nothing to say about just giving less of the pie to the monsters. Instead, for Susskind, the AI Rush is all about clever people throwing 10 times as much money at AI as was directed at the Manhattan Project, and the heads of OpenAI, Anthropic and Google DeepMind stating that AI will replace humans in all economically useful tasks within 10 years, a claim which he says we should take seriously. Cory Doctorow, amongst others, disagrees. In his latest piece, When AI prophecy fails, he has this to say about why companies have reduced recruitment despite the underperformance of AI systems to date:

All this can feel improbable. Would bosses really fire workers on the promise of eventual AI replacements, leaving themselves with big bills for AI and falling revenues as the absence of those workers is felt?

The answer is a resounding yes. The AI industry has done such a good job of convincing bosses that AI can do their workers’ jobs that each boss for whom AI fails assumes that they’ve done something wrong. This is a familiar dynamic in con-jobs.

The Industrial Revolution had a distribution problem which gave birth to Chartism, Marxism, the Trades Union movement and the Labour Party in the UK alone. And all of that activity only very slowly chipped away at the wealth share of the top 10%:

Source: https://equalitytrust.org.uk/scale-economic-inequality-uk/

However the monsters of the Industrial Revolution did at least have solid proof that they could deliver what they promised. You don’t get a more concrete proof of concept than this, after all:

View on the Thames and the opening Tower Bridge, London, from the terraces at Wapping High Street, at sunset in July 2013, Bert Seghers. This file is made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication.

The AI Rush has a similar distribution problem, but it is also the first industrial revolution since the global finance industry decoupled from the global real economy. So the wealth share of the Top 10% isn’t going back up fast enough? No problem. Just redistribute the money at the top even further up:

What the monsters of the AI Rush lack is anything tangible to support their increasingly ambitious assertions. Wallace may be full of shit. And the rest of us can all just play a Gromit-like support role until we find out one way or the other or concentrate on what builds resilient communities instead.

Whether you think the claims for the potential of AI are exaggerated; or that the giant bet on it that the US stock market has made will end in an enormous depression; or that the energy demands of this developing technology will be its constraining force ultimately; or that we are all just making the world a colder place by prioritising systems, however capable, over people: take your pick as a reason to push back against the AI Rush. But my bet would be on the next 10 years not being dominated by breathless commentary on the exploits of Tech Bros.

The warehouse at the end of Raiders of the Lost Ark

In the year when I was born, Malvina Reynolds recorded a song called Little Boxes when she was a year younger than I am now. If you haven’t heard it before, you can listen to it here. You might want to listen to it while you read the rest of this.

I remember the first time I felt panic during the pandemic. It was a couple of months in, and we had been working very hard: putting our teaching processes online, consulting widely about appropriate remote assessments and getting agreement from the Institute and Faculty of Actuaries (IFoA) for our suggested approach at Leicester, checking in with our students, some of whom had become very isolated as a result of lockdowns, and a million other things. I was just sitting at my kitchen table and suddenly I felt tears welling up and found I was unable to speak without my voice breaking. It happened at intervals after that, usually during a quiet moment when I, consciously or unconsciously, had a moment to reflect on the enormity of what was going on. I could never point to anything specific that triggered it, but I do know that it has been a permanent change in me, and that my emotions have been very much closer to the surface ever since. I felt something similar again this morning.

What is going on? Well, I hadn’t been able to answer that satisfactorily until now. Recently I read an article by David Runciman in the LRB from nine years ago, when Donald Trump was first elected POTUS. I am not sure that everything in the article has withstood the test of time, but in it Runciman makes the case that Trump was the result of people wanting “Trump to shake up a system that they also expected to shield them from the recklessness of a man like Trump”. And this part looks prophetic:

[Trump is]…the bluntest of instruments, indiscriminately shaking the foundations with nothing to offer by way of support. Under these conditions, the likeliest response is for the grown-ups in the room to hunker down, waiting for the storm to pass. While they do, politics atrophies and necessary change is put off by the overriding imperative of avoiding systemic collapse. The understandable desire to keep the tanks off the streets and the cashpoints open gets in the way of tackling the long-term threats we face. Fake disruption followed by institutional paralysis, and all the while the real dangers continue to mount. Ultimately, that is how democracy ends.

And it suddenly hit me that this was something I had indeed taken for granted my whole life until the pandemic came along. The only thing that had ever looked like toppling society itself was the prospect of a nuclear war. Otherwise it seemed that our political system was hard to change and impossible to kill.

And then the pandemic came along and we saw government, national and local, digging mass graves and then filling them in again, and setting aside vast arenas for people to die in before quietly closing them again. Rationing of food and other essentials was left to the supermarkets to administer, as were the massive snaking socially-distanced queues around their car parks. Seemingly arbitrary sets of rules suddenly started appearing at intervals about how and when we were allowed to leave the house and what we were allowed to do when out, and also how many people we could have in our houses and where they were allowed to come from. Most businesses were shut and their employees put on the government’s payroll. We learned which of us were key workers and spent a lot of time worrying about how we could protect the NHS, which we clapped for every Thursday. It was hard to maintain the illusion that society still provided solid ground under our feet, particularly for those whose jobs could not be moved online. Whoever you were, you had to look down at some point, and I think now that I was having my Wile E. Coyote moment.

The trouble is, once you have looked down, it is hard to put that back in a box. At least I thought so, although there seems to have been a lot of putting things in boxes going on over the last few years. The UK Covid-19 Inquiry has made itself available online via a YouTube channel, but you might have thought a Today at the Inquiry slot on terrestrial TV would have been more appropriate, rather than coverage only when famous people are attending. What we do know is that Patrick Vallance, Chief Scientific Adviser throughout the pandemic, has said that another pandemic is “absolutely inevitable” and that “we are not ready yet” for such an eventuality. Instead we have been busily shutting that particular box.

The biggest box of course is climate change. We have created a really big box for that called the IPCC. As the climate conferences migrate to ever more unapologetic petro-states, protestors are criminalised and imprisoned and emissions continue to rise, the box for this is doing a lot of work.

And then there are all the NHS boxes. As Roy Lilley notes:

If inquiries worked, we’d have the safest healthcare system in the world. Instead, we have a system addicted to investigating itself and forgetting the answers.

But perhaps the days of the box are numbered. The box Keir Starmer constructed to contain the anger about grooming gangs, which the previous seven-year-long box had been unable to completely envelop, also now appears to be on the edge of collapse. And the Prime Minister himself was the one expressing outrage when a perfectly normal British box, versions of which had been giving authority to policing decisions since at least the Local Government (Review of Decisions) Act 2015 (although the original push to develop such systems stemmed from the Hillsborough and Heysel disasters in 1989 and 1985 respectively), suddenly didn’t make the decision he was obviously expecting. That box now appears to be heading for recycling if Reform UK come to power – which is, of course, rather difficult to do in Birmingham at the moment.

But what is the alternative to the boxes? At the moment it does not look like it involves confronting our problems any more directly. As Runciman reflected on the second Trump inauguration:

Poor Obama had to sit there on Monday and witness the mistaking of absolutism for principle and spectacle for politics. I don’t think Trump mistakes them – he doesn’t care enough to mind what passes for what. But the people in the audience who got up and applauded throughout his speech – as Biden and Harris and the Clintons and the Bushes remained glumly in their seats – have mistaken them. They think they will reap the rewards of what follows. But they will also pay the price.

David Allen Green’s recent post on BlueSky appears to summarise our position relative to that of the United States very well:

To Generation Z: a message of support from a Boomer

So you’ve worked your way through school and now university, developing the skills you were told would always be in high demand, credentialising yourself as a protection against the vagaries of the global economy. You may have serious doubts about ever being able to afford a house of your own, particularly if your area of work is very concentrated in London…

…and you resent the additional tax that your generation pays to support higher education:

Source: https://taxpolicy.org.uk/2023/09/24/70percent/

But you still had belief in being able to operate successfully within the graduate market.

A rational, functional graduate job market would assess your skills and competencies against the desired attributes of those currently performing the role and make selections accordingly. That is a system both companies and graduates can plan for.

It is very different from a Rush. The first phenomenon known as a Rush was the Californian Gold Rush of 1848-55, although the capitalist phenomenon of transforming an area to facilitate intensive production probably dates from sugar production in Madeira in the 15th century. There have been many since, all neatly described by this Punch cartoon from 1849:

A Rush is a big deal. The Californian Gold Rush resulted in the creation of California, now the 5th largest economy in the world. But when it comes to employment, a Rush is not like an orderly jobs market. As Carlo Iacono describes in an excellent article on the characteristics of the current AI Rush:

The railway mania of the 1840s bankrupted thousands of investors and destroyed hundreds of companies. It also left Britain with a national rail network that powered a century of industrial dominance. The fibre-optic boom of the late 1990s wiped out about $5 trillion in market value across the broader dot-com crash. It also wired the world for the internet age.

A Rush is a difficult and unpredictable place to build a career, with a lot riding on dumb luck as much as on any personal characteristics you might have. There is very little you can count on in a Rush. This one is even less predictable because, as Carlo also points out:

When the railway bubble burst in the 1840s, the steel tracks remained. When the fibre-optic bubble burst in 2001, the “dark fibre” buried in the ground was still there, ready to carry traffic for decades. These crashes were painful, but they left behind durable infrastructure that society could repurpose.

Whereas the 40–60% of US real GDP growth in the first half of 2025 explained by investment in AI infrastructure isn’t like that:

The core assets are GPUs with short economic half-lives: in practice, they’re depreciated over ~3–5 years, and architectures are turning over faster (Hopper to Blackwell in roughly two years). Data centres filled with current-generation chips aren’t valuable, salvageable infrastructure when the bubble bursts. They’re warehouses full of rapidly depreciating silicon.
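The depreciation point can be made concrete with a toy straight-line calculation. This is a minimal sketch only: the fleet cost is an illustrative assumption, and the useful lives come from the ~3-5 year range in the quote above.

```python
# Hedged sketch: straight-line depreciation of an AI data-centre GPU fleet.
# The $1bn fleet cost is an assumed, illustrative figure; the 3- and 5-year
# useful lives reflect the range quoted in the text.

def book_value(cost: float, age_years: float, useful_life_years: float) -> float:
    """Remaining book value under straight-line depreciation, floored at zero."""
    remaining = cost * (1 - age_years / useful_life_years)
    return max(0.0, remaining)

fleet_cost = 1_000_000_000  # $1bn of current-generation accelerators (assumed)

for life in (3, 5):
    for age in (1, 2, 3):
        value = book_value(fleet_cost, age, life)
        print(f"life={life}y, age={age}y -> ${value / 1e9:.2f}bn remaining")
```

On the 3-year schedule the fleet is worthless by year three, which is the contrast with rail tracks and dark fibre: the "infrastructure" left behind after a crash would carry little residual value.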

So today’s graduates are certainly going to need resilience, but then that is just what their future employers are demanding of them anyway. They also need to build their own support structures, which are going to see them through the massive disruption that is coming whether or not the enormous bet on AI is successful. The battle to be centaurs, rather than reverse-centaurs, as I set out in my last post (or as Carlo Iacono describes beautifully in his discussion of the legacy of the Luddites here), requires these alliances: stop thinking of yourselves as being in competition with each other, and start thinking of yourselves as being in competition for resources with my generation.

I remember when I first realised my generation (late Boomer, just before Generation X) was now making the weather. I had just sat a 304 Pensions and Other Benefits actuarial exam in London (now SP4 – unsuccessfully as it turned out), and nipped into a matinee of Sam Mendes’ American Beauty and watched the plastic bag scene. I was 37 at the time.

My feeling is that, despite our increasingly strident efforts to resist it, our generation is now deservedly losing power and is trying to hang on by making reverse centaurs of your generation as a last-ditch attempt to remain in control. It is like the scene in another movie, Triangle of Sadness, where the elite are swept onto a desert island and expect the servant who is the only one with survival skills in such an environment to carry on being their servant.

Don’t fall for it. My advice to young professionals is pretty much the same as it was to actuarial students last year on the launch of chartered actuary status:

If you are planning to join a profession to make a positive difference in the world, and that is in my view the best reason to do so, then you are going to have to shake a few things up along the way.

Perhaps there is a type of business you think the world is crying out for but it doesn’t know it yet because it doesn’t exist. Start one.

Perhaps there is an obvious skill set to run alongside your professional one which most of your fellow professionals haven’t realised would turbo-charge the effectiveness of both. Acquire it.

Perhaps your company has a client no one has taken the time to really understand: to put themselves in the client’s shoes and communicate in a way they will properly understand and value. Be that person.

Or perhaps there are existing businesses that are struggling to find their way in changing markets and need someone who can make sense of the data which is telling them this. Be that person.

All while remaining grounded in whichever community you have chosen for yourself. Be the member of your organisation or community who makes it better by being there.

None of these are reverse centaur positions. Don’t settle for anything less. This is your time.

In 2017, I was rather excitedly reporting about ideas which were new to me at the time regarding how technology or, as Richard and Daniel Susskind referred to it in The Future of the Professions, “increasingly capable machines” were going to affect professional work. I concluded that piece as follows:

The actuarial profession and the higher education sector therefore need each other. We need to develop actuaries of the future coming into your firms to have:

  • great team working skills
  • highly developed presentation skills, both in writing and in speech
  • strong IT skills
  • clarity about why they are there and the desire to use their skills to solve problems

All within a system which is possible to regulate in a meaningful way. Developing such people for the actuarial profession will need to be a priority in the next few years.

While all of those things are clearly still needed, it is becoming increasingly clear to me now that they will not be enough to secure a job as industry leaders double down.

Source: https://www.ft.com/content/99b6acb7-a079-4f57-a7bd-8317c1fbb728

And perhaps even worse than the threat of not getting a job immediately following graduation is the threat of becoming a reverse-centaur. As Cory Doctorow explains the term:

A centaur is a human being who is assisted by a machine that does some onerous task (like transcribing 40 hours of podcasts). A reverse-centaur is a machine that is assisted by a human being, who is expected to work at the machine’s pace.

We have known about reverse-centaurs since at least Charlie Chaplin’s Modern Times in 1936.

By Charlie Chaplin – YouTube, Public Domain, https://commons.wikimedia.org/w/index.php?curid=68516472

Think Amazon driver or worker in a fulfilment centre, sure, but now also think of highly competitive and well-paid but still ultimately human-in-the-loop roles responsible for AI systems designed to produce output where errors are hard to spot and therefore to stop. In the latter role you are the human scapegoat: in the phrasing of Dan Davies an “accountability sink”, or in that of Madeleine Clare Elish a “moral crumple zone”, all rolled into one. This is not where you want to be as an early career professional.

So how to avoid this outcome? Well obviously if you have other options to roles where a reverse-centaur situation is unavoidable you should take them. Questions to ask at interview to identify whether the role is irretrievably reverse-centauresque would be of the following sort:

  1. How big a team would I be working in? (This might not identify a reverse-centaur role on its own: you might be one of a bank of reverse-centaurs all working in parallel and identified “as a team” while in reality having little interaction with each other).
  2. What would a typical day be in the role? This should smoke it out unless the smokescreen they put up obscures it. If you don’t understand the first answer, follow up to get specifics.
  3. Who would I report to? Get to meet them if possible. Establish whether they are a technical expert in the field you will be working in. If they aren’t, that means you are!
  4. Speak to someone who has previously held the role if possible. Although bear in mind that, if it is a true reverse-centaur role and their progress to an actual centaur role is contingent on you taking this one, they may not be completely forthcoming about all of the details.

If you have been successful in a highly competitive recruitment process, you may have a little bit of leverage before you sign the contract, so if there are aspects which you think still need clarifying, then that is the time to do so. If you recognise some reverse-centauresque elements from your questioning above, but you think the company may be amenable, then negotiate. Once you are in, you will understand a lot more about the nature of the role of course, but without threatening to leave (which is as damaging to you as an early career professional as it is to them) you may have limited negotiation options at that stage.

In order to do this successfully, self-knowledge will be key. It is that point from 2017:

  • clarity about why they are there and the desire to use their skills to solve problems

To that word “skills” I would now add “capabilities” in the sense used in a wonderful essay on this subject by Carlo Iacono called Teach Judgement, Not Prompts.

You still need the skills. So, for example, if you are going into roles where AI systems are producing code, you need coding skills good enough to write a program to check the code written by the AI system. If the AI system is producing communications, your own communication skills need to go beyond producing work that communicates effectively to an audience, to the next level where you understand what it is about your own communication that achieves that: what is necessary, what is unnecessary, and what gets in the way of effective communication, ie all of the things that the AI system is likely to get wrong. Then you have a template against which to assess the output from an AI system, and for designing better prompts.
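As a toy illustration of what a program to check AI-written code might look like, here is one possible shape: testing the output against properties you know a correct solution must satisfy, rather than trusting it by inspection. The function names and the sorting task are hypothetical examples, not anything from the text:

```python
# Toy illustration (hypothetical names): checking AI-produced code against
# properties a correct solution must satisfy, instead of eyeballing it.

def ai_sort(xs):
    # Stand-in for a function an AI system has written for you.
    return sorted(xs)

def check_sort(fn, cases):
    """Check a candidate sorting function against two necessary properties."""
    for xs in cases:
        out = fn(list(xs))
        # Property 1: the output is a permutation of the input.
        assert sorted(out) == sorted(xs), "elements were lost or invented"
        # Property 2: the output is in non-decreasing order.
        assert all(a <= b for a, b in zip(out, out[1:])), "output is not ordered"
    return True

print(check_sort(ai_sort, [[], [3, 1, 2], [5, 5, 1, -2]]))  # prints True
```

The point of the exercise is the template: you have to understand what correctness means for the task well enough to state it as checkable properties, which is exactly the skill the paragraph above is describing.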

However, specific skills and tools come and go, so you need to develop something more durable alongside them. Carlo has set out four “capabilities” as follows:

  1. Epistemic rigour, which is being very disciplined about challenging what we actually know in any given situation. You need to be able to spot when AI output is over-confident given the evidence, or when a correlation is presented as causation. What my tutors used to refer to as “hand waving”.
  2. Synthesis is about integrating different perspectives into an overall understanding. Making connections between seemingly unrelated areas is something AI systems are generally less good at than analysis.
  3. Judgement is knowing what to do in a new situation, beyond obvious precedent. You get to develop judgement by making decisions under uncertainty, receiving feedback, and refining your internal models.
  4. Cognitive sovereignty is all about maintaining your independence of thought when considering AI-generated content. Knowing when to accept AI outputs and when not to.

All of these capabilities can be developed with reflective practice, getting feedback and refining your approach. As Carlo says:

These capabilities don’t just help someone work with AI. They make someone worth augmenting in the first place.

In other words, if you can demonstrate these capabilities, companies who are themselves dealing with huge uncertainty about how much value they are getting from their AI systems and what they can safely be used for will find you an attractive and reassuring hire. Then you will be the centaur, using the increasingly capable systems to improve your own and their productivity while remaining in overall control of the process, rather than a reverse-centaur, for whom none of that is true.

One sure sign that you are straying into reverse-centaur territory is when a disproportionate amount of your time is spent on pattern recognition (eg basing an email/piece of coding/valuation report on an earlier email/piece of coding/valuation report dealing with a similar problem). That approach was always predicated on being able to interact, at some peer review stage, with a more experienced human who understood what was involved in the task. But it falls apart when there is no human to discuss the earlier piece of work with, because the human no longer works there, or because a human didn’t produce the earlier piece of work. The fake-it-until-you-make-it approach is not going to work in environments like these, where you are more likely to fake it until you break it. And pattern recognition is something an AI system will always be able to do much better and faster than you.

Instead, question everything using the capabilities you have developed. If you are going to be put into potentially compromising situations in terms of the responsibilities you are implicitly taking on, the decisions needing to be made and the limitations of the available knowledge and assumptions on which those decisions will need to be based, then this needs to be made explicit, to yourself and the people you are working with. Clarity will help the company which is trying to use these new tools in a responsible way as much as it helps you. Learning is going to be happening for them as much as it is for you here in this new landscape.

And if the company doesn’t want to have these discussions or allow you to hamper the “efficiency” of their processes by trying to regulate them effectively? Then you should leave as soon as you professionally can, and certainly before you become their moral crumple zone. No job is worth the loss of your professional reputation at the start of your career – these are the risks companies used to protect their senior people of the future from, and companies that are not doing this are clearly not thinking about the future at all. Which is likely to mean that they won’t have one.

To return to Cory Doctorow:

Science fiction’s superpower isn’t thinking up new technologies – it’s thinking up new social arrangements for technology. What the gadget does is nowhere near as important as who the gadget does it for and who it does it to.

You are going to have to be the generation who works these things out first for these new AI tools. And you will be reshaping the industrial landscape for future generations by doing so.

And the job of the university and further education sectors will increasingly be to equip you with both the skills and the capabilities to manage this process, whatever your course title.