The rear view mirror isn’t going to help us any more Source: Wikimedia Commons: Shattered right-hand side mirror on a 5-series BMW in Durham, North Carolina by Ildar Sagdejev

I would like to start this week’s post with a quote from Carlo Iacono, from a Substack piece he did a couple of weeks ago called The Questions Nobody Is Funding:

What is a human being for? What do we owe the future? What remains worth the difficulty of learning?

These are not questions you will find in the OECD’s AI Literacy Framework. They are not addressed in the World Economic Forum’s Education 4.0 agenda. They do not appear in the competency matrices cascading through national education systems. Instead, we get learning objectives and assessment criteria. Employability outcomes and digital capabilities. The language of preparation, as if the future were already decided and our job were simply to ready people for it.

I think this articulates well the central challenge of AI for education. Whether you think this is the beginning of a future where augmented humans move into a different type of existence from any we have known before; or you believe very little will be left behind in the rubble when the AI bubble inevitably bursts and is, at least temporarily, forgotten in the most devastating stock market crash and depression for a century; or you hold both these beliefs at the same time; or you are somewhere in between, it is difficult to see how the orderly world of competency matrices, learning objectives, assessment criteria, employability outcomes and digital capabilities can easily survive the period of technological, cultural, economic and political disruption which we appear to have entered. Looking in the rear view mirror and trying to extrapolate what you see into the future is not going to work for us any more.

Whether you think, like Cory Doctorow, in his recent speech at the University of Washington called The Reverse Centaur’s Guide to Criticizing AI, that:

AI is the asbestos in the walls of our technological society, stuffed there with wild abandon by a finance sector and tech monopolists run amok. We will be excavating it for a generation or more.

Or you think, as Henry Farrell has suggested in another article called Large Language Models As The Tales That Are Sung:

Technologies such as LLMs are neither going to transcend humanity as the holdouts on one side still hope, nor disappear, as other holdouts might like. We’re going to have to figure out ways to talk about them better and more clearly.

We are certainly going to have to figure out ways to talk about LLMs and other forms of AI more clearly, so that the decisions we need to make about how to accommodate them into society can be made with the maximum level of participation and consensus. And this seems to me to be the key with respect to education too. We do need people graduating from our education system understanding clearly what LLMs can and cannot do, which is a tricky path to navigate at the moment when a lot of money is being concentrated on persuading you that they can do pretty much anything. One example here is a “writers’ room” of four LLMs, asked to critique each other by pushing the output from one into the prompts for the others, reminiscent of The Human Centipede. Which immediately reminded me of this take from later in that Cory Doctorow speech:

And I’ll never forget when one writer turned to me and said, “You know, you prompt an LLM exactly the same way an exec gives shitty notes to a writers’ room. You know: ‘Make me ET, except it’s about a dog, and put a love interest in there, and a car chase in the second act.’ The difference is, you say that to a writers’ room and they all make fun of you and call you a fucking idiot suit. But you say it to an LLM and it will cheerfully shit out a terrible script that conforms exactly to that spec (you know, Air Bud).”

So, back to Carlo’s little questions:

What is a human being for?

A lofty question certainly, and not one I am going to tackle in a blog post. But perhaps I can say a bit about what a human being is not for. This is the key to Henry Farrell’s piece which is his take on the humanist critique of AI. We are presumably primarily designing the future for humans. All humans. Not just Tech Bros. And the design needs to bear that in mind. For example, a human being is not, in my opinion, for this (from the Cory Doctorow link):

Like an Amazon delivery driver, who sits in a cabin surrounded by AI cameras, that monitor the driver’s eyes and take points off if the driver looks in a proscribed direction, and monitors the driver’s mouth because singing isn’t allowed on the job, and rats the driver out to the boss if they don’t make quota.

The driver is in that van because the van can’t drive itself and can’t get a parcel from the curb to your porch. The driver is a peripheral for a van, and the van drives the driver, at superhuman speed, demanding superhuman endurance. But the driver is human, so the van doesn’t just use the driver. The van uses the driver up.

The first task of the education establishment, I think, is to attempt to protect the graduate from becoming the reverse-centaur described above, whether a delivery driver, a coder (where, additionally, the human-in-the-loop becomes the accountability sink for everything the AI gets wrong) or a radiologist. This will often be resisted by the employers whose needs you, as educators, are currently very sensitive to (many of whom are senior enough to get to use the new technologies as centaurs rather than be used by them as reverse-centaurs, tend to struggle to put themselves in anyone else’s shoes and, frankly, can’t see what all the fuss is about). But remember: the cosy world of employability outcomes is over. The employers are not sticking to their side of the implicit agreement to employ your graduates if you delivered the outcomes, and therefore neither should you. Your responsibility in education is to the students, not to their potential future employers, now that their interests no longer appear to be aligned.

What do we owe the future?

This depends on what you mean by “the future” of course. If it is some technological dystopia of diminished opportunities for most (even for making friends as seemingly envisioned by some of the top Tech Bros), then nothing at all. But if it is the future which is going to support your children and their children, you obviously owe it a lot. But what do you owe it? What is owed is often converted into money by the political right, and used to justify not running up public debt in the present so as not to “impoverish” future generations. What that approach generally achieves is to impoverish both the current and future generations.

But if you think of owing resources, institutions and infrastructure to the next generation, then that is a responsibility that we should take seriously. And part of that is to produce an educated generation with tools, systems, institutions and infrastructure. The education institutions must take steps to make sure they survive in a relevant way, embedded in systems which support individuals and proselytising the value of education for all. They must ensure that their graduates understand and have facility with the essential tools they will need, and have developed the ability to learn new skills as they need them, and realise when that is. This is about developing individuals who leave no longer dependent on the institutions, able to work things out for themselves rather than requiring never-ending education inside an institution.

What remains worth the difficulty of learning?

The skills already mentioned will be the core ones for everyone, and these will need to be hammered out in terms everyone can understand. But in the world of post-scarcity education, which is here but which we have not yet fully embraced, the rest will be up to us. A large part of the education of the future will need to be about equipping us all to understand what we now have access to, and when and how to access it. We will all have different things we are interested in, or end up involved with and needing to be educated about. It will be up to each of us to decide which things are worth the difficulty of learning, but to make those decisions we will need education that can support the development of judgement.

For education institutions, the question will be what is not worth the difficulty of learning? Credentialising based on now relatively meaningless assessment methods will not cut it. This is where the confrontation with employers and politicians is likely to come. Essential skills and their related knowledge will be better developed and assessed via more open-ended project work and online assessment of it to check understanding. These will need to become the norm, with written examinations becoming less and less prevalent. Not because of fear of cheating and plagiarism, but because an outcome which can be replicated that easily by AI is not worth assessing in the first place.

As William Gibson apparently said at some point in 1992:

“The future has arrived — it’s just not evenly distributed yet.”

The future of education will be the distribution problem.

To Generation Z: a message of support from a Boomer

So you’ve worked your way through school and now university, developing the skills you were told would always be in high demand, credentialising yourself as a protection against the vagaries of the global economy. You may have serious doubts about ever being able to afford a house of your own, particularly if your area of work is very concentrated in London…

…and you resent the additional tax that your generation pays to support higher education:

Source: https://taxpolicy.org.uk/2023/09/24/70percent/

But you still had belief in being able to operate successfully within the graduate market.

A rational, functional graduate job market should be assessing your skills and competencies against the desired attributes of those currently performing the role, and making selections accordingly. That is a system both the companies and the graduates can plan for.

It is very different from a Rush. The first phenomenon known as a Rush was the Californian Gold Rush of 1848–55, although the capitalist phenomenon of transforming an area to facilitate intensive production probably dates from sugar production in Madeira in the 15th century. There have been many since, all of them neatly described by this Punch cartoon from 1849:

A Rush is a big deal. The Californian Gold Rush resulted in the creation of California, now the 5th largest economy in the world. But when it comes to employment, a Rush is not like an orderly jobs market. As Carlo Iacono describes, in an excellent article on the characteristics of the current AI Rush:

The railway mania of the 1840s bankrupted thousands of investors and destroyed hundreds of companies. It also left Britain with a national rail network that powered a century of industrial dominance. The fibre-optic boom of the late 1990s wiped out about $5 trillion in market value across the broader dot-com crash. It also wired the world for the internet age.

A Rush is a difficult and unpredictable place to build a career, with a lot riding on dumb luck as much as any personal characteristics you might have. There is very little you can count on in a Rush. This one is even less predictable because as Carlo also points out:

When the railway bubble burst in the 1840s, the steel tracks remained. When the fibre-optic bubble burst in 2001, the “dark fibre” buried in the ground was still there, ready to carry traffic for decades. These crashes were painful, but they left behind durable infrastructure that society could repurpose.

Whereas the 40–60% of US real GDP growth in the first half of 2025 explained by investment in AI infrastructure isn’t like that:

The core assets are GPUs with short economic half-lives: in practice, they’re depreciated over ~3–5 years, and architectures are turning over faster (Hopper to Blackwell in roughly two years). Data centres filled with current-generation chips aren’t valuable, salvageable infrastructure when the bubble bursts. They’re warehouses full of rapidly depreciating silicon.

So today’s graduates are certainly going to need resilience, but resilience is just what their future employers are demanding of them anyway. They also need to build their own support structures, which are going to see them through the massive disruption which is coming whether or not the enormous bet on AI is successful. The battle to be centaurs, rather than reverse-centaurs, as I set out in my last post (or as Carlo Iacono describes beautifully in his discussion of the legacy of the Luddites here), requires these alliances. It requires you to stop thinking of yourselves as being in competition with each other and start thinking of yourselves as being in competition for resources with my generation.

I remember when I first realised my generation (late Boomer, just before Generation X) was now making the weather. I had just sat a 304 Pensions and Other Benefits actuarial exam in London (now SP4 – unsuccessfully as it turned out), and nipped in to a matinee of Sam Mendes’ American Beauty and watched the plastic bag scene. I was 37 at the time.

My feeling is that, despite our increasingly strident efforts to avoid it, our generation is now deservedly losing power, and is trying to hang on by making reverse-centaurs of your generation in a last-ditch attempt to remain in control. It is like the scene in another movie, Triangle of Sadness, where the elite are swept onto a desert island and expect the servant who is the only one with survival skills in such an environment to carry on being their servant.

Don’t fall for it. My advice to young professionals is pretty much the same as it was to actuarial students last year on the launch of chartered actuary status:

If you are planning to join a profession to make a positive difference in the world, and that is in my view the best reason to do so, then you are going to have to shake a few things up along the way.

Perhaps there is a type of business you think the world is crying out for but it doesn’t know it yet because it doesn’t exist. Start one.

Perhaps there is an obvious skill set to run alongside your professional one which most of your fellow professionals haven’t realised would turbo-charge the effectiveness of both. Acquire it.

Perhaps your company has a client no one has taken the time to understand: to put themselves in that client’s shoes and communicate in a way they will properly understand and value. Be that person.

Or perhaps there are existing businesses who are struggling to manage their way in changing markets and need someone who can make sense of the data which is telling them this. Be that person.

All while remaining grounded in whichever community you have chosen for yourself. Be the member of your organisation or community who makes it better by being there.

None of these are reverse centaur positions. Don’t settle for anything less. This is your time.

In 2017, I was rather excitedly reporting about ideas which were new to me at the time regarding how technology or, as Richard and Daniel Susskind referred to it in The Future of the Professions, “increasingly capable machines” were going to affect professional work. I concluded that piece as follows:

The actuarial profession and the higher education sector therefore need each other. We need to develop actuaries of the future coming into your firms to have:

  • great team working skills
  • highly developed presentation skills, both in writing and in speech
  • strong IT skills
  • clarity about why they are there and the desire to use their skills to solve problems

All within a system which is possible to regulate in a meaningful way. Developing such people for the actuarial profession will need to be a priority in the next few years.

While all of those things are clearly still needed, it is becoming increasingly clear to me now that they will not be enough to secure a job as industry leaders double down.

Source: https://www.ft.com/content/99b6acb7-a079-4f57-a7bd-8317c1fbb728

And perhaps even worse than the threat of not getting a job immediately following graduation is the threat of becoming a reverse-centaur. As Cory Doctorow explains the term:

A centaur is a human being who is assisted by a machine that does some onerous task (like transcribing 40 hours of podcasts). A reverse-centaur is a machine that is assisted by a human being, who is expected to work at the machine’s pace.

We have known about reverse-centaurs since at least Charlie Chaplin’s Modern Times in 1936.

By Charlie Chaplin – YouTube, Public Domain, https://commons.wikimedia.org/w/index.php?curid=68516472

Think of an Amazon driver or a worker in a fulfilment centre, sure, but now also think of highly competitive and well-paid but still ultimately human-in-the-loop roles, responsible for AI systems designed to produce output whose errors are hard to spot and therefore hard to stop. In such a role you are the human scapegoat: in the phrasing of Dan Davies an “accountability sink”, or in that of Madeleine Clare Elish a “moral crumple zone”, all rolled into one. This is not where you want to be as an early career professional.

So how to avoid this outcome? Well, obviously, if you have options other than roles where a reverse-centaur situation is unavoidable, you should take them. Questions to ask at interview to identify whether a role is irretrievably reverse-centauresque would be of the following sort:

  1. How big a team would I be working in? (This might not identify a reverse-centaur role on its own: you might be one of a bank of reverse-centaurs all working in parallel and identified “as a team” while in reality having little interaction with each other).
  2. What would a typical day be in the role? This should smoke it out unless the smokescreen they put up obscures it. If you don’t understand the first answer, follow up to get specifics.
  3. Who would I report to? Get to meet them if possible. Establish whether they are a technical expert in the field you will be working in. If they aren’t, that means you are!
  4. Speak to someone who has previously held the role if possible. Although bear in mind that, if it is a true reverse-centaur role and their progress to an actual centaur role is contingent on you taking this one, they may not be completely forthcoming about all of the details.

If you have been successful in a highly competitive recruitment process, you may have a little bit of leverage before you sign the contract, so if there are aspects which you think still need clarifying, then that is the time to do so. If you recognise some reverse-centauresque elements from your questioning above, but you think the company may be amenable, then negotiate. Once you are in, you will understand a lot more about the nature of the role of course, but without threatening to leave (which is as damaging to you as an early career professional as it is to them) you may have limited negotiation options at that stage.

In order to do this successfully, self knowledge will be key. It is that point from 2017:

  • clarity about why they are there and the desire to use their skills to solve problems

To that word skills I would now add “capabilities” in the sense used in a wonderful essay on this subject by Carlo Iacono called Teach Judgement, Not Prompts.

You still need the skills. So, for example, if you are going into roles where AI systems are producing code, you need sufficiently good coding skills yourself to write a program, or a suite of tests, to check code written by the AI system. If the AI system is producing communications, your own communication skills need to go beyond producing work that communicates effectively to an audience, to the next level where you understand what it is about your own communication that achieves that: what is necessary, what is unnecessary, and what gets in the way of effective communication, i.e. all of the things that the AI system is likely to get wrong. Then you have a template against which to assess the output from an AI system, and for designing better prompts.
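As a minimal sketch of what checking AI-written code might look like in practice (the function `merge_sorted` here is a hypothetical stand-in for AI-produced code, not from any particular system), you could test its output against a trusted, if slower, reference on many random inputs:

```python
import random

# Hypothetical example of AI-generated code we have been asked to verify:
# merge two already-sorted lists into one sorted list.
def merge_sorted(a, b):
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            out.append(a[i])
            i += 1
        else:
            out.append(b[j])
            j += 1
    return out + a[i:] + b[j:]

# Our own independent check: compare against Python's built-in sorted(),
# which we trust, across many randomly generated inputs.
def check_merge(trials=1000):
    for _ in range(trials):
        a = sorted(random.choices(range(100), k=random.randint(0, 20)))
        b = sorted(random.choices(range(100), k=random.randint(0, 20)))
        assert merge_sorted(a, b) == sorted(a + b), (a, b)
    return True

check_merge()  # raises AssertionError if the AI's code is wrong
```

The point is not this particular function but the habit: you supply the specification and the independent check, so that the AI’s output is assessed against your understanding rather than taken on trust.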

However, specific skills and tools come and go, so you need to develop something more durable alongside them. Carlo has set out four “capabilities” as follows:

  1. Epistemic rigour, which is being very disciplined about challenging what we actually know in any given situation. You need to be able to spot when AI output is over-confident given the evidence, or when a correlation is presented as causation. What my tutors used to refer to as “hand waving”.
  2. Synthesis is about integrating different perspectives into an overall understanding. Making connections between seemingly unrelated areas is something AI systems are generally less good at than analysis.
  3. Judgement is knowing what to do in a new situation, beyond obvious precedent. You get to develop judgement by making decisions under uncertainty, receiving feedback, and refining your internal models.
  4. Cognitive sovereignty is all about maintaining your independence of thought when considering AI-generated content. Knowing when to accept AI outputs and when not to.

All of these capabilities can be developed with reflective practice, getting feedback and refining your approach. As Carlo says:

These capabilities don’t just help someone work with AI. They make someone worth augmenting in the first place.

In other words, if you can demonstrate these capabilities, companies which are themselves dealing with huge uncertainty about how much value they are getting from their AI systems, and what those systems can safely be used for, will find you an attractive and reassuring hire. Then you will be the centaur, using the increasingly capable systems to improve your own and their productivity while remaining in overall control of the process, rather than a reverse-centaur, for whom none of that is true.

One sure sign that you are straying into reverse-centaur territory is when a disproportionate amount of your time is spent on pattern recognition (e.g. basing an email, piece of code or valuation report on an earlier one dealing with a similar problem). That approach was always predicated on being able to interact, at some peer review stage, with a more experienced human who understood what was involved in the task. But it falls apart when there is no human to discuss the earlier piece of work with, because the human no longer works there, or because a human didn’t produce the earlier piece of work. The fake-it-until-you-make-it approach is not going to work in environments like these, where you are more likely to fake it until you break it. And pattern recognition is something an AI system will always be able to do much better and faster than you.

Instead, question everything using the capabilities you have developed. If you are going to be put into potentially compromising situations in terms of the responsibilities you are implicitly taking on, the decisions needing to be made and the limitations of the available knowledge and assumptions on which those decisions will need to be based, then this needs to be made explicit, both to yourself and to the people you are working with. Clarity will help a company which is trying to use these new tools in a responsible way as much as it helps you. In this new landscape, learning is going to be happening for them as much as it is for you.

And if the company doesn’t want to have these discussions, or won’t allow you to hamper the “efficiency” of their processes by trying to regulate them effectively? Then you should leave as soon as you professionally can, and certainly before you become their moral crumple zone. No job is worth the loss of your professional reputation at the start of your career – these are the risks companies used to protect their senior people of the future from, and companies that are not doing this are clearly not thinking about the future at all. Which is likely to mean that they won’t have one.

To return to Cory Doctorow:

Science fiction’s superpower isn’t thinking up new technologies – it’s thinking up new social arrangements for technology. What the gadget does is nowhere near as important as who the gadget does it for and who it does it to.

You are going to have to be the generation who works these things out first for these new AI tools. And you will be reshaping the industrial landscape for future generations by doing so.

And the job of the university and further education sectors will increasingly be to equip you with both the skills and the capabilities to manage this process, whatever your course title.

Title page vignette of Hard Times by Charles Dickens. Thomas Gradgrind Apprehends His Children Louisa and Tom at the Circus, 1870

It was Fredric Jameson (according to Owen Hatherley in the New Statesman) who first said:

“It seems to be easier for us today to imagine the thoroughgoing deterioration of the earth and of nature than the breakdown of late capitalism.”

I was reminded of this by my reading this week.

It all started when I began watching Shifty, Adam Curtis’ latest set of films on iPlayer, which aims to convey a sense of shifting power structures and where they might lead. Alongside the startling revelation that The Land of Make Believe by Bucks Fizz was written as an anti-Thatcher protest song, there was a short clip of Eric Hobsbawm talking about all of the words which needed to be invented in the late 18th and early 19th centuries to allow people to discuss the rise of capitalism and its implications. So I picked up a copy of his The Age of Revolution 1789–1848 to look into this a little further.

The first chapter of Hobsbawm’s introduction from 1962, the year of my birth, expanded on the list:

Words are witnesses which often speak louder than documents. Let us consider a few English words which were invented, or gained their modern meanings, substantially in the period of sixty years with which this volume deals. They are such words as ‘industry’, ‘industrialist’, ‘factory’, ‘middle class’, ‘working class’, ‘capitalism’ and ‘socialism’. They include ‘aristocracy’ as well as ‘railway’, ‘liberal’ and ‘conservative’ as political terms, ‘nationality’, ‘scientist’ and ‘engineer’, ‘proletariat’ and (economic) ‘crisis’. ‘Utilitarian’ and ‘statistics’, ‘sociology’ and several other names of modern sciences, ‘journalism’ and ‘ideology’, are all coinages or adaptations of this period. So is ‘strike’ and ‘pauperism’.

What is striking about these words is how they still frame most of our economic and political discussions. The term “middle class” originated in 1812. No one referred to an “industrial revolution” until English and French socialists did in the 1820s, despite what it described having been in progress since at least the 1780s.

Today the founder of the World Economic Forum has coined the phrase “Fourth Industrial Revolution” or 4IR or Industry 4.0 for those who prefer something snappier. Its blurb is positively messianic:

The Fourth Industrial Revolution represents a fundamental change in the way we live, work and relate to one another. It is a new chapter in human development, enabled by extraordinary technology advances commensurate with those of the first, second and third industrial revolutions. These advances are merging the physical, digital and biological worlds in ways that create both huge promise and potential peril. The speed, breadth and depth of this revolution is forcing us to rethink how countries develop, how organisations create value and even what it means to be human. The Fourth Industrial Revolution is about more than just technology-driven change; it is an opportunity to help everyone, including leaders, policy-makers and people from all income groups and nations, to harness converging technologies in order to create an inclusive, human-centred future. The real opportunity is to look beyond technology, and find ways to give the greatest number of people the ability to positively impact their families, organisations and communities.

Note that, despite the slight concession in the last couple of sentences that an industrial revolution is about more than technology-driven change, they are clear that the technology is the main thing. It is also confused: is the future they see one in which “technology advances merge the physical, digital and biological worlds” to such an extent that we have “to rethink” what it “means to be human”? Or are we creating an “inclusive, human-centred future”?

Hobsbawm describes why utilitarianism (“the greatest happiness of the greatest number”) never really took off amongst the newly created middle class, who rejected Hobbes in favour of Locke because “he at least put private property beyond the range of interference and attack as the most basic of ‘natural rights’”, whereas Hobbes would have seen it as just another form of utility. This natural order of property ownership was then woven into the reassuring (for property owners) political economy of Adam Smith, and the natural social order arising from “sovereign individuals of a certain psychological constitution pursuing their self-interest in competition with one another”. This was of course the underpinning theory of capitalism.

Hobsbawm then describes the society of Britain in the 1840s in the following terms:

A pietistic protestantism, rigid, self-righteous, unintellectual, obsessed with puritan morality to the point where hypocrisy was its automatic companion, dominated this desolate epoch.

In 1851 access to the professions in Britain was extremely limited, requiring long years of education, the means to support oneself through them, and opportunities to do so which were rare. There were 16,000 lawyers (not counting judges) but only 1,700 law students. There were 17,000 physicians and surgeons and 3,500 medical students and assistants. The UK population in 1851 was around 27 million. Compare these numbers to the relatively tiny actuarial profession in the UK today, with around 19,000 members overall.

The only real opening to the professions for many was therefore teaching. In Britain “76,000 men and women in 1851 described themselves as schoolmasters/mistresses or general teachers, not to mention the 20,000 or so governesses, the well-known last resource of penniless educated girls unable or unwilling to earn their living in less respectable ways”.

Admittedly most professions were only just establishing themselves in the 1840s. My own, despite actuarial activity getting off the ground in earnest with Edmund Halley’s demonstration of how the terms of the English Government’s life annuities issue of 1692 were more generous than the Government realised, did not form the Institute of Actuaries (now part of the Institute and Faculty of Actuaries) until 1848. The Pharmaceutical Society of Great Britain (now the Royal Pharmaceutical Society) was formed in 1841. The Royal College of Veterinary Surgeons was established by royal charter in 1844. The Royal Institute of British Architects (RIBA) was founded in 1834. The Society of Telegraph Engineers, later the Institution of Electrical Engineers (now part of the Institution of Engineering and Technology), was formed in 1871. The Edinburgh Society of Accountants and the Glasgow Institute of Accountants and Actuaries were granted royal charters in the mid-1850s, before England’s various accounting institutes merged into the Institute of Chartered Accountants in England and Wales in 1880.

However “for every man who moved up into the business classes, a greater number necessarily moved down. In the second place economic independence required technical qualifications, attitudes of mind, or financial resources (however modest) which were simply not in the possession of most men and women.” As Hobsbawm goes on to say, it was a system which:

…trod the unvirtuous, the weak, the sinful (i.e. those who neither made money nor controlled their emotional or financial expenditures) into the mud where they so plainly belonged, deserving at best only of their betters’ charity. There was some capitalist economic sense in this. Small entrepreneurs had to plough back much of their profits into the business if they were to become big entrepreneurs. The masses of new proletarians had to be broken into the industrial rhythm of labour by the most draconic labour discipline, or left to rot if they would not accept it. And yet even today the heart contracts at the sight of the landscape constructed by that generation.

This was the landscape upon which the professions alongside much else of our modern world were constructed. The industrial revolution is often presented in a way that suggests that technical innovations were its main driver, but Hobsbawm shows us that this was not so. As he says:

Fortunately few intellectual refinements were necessary to make the Industrial Revolution. Its technical inventions were exceedingly modest, and in no way beyond the scope of intelligent artisans experimenting in their workshops, or of the constructive capacities of carpenters, millwrights and locksmiths: the flying shuttle, the spinning jenny, the mule. Even its scientifically most sophisticated machine, James Watt’s rotary steam-engine (1784), required no more physics than had been available for the best part of a century—the proper theory of steam engines was only developed ex post facto by the Frenchman Carnot in the 1820s—and could build on several generations of practical employment for steam engines, mostly in mines.

What it did require though was the obliteration of alternatives for the vast majority of people to “the industrial rhythm of labour” and a radical reinvention of the language.

These are not easy things to accomplish, which is why we cannot easily imagine the breakdown of late capitalism. However, if we focus on AI and the like as the drivers of the next industrial revolution, we will probably be missing where the action really is.

On Wednesday last week the report from the Leng Review into the safety and effectiveness of physician associates (PAs) and anaesthesia associates (AAs) was published. It concluded that:

Research on the safety and effectiveness of PAs and AAs was limited, generally of low quality and either inconclusive or demonstrated a mixed picture.

This apparently did not prevent Professor Leng from feeling able to go right ahead and make 18 recommendations. Neither did it prevent NHS England from announcing the same day that it would be expecting all PAs and AAs in the NHS to immediately:

  1. Take on the new names for their roles of physician assistant and physician assistant in anaesthesia respectively;
  2. No longer triage patients or see “undifferentiated” patients.

The rationale for the first of these was the fear that PAs and AAs were being confused with doctors. That this has been addressed by immediately making PAs and AAs much more confusable with each other is just one of the many hilarious things about this report. They also appear to have forgotten to let the General Medical Council (GMC) know, as their website still looks like this:

Then there is the meticulously recorded bile directed at PAs and AAs and their capabilities throughout what is described all over the website as an “independent” report. There were several charts of the opinions of PAs and AAs about their ability to carry out their duties compared to those of doctors. Here is one of them:

The reason I feel able to describe this as mostly bile is the set of template job descriptions at Appendix 5 of the Leng report. The one for PAs in secondary care includes the following principal duties and responsibilities:

  • carry out assessments of patient health by interviewing patients and performing
    physical examination including obtaining and updating medical histories (looks like B and E);
  • order and perform agreed diagnostic tests including laboratory studies and
    interpret test results (looks like J);
  • perform basic therapeutic procedures by administering all injections and
    immunisations, suturing and managing wounds and infections (looks like M);
  • help to develop other members of the multidisciplinary team by providing
    information and educational opportunities as appropriate (looks like L).

So even the Leng Review appears to have concluded that many of the doctors’ opinions polled here are ridiculous.

Of course I am lumping all doctors together here because the Leng Review does for the most part. There is one sentence where it is admitted that senior doctors, including GPs, tended to be more positive than resident doctors, but this is not really quantified.

The Leng Review will not be the last of its kind. It has taken up the concerns of a threatened profession and worked with them to connive in the othering of another sub-profession (set up, as admitted in the Leng Review report itself, by the Department of Health under, in the case of PAs, a competency framework in conjunction with the Royal Colleges of Physicians and General Practitioners) rather than tackle the actual threats the profession faces. As Roy Lilley wrote:

The BMA can stand in the way, or stand at the front, shaping how technology and new roles like PAs can improve care, close gaps, and make healthcare safer and smarter.

History teaches us that you can’t halt progress by breaking the machinery or driving new careers into a cul-de-sac.

So why are the doctors, particularly resident doctors (formerly known as junior doctors), so offended by the use of PAs and AAs in the NHS? Is it really about safety and effectiveness? Or is it that the British Medical Association (BMA), having finally lost the trust of its more junior members after years of inadequate representation, is now throwing its weight around – first with the campaign against PAs and AAs and now with the resident doctor strike – in a desperate attempt to convince them that the reason they are paid less than PAs and can’t get a job after graduation is not the fault of the BMA, but that of the Government, PAs and AAs?

As the Leng Review admits:

Since the early 2000s, and in response to increasing workforce pressures, there has been a growing recognition of the PA role across the globe as a flexible way to address doctor shortages and improve access to healthcare. Today, PAs or their equivalents are employed in over 50 countries, although the role is often adapted locally to meet specific healthcare system needs.

Is it perhaps this very flexibility which is the threat here, when NHS England are already reviewing postgraduate medical training due in large part to resident doctors’ “concerns and frustrations with their training experience”?

The doctors are not the only threatened profession. According to The Observer this week:

The big four accounting firms – Deloitte, EY, PricewaterhouseCoopers and KPMG – posted 44% fewer jobs for graduates this year compared with 2023.

These are the big beasts for finance and actuarial graduates and tend to set the market for everyone else, so these are big changes. The quote in the article from Ian Pay of the ICAEW is even more alarming:

Historically, accountancy firms have typically had a pyramid structure – wide base, heavy graduate recruitment. Firms are now starting to talk about a ‘diamond model’ with a wide middle tier of management because, ultimately, AI is not sophisticated enough yet to make those judgment calls.

A diamond model? That surely only makes sense for those at partner level currently interested in the purchase of diamonds? Sure enough, the article continues:

Cuts to graduate cohorts since 2023 have ranged from 6% at PwC to 29% at KPMG. According to James O’Dowd, founder of talent adviser Patrick Morgan, these have been accompanied by senior employees being paid more and by more offshoring of jobs. Up to a third of some firms’ administrative tasks are carried out in countries with lower labour costs such as India and the Philippines.

So what happens when AI is sophisticated enough to make those judgement calls – calls which are often sophisticated forms of pattern spotting and which, quite frankly, AI systems are already much better at than humans in many cases? Will the diamond model collapse still further into a “T-model” perhaps, with the very senior survivors being paid even more? Don’t expect labour costs in India and the Philippines to remain lower for very long as demand increases from their own economies as well as ours.

And the most important question? What then? Who will the senior employees who seem to be doing so well out of this at the moment be in 20-30 years’ time? Where will they have come from? What experience will they have and how will they have gained it when all the opportunities to do so have been given to the system in the corner which never gets tired, only makes mistakes when it is poorly programmed or fed poor data, and never takes study leave at the financial year end?

So Medicine, Finance and now Law. Richard Susskind has been writing about the impact of AI on Law, and with his son Daniel on the other professions too, for some time now. A review of his latest book, How To Think About AI, has the reviewer wondering “Where has Reassuring Richard gone?”. In it, Susskind says:

“Pay heed, professionals – the competition that kills you won’t look like you.”

So probably a threatened profession there too then.

In the 1830s and 1840s, according to Christopher Clark’s excellent Revolutionary Spring, the new methods of production led to “the emergence of a non-specialised, mobile labour force whose ‘structural vulnerability’ made it more likely that they would experience the most wretched poverty at certain points in their lives.” The industrialised economies changed beyond recognition, and the guilds representing workers whose skills were being automated away retreated to become largely ceremonial.

Then the divisions were those of class. This time they appear to be those of generation. Early career professionals are seeing their pay, conditions and status under threat as their more senior colleagues protect their own positions at their expense.

It remains to be seen what will happen to our threatened professions, but it seems unlikely that they will survive in their current forms any more than the jobs of their members will.

Last week I read The Million Pound Bank Note by Mark Twain and Brewster’s Millions by George Barr McCutcheon, from 1893 and 1902 respectively. Both have been made into films several times: the Mark Twain short story was first made into a silent movie by the great Alexander Korda in 1916, although the best known adaptations were the one starring Gregory Peck in 1954 and Trading Places (starring Eddie Murphy) in 1983 (which included elements of both The Million Pound Bank Note and Mark Twain’s novel The Prince and the Pauper); Cecil B DeMille was the first to attempt a film adaptation of Brewster’s Millions (from the earlier play) in 1914, with the best known adaptation being Walter Hill’s 1985 movie starring Richard Pryor (movie poster shown above).

Both stories were written before the First World War and it is interesting to see when each has been revived with new adaptations. In particular, although an early attempt was made to film Twain’s story, no one attempted it again until after the Second World War, whereas there was a new adaptation of Brewster during the very interesting period between 1920 and 1922 when the first international financial conferences were being held in Brussels and Genoa to establish an international consensus for policies where “individuals had to work harder, consume less, expect less from the government as a social actor, and renounce any form of labour action that would impede the flow of production.” The aim was to return to a pre World War I economic orthodoxy and therefore remove what would be very painful economic measures for most people from the political sphere and into the sphere of “economic science”. In other words, it was a time when the political elite were trying to change the rules of the game.

This may be because Twain’s story – about a man who is given a million pound note, is feted by everyone he meets as a consequence and never has to spend it, thereby settling a bet between the two men who gave it to him – was seen as a rather slight tale. Interestingly, an American TV adaptation and, a few years later, the Gregory Peck film came out around the time when the Bank of England first actually issued such notes (called Giants) in 1948 – notes which also relied on the power of people knowing they were there rather than ever having to be used.

The rules of the game certainly vary considerably across the Brewster adaptations: DeMille in 1914 was very respectful of the original, but by 1921 the $7 million had shrunk to $4 million. By 1926, in Miss Brewster’s Millions, Polly Brewster must spend $1 million in 30 days to inherit $5 million. This was the point where Twenty20 fortune dissipation appears to have supplanted the Test Match variety. In 1935 a British version had Brewster needing to spend £500,000 in 6 months to inherit £6 million. In 1945 Brewster must spend $1 million within 60 days to inherit $7 million. By 1954 the first Telugu adaptation has him spending ₹1 lakh in 30 days which, by 1985, has inflated to ₹25 lakh.

Later in 1985, the Richard Pryor film requires Brewster to spend $30 million within 30 days to inherit $300 million, with the tweak that he is given the option to take $1 million upfront, which for the sake of the movie he doesn’t. There have since been five further adaptations reflecting the globalisation of the ideas in the story (three from India, one from Brazil and one from China) before the sequel to the Richard Pryor film last year.

What is striking about both stories is how, although supposedly about financial transactions, albeit of a rather unusual kind, they are in fact all about how people behave around the display of money. In Twain’s tale, Henry Adams is transformed from being perceived as a beggar to being assumed to be an eccentric millionaire as a result of producing the note.

In the Brewster story, Monty Brewster has to spend the million dollars he has been left by his grandfather within a year so that he has no assets left in order to claim the seven million dollars left to him by an uncle on this condition. The original story explains the strange condition (something the Richard Pryor film doesn’t do as far as I can recall) as being due to his uncle hating his grandfather so much (due to his grandfather’s refusal to accept his uncle’s sister’s marriage). The uncle therefore wanted “to preclude any possible chance of the mingling of his fortune with the smallest portion of Edwin P Brewster’s”.

The problem for Monty is that he is not allowed to tell anyone of the condition, and it is the difficulties that the behaviour he then has to adopt causes him with New York high society which are the subject of the story. There are dinners and cruises and carnivals and holiday homes, all bankrolled by Brewster for himself and whoever will journey with him, during which he falls in love and then out of love with one woman before falling in love with the woman he had grown up alongside. Things normally regarded as good luck, like winning a bet or making a profitable investment, become bad luck for Monty.

By the end of the year, and very close to spending the whole million with nothing to show for it, he returns from a transatlantic cruise where he had been kidnapped by his friends at one stage to prevent him sailing to South Africa, to find himself spurned by the very society he had tried so hard to cultivate:

With the condemnation of his friends ringing in his troubled brain, with the sneers of acquaintances to distress his pride, with the jibes of the comic papers to torture him remorselessly, Brewster was fast becoming the most miserable man in New York. Friends of former days gave him the cut direct, clubmen ignored him or scorned him openly, women chilled him with the iciness of unspoken reproof, and all the world was hung with shadows. The doggedness of despair kept him up, but the strain that pulled down on him was so relentless that the struggle was losing its equality. He had not expected such a home-coming.

After a bit of a scare that the mysterious telegram correspondent Swearengen Jones, who held the $7 million and was assessing his performance, had disappeared, everything comes right for Monty in the end and he marries Peggy, who had agreed to do so even when she thought him penniless.

And we are left to assume that everything in the previous paragraph is reversed in the same way as in The Million Pound Bank Note on being able to display wealth once more.

There is a lot of plot in the Brewster story in particular, a lot of which does not amount to much but keeps Monty Brewster feverishly busy throughout.

These two in many ways ridiculous stories, written just as economics was trying to establish itself as a science and ultimately as the discipline that shapes our current societies, reveal, I think, quite a lot about the nature of money amongst people who have a lot of it. Neither Henry nor Monty (apart from an opening twenty-four hours for Henry and a scene revolving around a pear in the gutter after a night sleeping rough) experiences hunger or the absence of anywhere to sleep at any point. Their concern for money seems to be entirely about social position, the respect of those they regard as their peers and being able to marry the women they have set their hearts on. In other words, money is not about money for these protagonists, it is about status.

It seems to me that almost the entire edifice that we call economics may have been constructed by people in this position. Is this why money creation is represented in so many economic models via constructions clearly at odds with the actual activities of banks (one of many pieces by Steve Keen demonstrating this problem here), and why ideas such as loanable funds and the money multiplier persist in economics education? Perhaps the original architects of these economic theories did not need money to live so much as they needed the respect of those they saw as their peers.

David Graeber often used to point out how much more time people at the bottom of society spent thinking about people at the top than the people at the top spent thinking about them. Is this at the heart of the problem?

Of course we do still have some social mobility. A relatively small number of people from poor backgrounds can still enter influential professions. Some of them have even become economists! But the very process of becoming a professional is designed to distance you from your origins: years of immersion in a very academic discipline, requiring total concentration and dedication to internalising enough of the professional “truths” to be assessed as qualified to practise, normally while engaged in highly intensive work alongside more senior people for whom these truths have already been securely internalised.

And then once there you are in the Monty Brewster situation, so insecure about your position within this new society you have joined that you will do whatever it takes to maintain it. You are “upwardly mobile”. Your families are proud that you are “getting on” and doing better, certainly in terms of income and professional respect, than they did. There is no serious challenge to this path other than its difficulty, which again creates a massive sunk cost in your mind when considering alternatives. And it is a path which is invariably described as upward.

Meanwhile the societies we have constructed around these economic edifices also have a lot of plot, a lot of which does not amount to very much but it keeps us all feverishly busy most of the time.

I went to see A Complete Unknown this weekend. The music was rendered brilliantly, Timothée Chalamet inhabited the character of Dylan compellingly and Edward Norton was astonishing as Pete Seeger. And I felt welling emotion watching it.

I was first really aware of Dylan in the 70s, when I was most intensely interested in music more generally for the first time. However I didn’t really like 70s Dylan. I particularly didn’t like the arrangements on Bob Dylan at Budokan (Live), which seemed to be omnipresent at times. I then got interested again in the 80s, when he repelled many of his fans with the religious records, and went back to the 60s stuff on the back of that, which resonated with me very deeply. In the 90s and noughties I got interested all over again with Time Out Of Mind and Modern Times. I finally got to see him play in 2010 in Birmingham and, like most people who tried, failed to get tickets to see him last year in Wolverhampton. And that is my history with Bob. However this piece isn’t really about that.

In A Complete Unknown we see Dylan arrive in New York in 1961 at the age of 20 and follow him all the way to the July 1965 Newport Folk Festival when he went electric for the first time at the age of 24. So these are the doings of a very young man, whom Joan Baez refers to as “kind of an asshole” in the film.

This got me thinking about what I did between the ages of 20 and 24. To quote another Dylan line, I “just kind of wasted my precious time”. I wasted most of it at the University of Oxford. I had spent seven very happy years at a school in Oxford before going there, five of them actually living in the city as a boarder, so my unhappiness was definitely with the people and institutions of the university rather than their location. And I was seen as so much of an asshole myself that I left with no friends from my university days other than people I had known before going there and a group of chemists from a different college who I ended up sharing a house with in my middle year because no one in my own college wanted to.

However, unlike Dylan’s assholery, which clearly had a purpose and was for him a way of getting his art done in the way he wanted to, mine was of a more self-pitying, unproductive kind. I hated the structures the very confident people were building around me but followed them anyway, all the way into my first job which was for a company which made ID cards for the Chilean and Syrian regimes. I realise now, thanks to the excellent Butler to the World by Oliver Bullough among many other things I have read since, that I was being prepared for a career of facilitating power and, although I would not have been able to articulate this at the time, I like to think that I resented this on some level even then.

It took me another 20 years to recover from my university education and those structures of power seem more confident than ever. However now I realise how brittle that confidence is and how little we know about the foundations we base it on, I feel much more optimistic about the prospects for challenging it and putting something kinder in its place.

I went for a walk to mull over how to finish this piece earlier and today I got a bit of help. Heading back via the newsagents where I like to monitor the front pages each day, I was just taking in how they all seemed to be celebrating the return of the three Israeli hostages when a man pushed past me and grabbed a Daily Mail from the front of the pile. As he turned back on his way to the till he glared at me and snarled “You’re supposed to buy them you know”, before stomping off.

By the time this gets to some of you via your inboxes Donald Trump will have been sworn in as the 47th President of the United States (POTUS), eight years on from when he became the 45th. The UK will be facilitating him like crazy over the next four years, just like we have facilitated the destruction of Gaza over the last 15 months, all cheered on by most of the media. But we don’t have to buy what they’re selling.

The Stonebreaker is an 1857 oil-on-canvas painting by Henry Wallis. It depicts a manual labourer who appears to be asleep, worn out by his work, but who may have been worked to death, as his body is so still that a stoat has climbed onto his right foot.
The Stone Breaker, 1857. Artist: Henry Wallis. Creative Commons 0 – Public Domain. Photo by Birmingham Museums Trust, licensed under CC0

The Europe of the 1830s and 1840s was a place of extreme political ferment which led to long-term changes to the way in which all Europeans, including the ones across the English Channel, saw themselves. According to Christopher Clark’s excellent Revolutionary Spring – Fighting for a New World 1848-1849: “parallel political tumults broke out across the entire continent, from Switzerland and Portugal to Wallachia and Moldavia, from Norway, Denmark and Sweden to Palermo and the Ionian Islands. This was the only truly European revolution that there had ever been.”

However you wouldn’t know it from the current Victorian Radicals exhibition at the Birmingham Museum and Art Gallery. This explores three generations of progressive British artists working between 1840 and 1910: the Pre-Raphaelite Brotherhood and their circle; the second wave of Pre-Raphaelite artists who gathered around Rossetti from the late 1850s, including William Morris and Birmingham-born Edward Burne-Jones; and a third generation of designers and makers associated with the Arts and Crafts movement, working from the turn of the century to just before the First World War.

It’s a very good exhibition, but the only painting I could find in it which referred to the economic crises of the 1840s and 50s at all was the one above, of a stone breaker worked to death. There was also the famous one of a couple emigrating to Australia (shown below) which may be a response to domestic economic circumstances although, based on a self-portrait of Madox Brown as it is, it may just as well be a response to the lack of art appreciation in the UK:

The Last of England, 1852-1855. Artist: Ford Madox Brown. Creative Commons 0 – Public Domain. Photo by Birmingham Museums Trust, licensed under CC0

But that is it! Despite the Victorian Radicals’ believing that art and creativity could change the world and be a real force for good in society, their gaze rarely moved from “realistic” depictions of their friends posing in rustic or suburban landscapes at a time of massive social upheaval.

At the time Britain was rather smug about having avoided revolution, but the evidence suggests that it could have easily been very different were it not for the measures taken by Robert Peel’s Government: the reintroduction of income tax on upper middle class incomes in 1842; the Bank Charter Act of 1844 which suppressed financial speculation by restricting the right to issue bank notes to the Bank of England only and creating a maximum ratio between notes issued and the Bank’s gold reserves; and the repeal of the Corn Laws in 1846 which considerably weakened the landlords’ grain monopoly and allowed for grain imports which did reduce prices but fundamentally changed the structure of the UK economy. This was explosive stuff which brought down Peel’s government and split the Conservative Party.

Policing in the UK was also very muscular. 15,000 Chartist activists were arrested in 1843 and a meeting of 150,000 Chartists at Kennington Common in 1848 was met by 4,000 police, 12,000 troops and 85,000 special constables (volunteers with clubs, including the future Emperor Napoleon III, who was in exile from France at the time). There were so many transportations to the colonies that there were mass protests in Australia and the Cape. There were riots in Jamaica and British Guiana when sugar tariffs were dropped to reduce prices back in the UK and when, rather than burdening British taxpayers further, taxes were applied in Ceylon (now Sri Lanka), a protest movement numbering 60,000 was created.

In June 2024, Michael Marmot and Jessica Allen published A programme for greater health equity for the next UK government. In it they say the following:

Much of what went wrong with respect to the social determinants of health equity in the period after 2010 comes under the rubric of austerity, imposed by a Conservative Party led coalition Government. In the 2020 Marmot Review, we reported that in 2010 public sector expenditure had been 42% of GDP. Over the next decade, public sector expenditure went down year on year. By the end of the decade, public sector expenditure had become 35% of GDP. An annual reduction of 7% is enormous. In 2023, total UK GDP was £2·687 trillion. 7% of that is £188 billion. At today’s prices, annual public sector expenditure in 2019 was £188 billion less than it was in 2010. It is then not a surprise that relative child poverty went up – the steepest rise among 39 OECD countries; absolute measures of destitution increased; welfare payments apart from pensions did not keep pace with inflation; spending on education per pupil went down; the housing shortage became more marked and homelessness and rough sleeping increased; and increases in health-care expenditure fell sharply compared with historic trends. Alongside these major changes came the slowest improvement in life expectancy in the UK during the decade after 2010 of any rich country except Iceland and the USA.
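The arithmetic in that quote is worth unpacking, because the “7%” is a fall of 7 percentage points of GDP, which is roughly a 17% cut relative to the 2010 level. A minimal sanity check (the GDP figure and the 42%/35% shares are taken directly from the quote; everything else is just arithmetic):

```python
# Sanity-check of the Marmot/Allen figures quoted above.
gdp_2023 = 2.687e12      # total UK GDP in 2023, £ (from the quote)
share_2010 = 0.42        # public sector expenditure as a share of GDP, 2010
share_2019 = 0.35        # the same share by the end of the decade

point_fall = share_2010 - share_2019      # 7 percentage points of GDP
shortfall = point_fall * gdp_2023         # annual spending gap at 2023 prices
relative_cut = point_fall / share_2010    # the cut relative to the 2010 level

print(round(shortfall / 1e9))     # ≈ 188 (£ billion, matching the quote)
print(round(relative_cut * 100))  # ≈ 17 (per cent)
```

So the quoted £188 billion checks out, with the caveat that it is an annual shortfall by the end of the decade, not an annual rate of reduction.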

We have a new Government, 100 days in, in our new Carolian era. What will future generations say about who this government answered to? Will it turn out to have been our modern stone breakers, working themselves into sickness and early death below the radar of a modern media at least as divorced from the concerns of ordinary people as the Victorian Radicals were? Or will their hard decisions turn out to necessitate other priorities? Time will tell.

OK I am talking about satisfaction with the NHS a little bit, as it was all over the media yesterday. Just 29% satisfaction compared to 70% in 2010, with the chart above helpfully showing the precipitous decline since then. Does that remind you of another set of graphs I put up not too long ago?

It should. We stopped spending the same proportion of GDP that other similar countries do on their health services and our performance in terms of patient satisfaction plummeted. Who would have thought it?

In fact this was only a headline because the King’s Fund and Nuffield Trust had just issued their analysis of the NHS-related parts of British Social Attitudes Survey Number 39, which had originally been published in October and was itself based on data collected between September and October 2021. However it is an impressive survey overall, with 44,000 households taking part (you can find the full technical details of the survey here).

What is very clear is that the nation is changing fast. Some things are not changing – a slender majority in favour of increasing taxes and spending more on health, education and social benefits has remained almost static since before the pandemic – and all of the averages conceal very polarised views between Brexiteers and Remainers, between the different communities in Scotland and Northern Ireland, and particularly between Londoners and the rest of the UK.

This looks like it is beginning to be recognised, with a big increase in the proportion agreeing that working people do not get their fair share of the nation’s wealth (up to 67% compared to 57% in 2019) and, for the first time, a slim majority in favour of moving to proportional representation.

Only 17% say it is very important for being truly British to have been born in Britain, down from 48% in 1995 – which feels like a sea change in attitudes towards immigration to me.

And then we turn to the environment. Rather echoing the Met Office research I highlighted recently, 45% view climate change as the most important environmental issue, compared with only 19% in 2010, with 40% of the population very concerned about the environment, compared with 22% in 2010.

Which brings us to two climate stories in quick succession.

The first was yesterday, when the Climate Change Committee, appointed to assess the Government’s progress against its own commitments on climate change, gave its 2023 report to Parliament on England’s progress in building climate resilience across the economy, and on the extent of the policies and delivery needed to manage the risks involved. It was not a positive assessment.

Source: https://www.theccc.org.uk/publication/progress-in-adapting-to-climate-change-2023-report-to-parliament/

What they found was:

There is a striking lack of climate preparation from Government:

  • Policies and plans. Despite some evidence of improved sectoral planning by Government for key climate risks, ‘fully credible’ planning for climate change – where nearly all required policy milestones are in place – is only found for five of the 45 adaptation outcomes examined in this report.
  • Delivery and implementation. In none of the 45 adaptation outcomes was there sufficient evidence that reductions in climate exposure and vulnerability are happening at the rates required to manage risks appropriately. For around one-quarter of outcomes, available indicators show insufficient evidence of progress.

Baroness Brown, Chair of the Adaptation Committee, went further:

The Government’s lack of urgency on climate resilience is in sharp contrast to the recent experience of people in this country. People, nature and infrastructure face damaging impacts as climate change takes hold. These impacts will only intensify in the coming decades.

This has been a lost decade in preparing for and adapting to the known risks that we face from climate change. Each month that passes without action locks in more damaging impacts and threatens the delivery of other key Government objectives, including Net Zero. We have laid out a clear path for Government to improve the country’s climate resilience. They must step up.

By coincidence, today is the Government’s Energy Security Day, backed by a report called Powering Up Britain. This follows a High Court ruling last October which found that, when they signed off their carbon strategy, Ministers didn’t have the legally required information on how carbon budgets would be met. Coverage of the ruling went on to say:

Ten million tonnes of carbon could be illegally unleashed in the mid-2030s as a result. Doubt was also shed on the 95 per cent of the sixth carbon budget that was accounted for in the government’s estimates.

Mr Justice Holgate also ruled that the strategy breached the Climate Change Act by failing to provide enough detail on the emissions savings, leaving parliament and the public in the dark.

Originally called Green Day, a name presumably dropped after Jeremy Hunt’s comments about not wanting to be an American Idiot, the Energy Security Day has highlighted the following Government priorities:

  1. Energy security: setting the UK on a path to greater energy independence.
  2. Consumer security: bringing bills down, and keeping them affordable, and making wholesale electricity prices among the cheapest in Europe.
  3. Climate security: supporting industry to move away from expensive and dirty fossil fuels.
  4. Economic security: playing our part in reducing inflation and boosting growth, delivering high skilled jobs for the future.

Further analysis at this stage has not been made easy by the way that the Government has released the details. Chris Stark, the Chief Executive of the Climate Change Committee, has described it on Twitter as “government by press release”, i.e.

The government now adopts this communications strategy regularly: press release the night before – published documents later. It gives them two bites of the press coverage.

But it makes it hard for a statutory organisation like @theCCCuk, with legal duties, to comment.

Others have been less constrained in their response. The main criticisms are that many of the policies presented in the report have been announced previously, that there is no significant increase in support for home insulation and that the focus on carbon capture and storage (CCS) is out of all proportion given the long-standing difficulties of scaling up the technology.

The BBC quote Bob Ward, policy director at the Grantham Research Institute on Climate Change at LSE:

What does not make sense is to carry on with further development of new fossil fuel reserves on the assumption CCS will be available to mop up all the additional emissions.

I had an initial skim of the report looking for what was planned for heat pumps, which regular readers of this blog will know I have some history with. I found this:

The Government has an ambition to phase out all new and replacement natural gas boilers by 2035 at the latest and will further consider the recommendation from the Independent Review of Net Zero in relation to this. People’s homes will be heated by British electricity, not imported gas. The Heat Pump Investment Accelerator will mean heat pumps are manufactured in the UK at a scale never seen before. We want to make it as cheap to buy and run a heat pump as a gas boiler by extending the Boiler Upgrade Scheme by three years, and by rebalancing the costs of electricity and gas.

So, cutting through the hype, they are going to invest £30 million in heat pump manufacture in the UK, which they claim will attract £270 million of “private investment into manufacturing and associated supply chains”.

The other parts are:

  • Committing to extending the £5,000 grant for another three years (which is less than the difference between the cost of installing a heat pump and a gas boiler currently in many cases, although this may change if schemes like the recently announced Octopus pilot become more widely adopted).
  • The “Clean Heat Market Mechanism” which is supposed to encourage the installation of low carbon heating appliances.
  • A consultation to shift green levies off electricity and on to gas bills.

The country is changing fast. The Government needs to be more transformational than this to keep up. Or, in Baroness Brown’s words, step up!

Source: Metro Graphics

Pay disputes are starting to be settled or partially settled in places:

  • 1,800 bus drivers employed by Abellio in London will now receive an 18% increase.
  • The Welsh Government made a fresh offer to health unions on 3 February which led to a suspension of all health strikes in Wales bar ambulance workers from the Unite union while negotiations continue.
  • On Friday the TSSA union (17,000 members) announced that members are to be given a vote on offers from the train companies in their long-running national dispute over pay, job security and conditions.

However:

  • Ambulance workers, teachers and university staff are amongst those striking over the next three weeks.
  • The very much larger union, the RMT (82,000 members), have rejected the train companies’ deal (9% over 2 years) due to the additional conditions attached affecting safety on the railways.
  • The Scottish Government is in talks with the Royal College of Nursing (RCN) and other unions representing NHS staff over a pay settlement for 2023-24, after imposing in December a pay deal which would have given health workers an average 7.5% rise, which RCN nurses rejected.
  • Nurses from A&E, intensive care and cancer wards could join fresh strikes in England, as the RCN considers a continuous 48-hour strike, which could begin in weeks.

According to Reuters, a recent Chartered Institute of Personnel Development (CIPD) survey indicates that the gap between public and private employers’ wage expectations has widened. Planned pay settlements in the public sector fell to 2% from 3% in the quarter before, compared to a median of 5% in the private sector.

Meanwhile the UCU and the four other higher education unions (EIS, GMB, UNISON and Unite) and employer representatives from the Universities and Colleges Employers Association (UCEA) have agreed to further talks mediated by conciliation service Acas. The discussions began yesterday and continue today, covering pay, equality, job insecurity and workloads.

The current strike, part of three consecutive days of action, continues today. In total 18 days of strike action are planned throughout February and March, with a new strike ballot planned for March.

It seems fairly clear that public sector employers need to offer rather more than they have to date if any of these disputes are going to be resolved any time soon.