New (left) and old (right) Naiku shrines during the 60th sengu at Ise Jingu, 1973, via Bock 1974

In his excellent new book, Breakneck, Dan Wang tells the story of the high-speed rail links whose construction started in 2008 between San Francisco and Los Angeles and between Beijing and Shanghai respectively. Both routes would be around 800 miles long when finished. The Beijing–Shanghai line opened in 2011 at a cost of $36 billion. To date, California has built only a small stretch of its line, as yet nowhere near either Los Angeles or San Francisco, and the latest estimate of the final bill is $128 billion. Wang uses this, amongst other examples, to draw a distinction between the engineering state of China “building big at breakneck speed” and the lawyerly society of the United States “blocking everything it can, good and bad”.

Europe doesn’t get much of a mention, other than to be described as a “mausoleum”, which sounds rather JD Vance. There is quite a lot in this book that I disagree with strongly, to which I will return. However, there is also much to agree with, and nowhere more so than when Wang talks about process knowledge.

Wang tells another story, of Ise Jingu in Japan. Every 20 years, exact copies of Naiku, Geku, and 14 other shrines here are built on vacant adjacent sites, after which the old shrines are demolished. Altogether 65 buildings, bridges, fences, and other structures are rebuilt this way. They were first built in 690. In 2033, they will be rebuilt for the 63rd time. The structures are built each time with the original 7th-century techniques, which involve no nails, just dowels and wooden joints. Staff have a 200-year tree-planting plan to ensure enough cypress trees are planted to make the surrounding forest self-sufficient. The 20-year interval between rebuildings is the length of a generation, the older craftsmen passing the techniques on to the younger.

This, rather like the oral tradition of folk stories and songs, passed on by each generation as contemporary narratives until they were written down and fixed in time, after which they quickly came to seem old-fashioned, is an extreme example of process knowledge. What is being preserved at Ise Jingu is not the Trigger’s Broom of temples, but the practical knowledge of how to rebuild them as they were originally built.

Trigger’s Broom. Source: https://www.youtube.com/watch?v=BUl6PooveJE

Process knowledge is the know-how of your experienced workforce that cannot easily be written down. It develops where such a workforce works closely with researchers and engineers, creating feedback loops which can also accelerate innovation. Wang contrasts Shenzhen in China, where such a community exists, with Silicon Valley, where it doesn’t, forcing the United States to have such technological wonders as the iPhone manufactured in China.

What happens when you don’t have process knowledge? Well, one example would be our nuclear industry, where a lack of experience with pressurised water reactors has slowed down the development of new power stations and required us to rely considerably on French expertise. There are many other technical skill shortages.

China has recognised the supreme importance of process knowledge as compared to the American concern with intellectual property (IP). IP can of course be bought and sold as a commodity and owned as capital, whereas process knowledge tends to rest within a skilled workforce.

This may then be the path to resilience for the skilled workers of the future in the face of the AI-ification of their professions. Companies are being sold AI systems for many things at the moment, some of which will clearly make too many errors, or require so much “human validation” (a lovely phrase used recently by a good friend of mine who is actively involved in integrating AI systems into his manufacturing processes), to be deemed practical. For early career workers entering these fields, demonstrating appropriate process knowledge, or the ability to develop it very quickly, may be the key to surviving the AI roller coaster they face over the next few years: actionable skills and knowledge which allow them to manage such systems rather than being managed by them. To be a centaur rather than a reverse-centaur.

Not only will such skills make you less likely to lose your job to an AI system, they will also increase your value on the employment market: the harder these skills and knowledge are to acquire, the more valuable they are likely to be. But whereas in the past, in a more static market, merely passing your exams and learning to code might have been enough for, say, an actuarial student, the dynamic situation in which everything that can be written down disappears into prompts in some AI system will leave such roles unprotected.

Instead it will be the knowledge about how people are likely to respond to what you say in a meeting or write in an email or report, and the skill to strategise around those things, knowing what to do when the rules run out, when situations are genuinely novel, ie putting yourself in someone else’s shoes and being prepared to make judgements. It will be the knowledge about what matters in a body of data, putting the pieces together in meaningful ways, and the skills to make that obvious to your audience. It will be the knowledge about what makes everyone in your team tick and the skills to use that knowledge to motivate them to do their best work. It will ultimately be about maintaining independent thought: the knowledge of why you are where you are and the skill to recognise what you can do for the people around you.

These have not always been seen as entry-level skills and knowledge for graduates, but they are increasingly going to need to be, as organisations pursuing a diamond strategy or something similar look to plug you in further up the organisation, if at all. And alongside all this you will need a continuing professional self-development programme on steroids: fully understanding the systems you are working with as quickly as possible, then understanding them all over again when they get updated, demanding evidence and transparency, and maintaining appropriate uncertainty when certainty would be more comfortable for the people around you, so that you can manage these systems into the areas where they can actually add value and out of the areas where they can cause devastation. It will be more challenging than transmitting the knowledge to build a temple out of wood and thatch 20 years into the future, and it will be continuous. Think of it as the Trigger’s Broom Process of Career Management if you like.

These will be essential roles for our economic future: to save these organisations from both themselves and their very expensive systems. It will be both enthralling and rewarding for those up to the challenge.

Wallace & Gromit: Vengeance Most Fowl models on display in Bristol. This file is licensed under the Creative Commons Attribution-Share Alike 4.0 International license.

I have been watching Daniel Susskind’s lectures on AI and the future of work this week: Automation Anxiety was delivered in September and The Economics of Work and Technology earlier this week. The next in the series, entitled Economics and Artificial Intelligence, is scheduled for 13 January. They are all free and I highly recommend them for the great range of source material they present.

In my view the most telling graph, which featured in both lectures, was this one:

Original Source: Daniel Susskind A World Without Work

Susskind extended the usual concept of the ratio of average college and university graduate salaries to those of school leavers by adding the equivalent ratio of craftsmen’s wages to labourers’, which gives us data back to 1220. There are two big collapses in this ratio: one following the Black Death (1346–1353), which may have killed 50% of Europe’s 14th-century population, and one following the Industrial Revolution (a slow singularity which started around 1760 and then took us through the horrors of the First World War and the Great Depression before the graph finally picks up after Bretton Woods).

As Susskind shows, the profits from the Industrial Revolution were not going to workers:

Source: The Technology Trap, Carl Benedikt Frey

So how does the AI Rush compare? Well, Susskind shared another graph:

Source: David Autor Work of the Past, Work of the future

This, from 2019, introduced the idea that the picture is now more complex than high-skilled and low-skilled workers: there is now a middle. And, as Autor has set out more recently, the middle is getting squeezed:

Key dynamics at play include:

  • Labor Share Decline: OECD data reveal a 3–5 percentage point drop in labor’s share of income in sectors most exposed to AI, a trend likely to accelerate as automation deepens.
  • Wage Polarization: The labor market is bifurcating. On one end, high-complexity “sense-making” roles; on the other, low-skill service jobs. The middle is squeezed, amplifying both political risk and regulatory scrutiny.
  • Productivity Paradox 2.0: Despite the promise of AI-driven efficiency, productivity gains remain elusive. The real challenge is not layering chatbots atop legacy processes, but re-architecting workflows from the ground up—a costly and complex endeavor.

For enterprise leaders, the implications are profound. AI is best understood not as a job destroyer, but as a “skill-lowering” platform. It enables internal labor arbitrage, shifting work toward judgment-intensive, context-rich tasks while automating the rest. The risk is not just technological—it is deeply human. Skill depreciation now sits alongside cyber and climate risk on the board agenda, demanding rigorous workforce-reskilling strategies and a keen eye on brand equity as a form of social license.

So, even if the overall number of jobs may not be reduced, the case being made is that the average skill level required to carry them out will be. As Susskind said, the Luddites may have been wrong about the spinning jenny replacing jobs, but it did replace and transform tasks, and its impact on workers was to reduce their pay, quality of work, status as craftsmen and economic power. This looks like the threat being made by employers once again, with real UK wages still only at the level they were at in 2008:

However this is where I part company with Susskind’s presentation, which has an implicit inevitability to it. The message is that these are economic forces we can’t fight against. When he discusses whether the substituting force (where AI replaces you) or the complementing force (where AI helps you to be more productive and increases the demand for your work) will be greater, it is almost as if we have no part to play in this. There is some cognitive dissonance when he quotes Blake, Engels, Marx and Ruskin about the horrors of living through such times, but on the whole it is presented as just a natural historical process that the whole of the profits from the massive increases in productivity of the Industrial Revolution should have ended up in the pockets of the fat guys in waistcoats:

Richard Arkwright, Sir Robert Peel, John Wilkinson and Josiah Wedgwood

I was recently at Cragside in Northumberland, where the arms inventor and dealer William Armstrong used the immense amount of money he made from selling big guns (as well as big cranes and the hydraulic mechanism which powers Tower Bridge) to deck out his house and grounds with the five artificial lakes required to power the world’s first hydro-electric lighting system. His 300 staff ran around, like good reverse-centaurs, trying to keep his various inventions, from passenger lifts to an automated spit roast, from breaking down, so that he could impress his long list of guests and potential clients at Cragside, from the Shah of Persia to the King of Siam and two future Prime Ministers of Japan. He made sure they were kept running around with a series of clock chimes throughout the day:

However, with some poetic irony, the “estate regulator” is what has since brought the entire mechanism crashing to a halt:

Which brings me to Wallace and Gromit. Wallace is the inventor, heedless of the impact of his inventions on those around him, and especially on his closest friend Gromit, whom he regularly dumps whenever Gromit becomes inconvenient to his plans. Gromit just tries to keep everything working.

Wallace is a cheese-eating monster who cannot be assessed purely on the basis of his inventions. And neither can Armstrong, Arkwright, Peel, Wilkinson or Wedgwood. We are in the process of allowing a similar domination of our affairs by our new monsters:

Meta CEO Mark Zuckerberg beside Amazon CEO Jeff Bezos and his fiancée (now wife) Lauren, Google CEO Sundar Pichai and Elon Musk at President Trump’s 2nd Inauguration.

Around half an hour into his second lecture, Daniel Susskind started talking about pies. This is the GDP pie (Susskind has also written a recent book on Growth: A Reckoning, which argues that GDP growth can go on forever – my view would be closer to the critique here from Steve Keen) which, as Susskind says, increased by a factor of 113 in the UK between 1700 and 2000. But, as Steve Keen says:

The statistics strongly support Jevons’ perspective that energy—and specifically, energy from coal—caused rising living standards in the UK (see Figure 2). Coal, and not a hypothesised change in culture, propelled the rise in living standards that Susskind attributes to intangible ideas.

Source: https://www.themintmagazine.com/growth-some-inconvenient-truths/

Susskind talks about the productivity effect, the bigger pie effect and the changing pie effect (ie changes to the types of work we do – think of the changes in the CPI basket of goods and services) as ways in which jobs are created by technological change. However he has nothing to say about just giving less of the pie to the monsters. Instead, for Susskind the AI Rush is all about clever people throwing 10 times as much money at AI as was directed at the Manhattan Project, and the heads of OpenAI, Anthropic and Google DeepMind stating that AI will replace humans in all economically useful tasks within 10 years, a claim which he says we should take seriously. Cory Doctorow, amongst others, disagrees. In his latest piece, When AI prophecy fails, he has this to say about why companies have reduced recruitment despite the underperformance of AI systems to date:

All this can feel improbable. Would bosses really fire workers on the promise of eventual AI replacements, leaving themselves with big bills for AI and falling revenues as the absence of those workers is felt?

The answer is a resounding yes. The AI industry has done such a good job of convincing bosses that AI can do their workers’ jobs that each boss for whom AI fails assumes that they’ve done something wrong. This is a familiar dynamic in con-jobs.

The Industrial Revolution had a distribution problem which gave birth to Chartism, Marxism, the Trades Union movement and the Labour Party in the UK alone. And all of that activity only very slowly chipped away at the wealth share of the top 10%:

Source: https://equalitytrust.org.uk/scale-economic-inequality-uk/

However the monsters of the Industrial Revolution did at least have solid proof that they could deliver what they promised. You don’t get more concrete a proof of concept than this, after all:

View on the Thames and the opening Tower Bridge, London, from the terraces at Wapping High Street, at sunset in July 2013, Bert Seghers. This file is made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication.

The AI Rush has a similar distribution problem, but it is also the first industrial revolution since the global finance industry decoupled from the global real economy. So the wealth share of the Top 10% isn’t going back up fast enough? No problem. Just redistribute the money at the top even further up:

What the monsters of the AI Rush lack is anything tangible to support their increasingly ambitious assertions. Wallace may be full of shit. And the rest of us can all just play a Gromit-like support role until we find out one way or the other, or concentrate on what builds resilient communities instead.

Whether you think the claims for the potential of AI are exaggerated; or that the giant bet on it that the US stock market has made will end in an enormous depression; or that the energy demands of this developing technology will be its constraining force ultimately; or that we are all just making the world a colder place by prioritising systems, however capable, over people: take your pick as a reason to push back against the AI Rush. But my bet would be on the next 10 years not being dominated by breathless commentary on the exploits of Tech Bros.

To Generation Z: a message of support from a Boomer

So you’ve worked your way through school and now university, developing the skills you were told would always be in high demand, credentialising yourself as a protection against the vagaries of the global economy. You may have serious doubts about ever being able to afford a house of your own, particularly if your area of work is very concentrated in London…

…and you resent the additional tax that your generation pays to support higher education:

Source: https://taxpolicy.org.uk/2023/09/24/70percent/

But you still had belief in being able to operate successfully within the graduate market.

A rational functional graduate job market should be assessing your skills and competencies against the desired attributes of those currently performing the role and making selections accordingly. That is a system both the companies and graduates can plan for.

It is very different from a Rush. The first phenomenon known as a Rush was the Californian Gold Rush of 1848–55, although the capitalist phenomenon of transforming an area to facilitate intensive production probably dates from sugar production in Madeira in the 15th century. There have been many since, all neatly described by this Punch cartoon from 1849:

A Rush is a big deal. The Californian Gold Rush resulted in the creation of California, now the 5th largest economy in the world. But when it comes to employment, a Rush is not like an orderly jobs market. As Carlo Iacono describes, in an excellent article on the characteristics of the current AI Rush:

The railway mania of the 1840s bankrupted thousands of investors and destroyed hundreds of companies. It also left Britain with a national rail network that powered a century of industrial dominance. The fibre-optic boom of the late 1990s wiped out about $5 trillion in market value across the broader dot-com crash. It also wired the world for the internet age.

A Rush is a difficult and unpredictable place to build a career, with a lot riding on dumb luck as much as any personal characteristics you might have. There is very little you can count on in a Rush. This one is even less predictable because as Carlo also points out:

When the railway bubble burst in the 1840s, the steel tracks remained. When the fibre-optic bubble burst in 2001, the “dark fibre” buried in the ground was still there, ready to carry traffic for decades. These crashes were painful, but they left behind durable infrastructure that society could repurpose.

Whereas the 40–60% of US real GDP growth in the first half of 2025 explained by investment in AI infrastructure isn’t like that:

The core assets are GPUs with short economic half-lives: in practice, they’re depreciated over ~3–5 years, and architectures are turning over faster (Hopper to Blackwell in roughly two years). Data centres filled with current-generation chips aren’t valuable, salvageable infrastructure when the bubble bursts. They’re warehouses full of rapidly depreciating silicon.
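To put rough numbers on that contrast (my own illustrative figures, not Iacono's): under simple straight-line depreciation, the same capital outlay is written off an order of magnitude faster over a ~4-year GPU life than over the multi-decade life of rail track or fibre.

```python
# Illustrative arithmetic only: the asset lives and costs below are
# assumptions for the sake of the comparison, not sourced figures.
def annual_writeoff(cost_bn, life_years):
    # Straight-line depreciation: an equal charge in each year of the asset's life.
    return cost_bn / life_years

gpu_fleet = annual_writeoff(100.0, 4)    # $100bn of chips, ~4-year economic life
rail_track = annual_writeoff(100.0, 50)  # the same spend on long-lived infrastructure

print(gpu_fleet, rail_track)  # 25.0 2.0 ($bn written off per year)
```

The point of the sketch: the GPU-heavy build-out must recoup its cost more than ten times as fast as railway-style infrastructure before the assets are worthless.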

So today’s graduates are certainly going to need resilience, but that’s just what their future employers are requiring of them. They also need to build their own support structures, which are going to see them through the massive disruption that is coming whether or not the enormous bet on AI is successful. The battle to be centaurs, rather than reverse-centaurs, as I set out in my last post (or as Carlo Iacono describes beautifully in his discussion of the legacy of the Luddites here), requires these alliances: stop thinking of yourselves as being in competition with each other and start thinking of yourselves as being in competition for resources with my generation.

I remember when I first realised my generation (late Boomer, just before Generation X) was now making the weather. I had just sat a 304 Pensions and Other Benefits actuarial exam in London (now SP4 – unsuccessfully as it turned out), and nipped in to a matinee of Sam Mendes’ American Beauty and watched the plastic bag scene. I was 37 at the time.

My feeling is that our generation is now deservedly losing power, and that making reverse-centaurs of your generation is our increasingly strident, last-ditch attempt to remain in control. It is like the scene in another movie, Triangle of Sadness, where the elite are swept onto a desert island and expect the servant, the only one with survival skills in such an environment, to carry on being their servant.

Don’t fall for it. My advice to young professionals is pretty much the same as it was to actuarial students last year on the launch of chartered actuary status:

If you are planning to join a profession to make a positive difference in the world, and that is in my view the best reason to do so, then you are going to have to shake a few things up along the way.

Perhaps there is a type of business you think the world is crying out for but it doesn’t know it yet because it doesn’t exist. Start one.

Perhaps there is an obvious skill set to run alongside your professional one which most of your fellow professionals haven’t realised would turbo-charge the effectiveness of both. Acquire it.

Perhaps your company has a client in whose shoes no one has taken the time to put themselves, to communicate in a way they will properly understand and value. Be that person.

Or perhaps there are existing businesses who are struggling to manage their way in changing markets and need someone who can make sense of the data which is telling them this. Be that person.

All while remaining grounded in whichever community you have chosen for yourself. Be the member of your organisation or community who makes it better by being there.

None of these are reverse centaur positions. Don’t settle for anything less. This is your time.

In 2017, I was rather excitedly reporting about ideas which were new to me at the time regarding how technology or, as Richard and Daniel Susskind referred to it in The Future of the Professions, “increasingly capable machines” were going to affect professional work. I concluded that piece as follows:

The actuarial profession and the higher education sector therefore need each other. We need to develop actuaries of the future coming into your firms to have:

  • great team working skills
  • highly developed presentation skills, both in writing and in speech
  • strong IT skills
  • clarity about why they are there and the desire to use their skills to solve problems

All within a system which is possible to regulate in a meaningful way. Developing such people for the actuarial profession will need to be a priority in the next few years.

While all of those things are clearly still needed, it is becoming increasingly clear to me now that they will not be enough to secure a job as industry leaders double down.

Source: https://www.ft.com/content/99b6acb7-a079-4f57-a7bd-8317c1fbb728

And perhaps even worse than the threat of not getting a job immediately following graduation is the threat of becoming a reverse-centaur. As Cory Doctorow explains the term:

A centaur is a human being who is assisted by a machine that does some onerous task (like transcribing 40 hours of podcasts). A reverse-centaur is a machine that is assisted by a human being, who is expected to work at the machine’s pace.

We have known about reverse-centaurs since at least Charlie Chaplin’s Modern Times in 1936.

By Charlie Chaplin – YouTube, Public Domain, https://commons.wikimedia.org/w/index.php?curid=68516472

Think Amazon driver or worker in a fulfilment centre, sure, but now also think of highly competitive and well-paid but still ultimately human-in-the-loop roles with responsibility for AI systems designed to produce output where errors are hard to spot and therefore to stop. In the latter role you are the human scapegoat: in the phrasing of Dan Davies an “accountability sink”, or in that of Madeleine Clare Elish a “moral crumple zone”, all rolled into one. This is not where you want to be as an early career professional.

So how to avoid this outcome? Well, obviously, if you have options other than roles where a reverse-centaur situation is unavoidable, you should take them. Questions to ask at interview to identify whether the role is irretrievably reverse-centauresque would be of the following sort:

  1. How big a team would I be working in? (This might not identify a reverse-centaur role on its own: you might be one of a bank of reverse-centaurs all working in parallel and identified “as a team” while in reality having little interaction with each other).
  2. What would a typical day be in the role? This should smoke it out unless the smokescreen they put up obscures it. If you don’t understand the first answer, follow up to get specifics.
  3. Who would I report to? Get to meet them if possible. Establish whether they are a technical expert in the field you will be working in. If they aren’t, that means you are!
  4. Speak to someone who has previously held the role if possible. Although bear in mind that, if it is a true reverse-centaur role and their progress to an actual centaur role is contingent on you taking this one, they may not be completely forthcoming about all of the details.

If you have been successful in a highly competitive recruitment process, you may have a little bit of leverage before you sign the contract, so if there are aspects which you think still need clarifying, then that is the time to do so. If you recognise some reverse-centauresque elements from your questioning above, but you think the company may be amenable, then negotiate. Once you are in, you will understand a lot more about the nature of the role of course, but without threatening to leave (which is as damaging to you as an early career professional as it is to them) you may have limited negotiation options at that stage.

In order to do this successfully, self knowledge will be key. It is that point from 2017:

  • clarity about why they are there and the desire to use their skills to solve problems

To that word skills I would now add “capabilities” in the sense used in a wonderful essay on this subject by Carlo Iacono called Teach Judgement, Not Prompts.

You still need the skills. So, for example, if you are going into roles where AI systems are producing code, you need sufficiently good coding skills yourself to write a program to check the code written by the AI system. If the AI system is producing communications, your own communication skills need to go beyond producing work that communicates effectively to an audience, to the next level where you understand what it is about your own communication that achieves that: what is necessary, what is unnecessary, what gets in the way of effective communication, ie all of the things that the AI system is likely to get wrong. Then you have a template against which to assess the output from an AI system, and for designing better prompts.
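One concrete (and entirely hypothetical) sketch of what a program to check AI-written code might look like: compare the AI-written function against a slower reference implementation you trust, across many randomised inputs. The function names and the deduplication task here are invented for illustration.

```python
# A minimal sketch, assuming the task is deduplicating a list while
# preserving order. Both functions below are hypothetical examples.
import random

def ai_written_dedupe(items):
    # Stand-in for code an AI system produced; we did not write this ourselves.
    return list(dict.fromkeys(items))

def reference_dedupe(items):
    # Slower but obviously correct version we wrote and understand.
    seen, out = set(), []
    for x in items:
        if x not in seen:
            seen.add(x)
            out.append(x)
    return out

def check(fn, trials=1000):
    # Compare the AI output with the reference on many random inputs;
    # return the first failing input found, if any.
    for _ in range(trials):
        data = [random.randint(0, 9) for _ in range(random.randint(0, 20))]
        if fn(data) != reference_dedupe(data):
            return False, data
    return True, None

ok, counterexample = check(ai_written_dedupe)
print(ok)  # True here; a False would come with a failing input to inspect
```

The design point is that the checking code only needs to be obviously correct, not fast or clever: that is exactly the asymmetry a human reviewer of AI output can exploit.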

However specific skills and tools come and go, so you need to develop something more durable alongside them. Carlo has set out four “capabilities” as follows:

  1. Epistemic rigour, which is being very disciplined about challenging what we actually know in any given situation. You need to be able to spot when AI output is over-confident given the evidence, or when a correlation is presented as causation. What my tutors used to refer to as “hand waving”.
  2. Synthesis is about integrating different perspectives into an overall understanding. Making connections between seemingly unrelated areas is something AI systems are generally less good at than analysis.
  3. Judgement is knowing what to do in a new situation, beyond obvious precedent. You get to develop judgement by making decisions under uncertainty, receiving feedback, and refining your internal models.
  4. Cognitive sovereignty is all about maintaining your independence of thought when considering AI-generated content. Knowing when to accept AI outputs and when not to.

All of these capabilities can be developed with reflective practice, getting feedback and refining your approach. As Carlo says:

These capabilities don’t just help someone work with AI. They make someone worth augmenting in the first place.

In other words, if you can demonstrate these capabilities, companies who themselves are dealing with huge uncertainty about how much value they are getting from their AI systems and what they can safely be used for will find you an attractive and reassuring hire. Then you will be the centaur, using the increasingly capable systems to improve your own and their productivity while remaining in overall control of the process, rather than a reverse-centaur for which none of that is true.

One sure sign that you are straying into reverse-centaur territory is when a disproportionate amount of your time is spent on pattern recognition (eg basing an email/piece of coding/valuation report on an earlier email/piece of coding/valuation report dealing with a similar problem). That approach was always predicated on being able to interact with a more experienced human who understood what was involved in the task at some peer review stage. But it falls apart when there is no human to discuss the earlier piece of work with, because the human no longer works there, or because a human didn’t produce the earlier piece of work. The fake-it-until-you-make-it approach is not going to work in environments like these, where you are more likely to fake it until you break it. And pattern recognition is something an AI system will always be able to do much better and faster than you.

Instead, question everything using the capabilities you have developed. If you are going to be put into potentially compromising situations in terms of the responsibilities you are implicitly taking on, the decisions needing to be made and the limitations of the available knowledge and assumptions on which those decisions will need to be based, then this needs to be made explicit, to yourself and the people you are working with. Clarity will help the company which is trying to use these new tools in a responsible way as much as it helps you. Learning is going to be happening for them as much as it is for you here in this new landscape.

And if the company doesn’t want to have these discussions or allow you to hamper the “efficiency” of their processes by trying to regulate them effectively? Then you should leave as soon as you professionally can, and certainly before you become their moral crumple zone. No job is worth the loss of your professional reputation at the start of your career – these are the risks companies used to protect their senior people of the future from, and companies that are not doing this are clearly not thinking about the future at all. Which is likely to mean that they won’t have one.

To return to Cory Doctorow:

Science fiction’s superpower isn’t thinking up new technologies – it’s thinking up new social arrangements for technology. What the gadget does is nowhere near as important as who the gadget does it for and who it does it to.

You are going to have to be the generation who works these things out first for these new AI tools. And you will be reshaping the industrial landscape for future generations by doing so.

And the job of the university and further education sectors will increasingly be to equip you with both the skills and the capabilities to manage this process, whatever your course title.

Source: https://pluspng.com/img-png/mixed-economy-png–901.png

Just type “mixed economy graphic” into Google and you will get a lot of diagrams like this one – note that they normally have to pick out the United States for special mention. Notice the big gap between those countries – North Korea, Cuba, China and Russia – and us. It is a political statement masquerading as an economic one.

This same line is used to describe our political options. The Political Compass added an authoritarian/libertarian axis in their 2024 election manifesto analysis but the line from left to right (described as the economic scale) is still there:

Source: https://www.politicalcompass.org/uk2024

So here we are on our political and economic spectrum, where tiny movements between the very clustered Reform, Conservative, Labour and Liberal Democrat positions fill our newspapers and social media comment. The Greens and, presumably if it ever gets off the ground, Your Party are seen as so far away from the cluster that they often get left out of our political discourse. It is an incredibly narrow perspective and we wonder why we are stuck on so many major societal problems.

This is where we have ended up following the “slow singularity” of the Industrial Revolution I talked about in my last post. Our politics coalesced into one gymnasts’ beam, supported by the hastily constructed Late Modern English fashioned for this purpose in the 1800s, along which we have all been dancing ever since, between the market information processors at the “right” end and the bureaucratic information processors at the “left” end.

So what does it mean for this arrangement if we suddenly introduce another axis of information processing, ie the large language AI models? I am imagining something like this:

What will this mean for how countries see their economic organisation? What will it mean for our politics?

In 1884, the English theologian, Anglican priest and schoolmaster Edwin Abbott Abbott published a satirical science fiction novella called Flatland: A Romance of Many Dimensions. Abbott’s satire was about the rigidity of Victorian society, depicted as a two-dimensional world inhabited by geometric figures: women are line segments, while men are polygons with various numbers of sides. We are told the story from the viewpoint of a square, which denotes a gentleman or professional. In this world three-dimensional shapes are clearly incomprehensible, with every attempt to introduce new ideas from this extra dimension considered dangerous. Flatland is not prepared to receive “revelations from another world”, as it describes anything existing in the third dimension, which is invisible to them.

The book was not particularly well received and fell into obscurity until it was embraced by mathematicians and physicists in the early 20th century as the concept of spacetime was being developed by Poincaré, Einstein and Minkowski amongst others. And what now looks like a prophetic analysis of the limitations of the gymnasts’ beam economic and political model of the slow singularity has still not caught on at all.

However, much as with Brewster’s Millions, the incidence of film adaptations of Flatland gives some indication of when it has come back as an idea to some extent. This tells us that it wasn’t until 1965 that someone thought it was a good idea to make a movie of Flatland, and then no one else attempted it until an Italian stop-motion film in 1982. There were then two attempts in 2007, which I can’t help but think of as a comment on the developing financial crisis at the time, and, in 2012, a sequel based on Bolland: een roman van gekromde ruimten en uitdijend heelal (which translates as: Sphereland: A Fantasy About Curved Spaces and an Expanding Universe), a 1957 Dutch sequel to Flatland by Dionys Burger (which didn’t get translated into English until 1965, when the first animated film came out).

So here we are, with a new approach to processing information and language to sit alongside the established processors of the last 200 years or more. Will it perhaps finally be time to abandon Flatland? And if we do, will it solve any of our problems or just create new ones?

In 2017 I posted an article about how the future for actuaries was starting to look, with particular reference to a Society of Actuaries paper by Dodzi Attimu and Bryon Robidoux, which has since been moved to here.

I summarised their paper as follows at the time:

Focusing on…a paper produced by Dodzi Attimu and Bryon Robidoux for the Society of Actuaries in July 2016 explored the theme of robo actuaries, by which they meant software that can perform the role of an actuary. They went on to elaborate as follows:

Though many actuaries would agree certain tasks can and should be automated, we are talking about more than that here. We mean a software system that can more or less autonomously perform the following activities: develop products, set assumptions, build models based on product and general risk specifications, develop and recommend investment and hedging strategies, generate memos to senior management, etc.

They then went on to define a robo actuarial analyst as:

A system that has limited cognitive abilities but can undertake specialized activities, e.g. perform the heavy lifting in model building (once the specification/configuration is created), perform portfolio optimization, generate reports including narratives (e.g. memos) based on data analysis, etc. When it comes to introducing AI to the actuarial profession, we believe the robo actuarial analyst would constitute the first wave and the robo actuary the second wave.

They estimate that the first wave is 5 to 10 years away and the second 15 to 20 years away. We have been warned.

So 9 years on from their paper, how are things looking? Well the robo actuarial analyst wave certainly seems to be pretty much here, particularly now that large language models like ChatGPT are being increasingly used to generate reports. It suddenly looks a lot less fanciful to assume that the full robo actuary is less than 11 years away.

But now the debate on AI appears to be shifting to an argument between whether we are heading for Vernor Vinge’s “Singularity” where the increasingly capable systems

would not be humankind’s “tool” — any more than humans are the tools of rabbits or robins or chimpanzees

on the one hand, and, on the other, the idea that “it is going to take a long time for us to really use AI properly…, because of how hard it is to regear processes and organizations around new tech”.

In his article on Understanding AI as a social technology, Henry Farrell suggests that neither of these positions allow a proper understanding of the impact AI is likely to have, instead proposing the really interesting idea that we are already part way through a “slow singularity”, which began with the industrial revolution. As he puts it:

Under this understanding, great technological changes and great social changes are inseparable from each other. The reason why implementing normal technology is so slow is that it requires sometimes profound social and economic transformations, and involves enormous political struggle over which kinds of transformation ought happen, which ought not, and to whose benefit.

This chimes with what I was saying recently about AI possibly not being the best place to look for the next industrial revolution. Farrell plausibly describes the current period using the words of Herbert Simon. As Farrell says: “Human beings have quite limited internal ability to process information, and confront an unpredictable and complex world. Hence, they rely on a variety of external arrangements that do much of their information processing for them.” So Simon says of markets, for instance, which:

appear to conserve information and calculation by assigning decisions to actors who can make them on the basis of information that is available to them locally – that is, without knowing much about the rest of the economy apart from the prices and properties of the goods they are purchasing and the costs of the goods they are producing.

And bureaucracies and business organisations, similarly:

like markets, are vast distributed computers whose decision processes are substantially decentralized. … [although none] of the theories of optimality in resource allocation that are provable for ideal competitive markets can be proved for hierarchy, … this does not mean that real organizations operate inefficiently as compared to real markets. … Uncertainty often persuades social systems to use hierarchy rather than markets in making decisions.

Large language models by this analysis are then just another form of complex information processing, “likely to reshape the ways in which human beings construct shared knowledge and act upon it, with their own particular advantages and disadvantages. However, they act on different kinds of knowledge than markets and hierarchies”. As an Economist article Farrell co-wrote with Cosma Shalizi says:

We now have a technology that does for written and pictured culture what large-scale markets do for the economy, what large-scale bureaucracy does for society, and perhaps even comparable with what print once did for language. What happens next?

Some suggestions follow and I strongly recommend you read the whole thing. However, if we return to what I and others were saying in 2016 and 2017, it may be that we were asking the wrong question. Perhaps the big changes of behaviour required of us to operate as economic beings have already happened (the start of the “slow singularity” of the industrial revolution) and the removal of alternatives that required us to spend increasing proportions of our time within and interacting with bureaucracies and other large organisations were the logical appendage to that process. These processes are merely becoming more advanced rather than changing fundamentally in form.

And the third part, ie language? What started with the emergence of Late Modern English in the 1800s looks like it is now being accelerated via a new way of complex information processing applied to written, pictured (and I would say also heard) culture.

So the future then becomes something not driven by technology, but by our decisions about which processes we want to allow or even encourage and which we don’t, whether those are market processes, organisational processes or large language processes. We don’t have to have robo actuaries or even robo actuarial analysts, but we do have to make some decisions.

And students entering this arena need to prepare themselves to be participants in those decisions rather than just victims of them. A subject I will be returning to.

I have just been reading Adrian Tchaikovsky’s Service Model. I am sure I will think about it often for years to come.

Imagine a world where “Everything was piles. Piles of bricks and shattered lumps of concrete and twisted rods of rebar. Enough fine-ground fragments of glass to make a whole razory beach. Shards of fragmented plastic like tiny blunted knives. A pall of ashen dust. And, to this very throne of entropy, someone had brought more junk.”

This is Earth outside a few remaining enclaves. And all served by robots, millions of robots.

Robots: like our protagonist (although he would firmly resist such a designation) Uncharles, who has been programmed to be a valet, or gentleman’s gentlerobot; or librarians tasked with preserving as much data from destruction or unauthorised editing as possible; or robots preventing truancy from the Conservation Farm Project where some of the few remaining humans are conscripted to reenact human life before robots; or the fix-it robots; or the warrior robots prosecuting endless wars.

Uncharles, after slitting the throat of his human master for no reason that he can discern, travels this landscape with his hard-to-define and impossible-to-shut-up companion The Wonk, who is very good at getting into places but often not so good at extracting herself. Until they finally arrive in God’s waiting room and take a number.

Along the way The Wonk attempts to get Uncharles to accept that he has been infected with a Protagonist Virus, which has given Uncharles free will. And Uncharles finds his prognosis routines increasingly unhelpful to him as he struggles to square the world he is perambulating with the internal model of it he carries inside him.

The questions that bounce back between our two unauthorised heroes are many and various, but revolve around:

  1. Is there meaning beyond completing your task list or fulfilling the function for which you were programmed?
  2. What is the purpose of a gentleman’s gentlerobot when there are no gentlemen left?
  3. Is the appearance of emotion in some of Uncharles’ actions and communications really just an increasingly desperate attempt to reduce inefficient levels of processing time? Or is the Protagonist Virus an actual thing?

Ultimately the question is: what is it all for? And when they finally arrive in front of God, the question is thrown back at us, the pile of dead humans rotting across the landscape of all our trash.

This got me thinking about a few things in a different way. One of these was AI.

Suppose AI is half as useful as OpenAI and others are telling us it will be. Suppose that we can do all of these tasks in less than half the time. How is all of that extra time going to be distributed? In 1930 Keynes speculated that his grandchildren would only need to work a 15 hour week. And all of the productivity improvements he assumed in doing so have happened. Yet still full-time work remains the aspiration.
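As a back-of-envelope check (my own illustrative assumptions, not Keynes’s exact figures: a 48-hour 1930 working week, and the mid-range of his speculated four-to-eightfold rise in living standards):

```python
# Illustrative arithmetic only: Keynes (1930) imagined living standards
# rising 4-8x within a century. Take a 48-hour week and the mid-range
# multiple, and ask how few hours would now match 1930 output.
hours_1930 = 48            # assumed typical full-time week in 1930
productivity_multiple = 6  # assumed mid-range of Keynes's 4-8x
hours_for_1930_output = hours_1930 / productivity_multiple
print(f"{hours_for_1930_output:.0f} hours a week")  # prints "8 hours a week"
```

On those assumptions, eight hours a week would buy the 1930 standard of living; the 15-hour week Keynes imagined allowed for living considerably better than that. Either way, the hours have not fallen anything like that far.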

There certainly seems to have been a change of attitude from around 1980 onwards, with those who could choose choosing to work longer, for various reasons which economists are still arguing about, and therefore the hours lost were from those who couldn’t choose, as The Resolution Foundation have pointed out. Unfortunately neither their pay, nor their quality of work, have increased sufficiently for those hours to meet their needs.

So, rather than asking where the hours have gone, it probably makes more sense to ask where the money has gone. And I think we all know the answer to that one.

When Uncharles and The Wonk finally get in to see God, God gives an example of a seat designed to stop vagrants sleeping on it as the indication it needed of the kind of society humans wanted. One where the rich wanted not to have to see or think about the poor. Replacing all human contact with eternally indefatigable and keen-to-serve robots was the world that resulted.

Look at us clever humans, constantly dreaming of ways to increase our efficiency, remove inefficient human interaction, or indeed any interaction which cannot be predicted in advance. Uncharles’ seemingly emotional responses, when he rises above the sea of task-queue-clutching robots all around him, are to what he sees as inefficiency. But what should be the goal? Increasing GDP can’t be it, that is just another means. We are currently working extremely hard and using a huge proportion of news and political affairs airtime and focus on turning the English Channel into the seaborne equivalent of the seat where vagrants and/or migrants cannot rest.

So what should be the goal? Because the reason Service Model will stay with me for some time to come is that it shows us what happens if we don’t have one. The means take over. It seems appropriate to leave the last word to a robot.

“Justice is a human-made thing that means what humans wish it to mean and does not exist at all if humans do not make it,” Uncharles says at one point. “I suggest that ‘kind and ordered’ is a better goal.”

The 1960s version of The Magnificent Seven (itself a remake of Kurosawa’s Seven Samurai) before most of them were shot dead

In my last post, I suggested that there appeared to be a campaign to impugn the character of the younger generation as cover for reducing graduate recruitment, partly because of the desire to make AI systems of various sorts handle a wider and wider range of tasks. However there are other reasons why the value of AI needs to be promoted to the point where if your toaster or fridge is not using a chip they absolutely should be. It is all about the dependence of the US stock market on the so-called Magnificent 7 companies: Alphabet (Google), Apple, Meta (Facebook), Tesla, Amazon, Microsoft and Nvidia whose combined market capitalisation as at 22 July was 31% of the S&P500.

Nvidia? Who are they? They produce silicon chips. As Laura Bratton wrote in May:

As of Nvidia’s 2025 fiscal fourth quarter (the three months ending on Jan. 26 of this year), Bloomberg estimates that Microsoft spends roughly 47% of its capital expenditures directly on Nvidia’s chips and accounts for nearly 19% of Nvidia’s revenue on an annualized basis.

Meanwhile, 25% of Meta’s capital expenditures go to Nvidia and the company accounts for just over 9% of Nvidia’s annual revenue.

Amazon, Alphabet and Tesla are also big customers.

Nvidia is a growth stock, which means that it needs continued growth to support its share price. Once it ceases to be a growth stock, the kind of price-earnings ratio it currently enjoys (nudging up to 60; by comparison, the price-earnings ratio of, say, HSBC is around 17.5) will no longer be acceptable to investors and a large correction in the share price will happen. So a growth slowdown in the Magnificent 7 is big news.
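To make the repricing risk concrete, here is a sketch of the mechanics of a P/E de-rating using the two ratios quoted above (an illustration of the arithmetic, not a forecast):

```python
# If earnings stay flat but the market re-rates a stock from a growth
# P/E to a mature one, the price falls in proportion to the ratio change.
growth_pe = 60.0   # roughly the Nvidia price-earnings ratio quoted above
mature_pe = 17.5   # roughly HSBC's, quoted above for comparison
implied_fall = 1 - mature_pe / growth_pe
print(f"Implied price fall on re-rating: {implied_fall:.0%}")  # prints "71%"
```

In other words, a move from a growth rating to a bank-style rating, with no change in earnings at all, would imply roughly a 70% fall in the share price.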

What would prevent a growth slowdown? Well a lot of processing-heavy sales for Facebook, Amazon, Apple and Google primarily. That is why there is now an AI overview of your Google search, why Rufus sits at the bottom of your Amazon search and everything appears to have a voice activated capability which can be accessed via Alexa or Siri these days.

Of course I am not arguing that there are not uses for large language models (LLMs) and other technologies currently wrapped up in the term AI. Seth Godin, usually a first mover in this space, has produced a set of cards with prompts for your LLM that you can tailor for various uses. Many people are seeing how AI applications can cut down the time they spend on everything from diary management to constructing PowerPoint presentations. There is no doubt that use of AI will have changed the way we do some things in a few years’ time. It will not, however, have replaced all of the jobs in Microsoft’s list, from mathematician to geographer to historian to writer. If you want a (much) fuller critique of what is misguided about the AI bubble, I refer you to The Hater’s Guide To The AI Bubble.

There is a lot of rough surrounding a few diamonds and the conditions for a bubble are all there. We know this because we have been here before. On 10 March 2000, the dotcom bubble burst. As Goldman Sachs puts it:

The Nasdaq index rose 86% in 1999 alone, and peaked on March 10, 2000, at 5,048 units. The mega-merger of AOL with TimeWarner seemed to validate investors’ expectations about the “new economy”. Then the bubble imploded. As the value of tech stocks plummeted, cash-strapped internet startups became worthless in months and collapsed. The market for new IPOs froze. On October 4, 2002, the Nasdaq index fell to 1,139.90 units, a fall of 77% from its peak.

Fortune are now claiming that the current AI boom is bigger than the dotcom bubble. And even leading figures in the AI industry admit that it is already a bubble.

This is where it gets interesting. The FT, in its reflection on these parallels, appears to be comforted by the big names involved this time:

To be sure, the parallels are not exact. They never are. While most of the dotcom companies were ephemeral newcomers, the Mag 7 include some of the world’s most profitable and impressive groups including Apple, Amazon and Microsoft, as well as the main supplier to the AI economy, Nvidia.

But of course this is the reason why it’s worse this time. We were able to manage without the “ephemeral newcomers”, although Amazon‘s share price fell by 90% over 2 years and Microsoft lost 60%, so the comparison is not quite true. However these companies were not the foundations of the economy then that they are now.

If Nvidia is the essential supply chain for all the other 6 of the Magnificent 7, then its own supply chain is equally precarious. As Ed Conway’s excellent Material World points out, Nvidia is “fabless” (ie without its own fabrication plant) and relies on Taiwan Semiconductor Manufacturing Company (TSMC) for the manufacture of its processors. They in turn are completely dependent on the company which makes the machines essential to their manufacturing units, ASML. As Conway says:

As of this moment, ASML is the only company in the world capable of making these machines, and TSMC is, alongside Samsung, the only company capable of putting such technology into mass production.

And then there are the raw materials required in these industries. Much has been made, by Diane Coyle and others, of the “weightless” nature of our global economy. Conway demolishes this fairly comprehensively:

In 2019, the latest year of data at the time of writing, we mined, dug and blasted more materials from the earth’s surface than the sum total of everything we extracted from the dawn of humanity all the way through to 1950.

There is a place in North Carolina called Spruce Pines where they mine the purest quartz in the world. As one person Conway interviewed said:

“If you flew over the two mines in Spruce Pine with a crop duster loaded with a very particular powder, you could end the world’s production of semiconductors and solar panels within six months.”

Whereas China controls the solar panel market, it is reliant on imports for its semiconductors. In 2017 these imports cost China more than the value of Saudi Arabia’s oil exports, or of the entire global trade in aircraft.

Conway muses on whether China would invade Taiwan because of this and concludes probably not.

“Even if China invaded Taiwan and even if TSMC’s fabs survived the assault…that would not resolve its issue. Fab 18 [TSMC’s plant] might be where the world’s most advanced chips are made, but they are mostly designed elsewhere”.

However it would certainly be hugely disruptive if that were your goal. So even if the share prices of the Magnificent 7 don’t plummet of their own accord, they might be eviscerated by a crop duster or an assault on Taiwan.

There are so many needles poised to prick this particular bubble that it would seem prudent, as a company, to be cautious about how dependent you make yourself on AI technology over the next few years.

Trump mentions in BBC News US & Canada top feed around 4.30pm today. Out of 12 stories, 8 mention Trump by name in the headline https://www.bbc.co.uk/news/world/us_and_canada

You will have all seen the work mug staple: “The Difficult We Do Immediately. The Impossible Takes a Little Longer”. The quotation in the title, originally attributed to Charles Alexandre de Calonne, the Finance Minister for Louis XVI, in response to a request for money from his Queen, Marie Antoinette, appeared in a collection from 1794. That was a year after Louis and Marie Antoinette (but not Charles, who survived another nine years) died on the guillotine, and five years after George Washington had been inaugurated as the first President of the United States. It seems as if the seemingly impossible may need to be attempted once again.

So let’s start by expanding on the problem which I brought up in my last post. The problem goes much wider than Donald Trump. He is assembling a court of loyalists around him, in the style of a mob boss, which, as has been observed by others, has been the prelude to fascism in the past. As Jason Stanley, Professor of Philosophy at Yale and author of Erasing History: how fascists rewrite the past to control the future, puts it: “the United States is your enemy”. There is also considerable circumstantial evidence to suggest that Trump is considered an agent of influence by Putin’s regime in Russia.

The difficulty of what I am about to suggest is also the reason why it is so urgent: our relationship with the United States (the one whose special nature we keep needing successive US Presidents to reassure us about) is positively symbiotic. George Monbiot lists some of our vulnerabilities here:

  1. Through the “Five Eyes” partnership, the UK automatically shares signals intelligence, human intelligence and defence intelligence with the US government. The two governments, with other western nations, run a wide range of joint intelligence programmes, such as Prism, Echelon, Tempora and XKeyscore. The US National Security Agency (NSA) uses the UK agency GCHQ as a subcontractor.
  2. Depending on whose definitions you accept, the US has either 11 or 13 military bases and listening stations in the UK. They include RAF Lakenheath in Suffolk, from which it deploys F-35 jets; RAF Menwith Hill in North Yorkshire, which carries out military espionage and operational support for the NSA in the US; RAF Croughton, part-operated by the CIA, which allegedly used the base to spy on Angela Merkel among many others; and RAF Fylingdales, part of the US Space Surveillance Network. If the US now sides with Russia against the UK and Europe, these could just as well be Russian bases and listening stations.
  3. Then we come to our weapon systems… among the crucial components of our defence are F-35 stealth jets, designed and patented in the US.
  4. Many of our weapons systems might be dependent on US CPUs and other digital technologies, or on US systems such as Starlink, owned by Musk, or GPS, owned by the US Space Force. Which of our weapons systems could achieve battle-readiness without US involvement and consent? Which could be remotely disabled by the US military?
  5. Then there is our independent nuclear deterrent, which is “neither British nor independent” according to Professor Norman Dombey, Emeritus Professor of Physics and Astronomy at the University of Sussex.

Then there is the sheer cost of rearming with Europe to the extent necessary in the absence of the United States’ support: estimates suggest 3.5% rather than 2.5% of GDP will be required, meaning the UK Government, with its WCAIWCDI approach described here, will need to find something in addition to the foreign aid budget to ransack. I will be talking more about defence spending in a future post.

It is small wonder that some commentators, such as Arthur Snell, former Assistant Director for Counter-Terrorism at the Foreign and Commonwealth Office, conclude that disentangling ourselves from the United States may be impossible. And that is just considering defence and security considerations.

On the economy the symbiosis is just as evident. First of all there is the sizeable proportion of our imports and exports of both goods and services which are with the United States. As recently as June 2023, we were trying hard to develop these further with something called the Atlantic Declaration. Although, as a recent speech by Megan Greene of the Bank of England’s Monetary Policy Committee shows, our trade with the US as a proportion has remained remarkably stable since 2000 at least.

Source: ONS and Bank calculations. Trade weights for each trading partner are calculated as the sum of bilateral exports and imports as a share of total UK trade. Data is annual and in current prices. EU refers to the EU27. Latest data point is 2023
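The trade-weight calculation described in the chart’s source note is simple enough to sketch; the figures below are invented purely for illustration, not taken from the ONS data:

```python
# Trade weight = (bilateral exports + bilateral imports) / total UK trade,
# as described in the chart source note. Figures are hypothetical (£bn).
bilateral = {"US": (190, 120), "EU27": (340, 430)}  # (exports, imports)
total_trade = sum(e + i for e, i in bilateral.values())
weights = {p: (e + i) / total_trade for p, (e, i) in bilateral.items()}
print({p: f"{w:.0%}" for p, w in weights.items()})
```

The weights by construction sum to one across all partners, which is why the US share can stay flat even while the absolute value of trade grows.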

Culturally, the United States is embedded in our laptops and mobile phones, our television programmes and movies, and our social media. Its concerns have permeated our language and our politics. A reasonable proportion of our political and financial elite have been to their universities and theirs to ours. Many of our employers have US parents: just in the actuarial world, two of the three biggest consultancies (Aon and Willis Towers Watson) are described as British-American firms, with the other one (Mercer) headquartered in New York. It has Apple. It has Amazon. It has Google. It has Meta and, of course, X.

And perhaps the greatest entanglement of our two countries is political, to the extent that we routinely send our politicians to each other’s countries to support election campaigns and our media breathlessly report every in and out of the US Presidential elections. We are lucky if a French or German one is mentioned more than a couple of weeks before it takes place. Whether it is the language thing (we are still VERY resistant to learning other languages) or the post-imperial thing (feeling like we have a special understanding of the problems the United States face as a self-appointed global police force) or the degree of financialisation of our economy or some other reason, it is very hard to avoid a sense of being conjoined with the United States of America.

But it is precisely because our relationship is so close in so many important areas that we are particularly vulnerable to US pressure – the harder it will be to disentangle ourselves, the more urgent it is that we do.

As David Allen Green puts it this week, the US is currently undergoing a diplomatic revolution. Originally applied to France’s realignment of all of its alliances away from Prussia and towards Austria, which ultimately led to the work mug motto at the start of this piece, the US appears to be realigning itself towards Russia and away from the UK and the EU. As Green goes on to say:

Other countries would now be prudent to regulate their affairs so as to minimise or eliminate their dependency on the United States – it is no longer a question of waiting out until the next United States elections.

And other political systems would be wise to limit what can be done within their own constitutions by executive order, and to strengthen the roles of the legislature and the judiciary (and also of internal independent legal advice within government).

The last seems key to me. We cannot, particularly now we are outside the EU, afford for our main ally to be capable of being so capricious. This applies whether or not the US is allowed to, and does, elect a President in 2028 who is respectful of its institutions and constitution. We always felt Americans were very respectful of their constitution because they never stopped talking about it, but it turns out to have been a thin veneer with little meaning. Much like our discussion of sovereignty in the UK.

The first thing we need to do is to stop obsessing about what John Mulaney memorably referred to as a “horse in a hospital” in 2019. Despite the fact that that was five years ago, and that we have seen a horse in the hospital before, many have been turned off news coverage altogether by the anxiety caused by the constant media narration of what Trump and Musk have done next each day. The dangers of treating the Trump and Musk chaos as a TV show are potentially existential in the US but grave for us in the UK too.

While we may have deep sympathy for the people in the US and other countries caught up in the chaos, our priority has to be to get our own house in order. Otherwise we won’t be any help to anyone.

My priorities would be the ones I set out in October 2022, only now with much greater urgency.

  1. We can’t have parties with only 20% of the popular vote (34% of a 60% turnout) having an absolute majority of 174 seats. We need proportional representation, so that every vote counts equally and perhaps we might get somewhere near the turnout of Germany’s last election of 82.5%.
  2. Reform media ownership and promote plurality in support of a more democratic and accountable media system. The Media Reform Coalition has produced a manifesto for a people’s media which I support: it includes proposals for an Independent Media Commons – with participatory newsrooms, community radio stations, digital innovators and cultural producers, supported by democratically-controlled public resources to tell the stories of all the UK’s communities. As we know, our social media is controlled by Meta (whose Facebook, WhatsApp and Instagram each have more than 2 billion active users) and by Google (whose YouTube also has more than 2 billion). X still has over half a billion, despite what Musk has done with it. In newspapers, 90% of daily circulation is controlled by three firms: News UK, Daily Mail Group and Reach plc (which owns most of the local titles you’ve ever heard of, including the Birmingham Mail and Birmingham Live, as well as the Daily Express and the Daily Star).
  3. Reform election finance. Recommendations for doing this were provided in the July 2021 report by the Committee on Standards in Public Life. An eye-watering amount of money was spent in the US Presidential Election this time: the Democrats spent $1.8 billion and the Republicans $1.4 billion, with $2.6 billion and $1.7 billion respectively being spent by the two parties on the Senate and House races. In the UK, paradoxically, the relatively small amount of money donated to parties means that they are potentially more vulnerable to well-organised lobbying operations. This is why the offer of $100 million by Musk to Reform led to calls to restrict foreign political donations to profits generated within the UK.

This way we would be more resilient to the many ways that the current chaotic United States establishment can reach into our own politics and governance, and start to develop policies with broad support which can reduce our dependency on the United States.

Risk trajectory (black circle) shows the anticipated future state for the risk in 2050. Current risk position in grey. Source: https://actuaries.org.uk/planetary-solvency

The excellent report from the Institute and Faculty of Actuaries and the University of Exeter Planetary Solvency – finding our balance with nature splits the risk trajectories into four sections: Climate, Nature, Society and Economy. I have focused on the Society one above as, in my view, this is the reason we are interested in all of the other ones. According to the Planetary Solvency report, we are on track for a society in 2050 described as follows:

Nature and climate risk trajectories will drive further biophysical constraints including stresses on water supply, further food supply impacts, heat stress, increased disease vectors, likely to drive migration and conflict. Possible to Likely risk of Severe to Decimation level societal impacts, with increasingly severe direct and indirect consequences of climate and nature risks driving socio-political fragmentation in exposed and vulnerable regions.

So what are we doing about it? Well the United States has just voted in Donald Trump as President. There was a flurry of executive orders issued in his first week (with the appropriate caveats about how many of these might actually be implemented), the climate-related ones of which are neatly summarised here by Bill McKibben:

The attacks on sensible energy policy have been swift and savage. We exited the Paris climate accords, paused IRA spending, halted wind and solar projects, gutted the effort to help us transition to electric vehicles, lifted the pause on new LNG export projects, canceled the Climate Corps just as it was getting off the ground, and closed the various government agencies dedicated to environmental justice. Oh, and we declared an “energy emergency” to make it easier to do all of the above.

Timothy Snyder has written about how to respond to tyranny in your own country. What is happening currently in the United States is threatening tyranny for many (as Robert Reich lists here):

The government now recognizes only two “immutable” genders, male and female. Migrants (now referred to as “aliens”) are being turned away at the border. Immigration agents are freed to target hospitals, schools, and churches in search of people to deport. Diversity efforts in the federal government have been dismantled and employees turned into snitches. Federal money will be barred from paying for many abortions.

The first thing you should do, according to Timothy Snyder, is to not obey in advance.

Most of the power of authoritarianism is freely given. In times like these, individuals think ahead about what a more repressive government will want, and then offer themselves without being asked. A citizen who adapts in this way is teaching power what it can do.

And how did we respond to all of this in the UK? Well, Keir Starmer was keen to tell The Donald that we were deregulating to boost growth in their first phone call. His reward for this was the story that Trump thought he was doing a good job. Supposedly an endorsement from the “Drill Baby Drill” guy is the proper corrective to being told he should be locked up by the Nazi salute guy.

And then there were the actions on the environment. From the talking out of the Climate and Nature Bill, which sought to meet new legally binding targets on climate change and protect nature. To a housing policy which will be both hugely environmentally destructive and fail to make houses more affordable. To announcing the intention to overhaul the planning rules, in the upcoming Planning and Infrastructure Bill, to reduce the power of people to object (and, as the Conservatives’ restrictions on protest have not been lifted, to bang us up for years on end if we subsequently demonstrate about it) so that global firms would think that the UK was a “great place to invest”.

And then today we had Rachel Reeves’ big speech. Approval for developing the third runway at Heathrow, as had been extensively trailed, and the creation of “Europe’s Silicon Valley” between Oxford and Cambridge were the main announcements. There was quite a lot of talk about investment in sustainable aviation fuel (which means biofuels, the benefits of which have already been shown to be wiped out by rising demand).

And as for the Silicon Valley idea, I am not sure we want one. First there is the lack of real innovation despite the excellent game they talk. And second, is it going to be the authoritarian nightmare that the Californian one is turning into? The early signs are not good. Just last week Marcus Bokkerink, the Chair of the Competition and Markets Authority (CMA), was replaced by Doug Gurr, until recently Jeff Bezos’ head of Amazon UK. So not exactly standing up to Technofeudalism then.

According to Cory Doctorow:

Marcus Bokkerink, the outgoing head of the CMA, was amazing, and he had charge over the CMA’s Digital Markets Unit, the largest, best-staffed technical body of any competition regulator, anywhere in the world. The DMU uses its investigatory powers to dig deep into complex monopolistic businesses like Amazon, and just last year, the DMU was given new enforcement powers that would let it custom-craft regulations to address tech monopolization (again, like Amazon’s).

But it’s even worse. The CMA and DMU are the headwaters of a global system of super-effective Big Tech regulation. The CMA’s deeply investigated reports on tech monopolists are used as the basis for EU regulations and enforcement actions, and these actions are then re-run by other world governments, like South Korea and Japan.

When you see Trump flanked by Bezos and the other Tech Bros at his inauguration, it certainly feels like we are obeying in advance. Rachel Reeves’ speech had an enormous increase in energy demand implicit in pretty much every measure announced, which is to be expected, because GDP (the thing she is looking to boost) and energy consumption have been in lockstep forever. This is the implication of prioritising GDP growth over everything else.

What was missing was a compensatory increase in renewable energy capacity and/or a reorganisation of our economy away from energy intensity. The problem for the government is that the latter would not increase GDP, so instead we get the absurd spectacle of the Business Secretary saying we “cannot afford to not build runways”.

However it seems that when the motivation is big enough (in this case, to dispute the assertion that the Russian economy is doing well in wartime despite the official statistics, which the EU really needs to do in order to continue to make the case for sanctions), alternative ways to measure the economy can be found. In section 3.2 we find this:

The general assumption of connecting GDP growth to making people better off is not relevant in this situation, which should be included in any discussion of how the Russian economy is doing.

What is interesting about this analysis is that:

a. It is carried out by the kind of orthodox economists (the Stockholm Institute of Transition Economics) who believe GDP would be a good index to use in normal circumstances; and

b. They are saying this even if the GDP figures published by Russia are technically accurate. As they go on to say:

What this analysis suggests is that if we believe in official Russian statistics, then Russia has economic capacity to sustain current policies in the short run, a conclusion shared with many other observers. We also find, though, that beyond the GDP numbers, the redirection into a war economy is already putting pressure on all sectors not directly involved in the war, causing internal macroeconomic imbalances, increasing risks in the financial sector, and eroding export revenues and existing reserves. Short term growth is kept up by a massive fiscal stimulus, but the impact is mitigated by necessary monetary contraction to deal with inflationary pressures, and structural factors (demographics, weak property rights) limiting the possible economic response to the stimulus.

Some of which sound familiar closer to home – “necessary monetary contraction” (things we cannot afford) and “increasing risks in the financial sector” anyone?

We are currently facilitating a world where the only capacity we are increasing is to fly over the climate-ravaged areas of the globe and their fleeing populations. Fly Baby Fly is not going to get us anywhere we want to go.