In 2017, I was rather excitedly reporting about ideas which were new to me at the time regarding how technology or, as Richard and Daniel Susskind referred to it in The Future of the Professions, “increasingly capable machines” were going to affect professional work. I concluded that piece as follows:

The actuarial profession and the higher education sector therefore need each other. We need to develop actuaries of the future coming into your firms to have:

  • great team working skills
  • highly developed presentation skills, both in writing and in speech
  • strong IT skills
  • clarity about why they are there and the desire to use their skills to solve problems

All within a system which is possible to regulate in a meaningful way. Developing such people for the actuarial profession will need to be a priority in the next few years.

While all of those things are clearly still needed, it is becoming increasingly clear to me now that they will not be enough to secure a job as industry leaders double down.

Source: https://www.ft.com/content/99b6acb7-a079-4f57-a7bd-8317c1fbb728

And perhaps even worse than the threat of not getting a job immediately following graduation is the threat of becoming a reverse-centaur. As Cory Doctorow explains the term:

A centaur is a human being who is assisted by a machine that does some onerous task (like transcribing 40 hours of podcasts). A reverse-centaur is a machine that is assisted by a human being, who is expected to work at the machine’s pace.

We have known about reverse-centaurs since at least Charlie Chaplin’s Modern Times in 1936.

By Charlie Chaplin – YouTube, Public Domain, https://commons.wikimedia.org/w/index.php?curid=68516472

Think of an Amazon driver or a worker in a fulfilment centre, sure, but now also think of highly competitive and well-paid but still ultimately human-in-the-loop roles, responsible for AI systems designed to produce output where errors are hard to spot and therefore to stop. In the latter role you are the human scapegoat: in the phrasing of Dan Davies an “accountability sink”, or in that of Madeleine Clare Elish a “moral crumple zone”, all rolled into one. This is not where you want to be as an early career professional.

So how to avoid this outcome? Well, obviously, if you have alternatives to roles where a reverse-centaur situation is unavoidable, you should take them. Questions to ask at interview to identify whether a role is irretrievably reverse-centauresque would be of the following sort:

  1. How big a team would I be working in? (This might not identify a reverse-centaur role on its own: you might be one of a bank of reverse-centaurs all working in parallel and identified “as a team” while in reality having little interaction with each other).
  2. What would a typical day be in the role? This should smoke it out unless the smokescreen they put up obscures it. If you don’t understand the first answer, follow up to get specifics.
  3. Who would I report to? Get to meet them if possible. Establish whether they are a technical expert in the field you will be working in. If they aren’t, that means you are!
  4. Speak to someone who has previously held the role if possible. Although bear in mind that, if it is a true reverse-centaur role and their progress to an actual centaur role is contingent on you taking this one, they may not be completely forthcoming about all of the details.

If you have been successful in a highly competitive recruitment process, you may have a little bit of leverage before you sign the contract, so if there are aspects which you think still need clarifying, then that is the time to do so. If you recognise some reverse-centauresque elements from your questioning above, but you think the company may be amenable, then negotiate. Once you are in, you will understand a lot more about the nature of the role of course, but without threatening to leave (which is as damaging to you as an early career professional as it is to them) you may have limited negotiation options at that stage.

In order to do this successfully, self-knowledge will be key. It is that point from 2017:

  • clarity about why they are there and the desire to use their skills to solve problems

To that word skills I would now add “capabilities” in the sense used in a wonderful essay on this subject by Carlo Iacono called Teach Judgement, Not Prompts.

You still need the skills. If, for example, you are going into roles where AI systems are producing code, you need sufficiently good coding skills yourself to write a program to check the code written by the AI system. If the AI system is producing communications, your own communication skills need to go beyond producing work that communicates effectively to an audience, to the next level where you understand what it is about your own communication that achieves that: what is necessary, what is unnecessary, what gets in the way of effective communication, i.e. all of the things that the AI system is likely to get wrong. Then you have a template against which to assess the output from an AI system, and for designing better prompts.
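As a minimal sketch of what a program to check AI-written code might look like (the function and the properties here are entirely hypothetical, chosen just to illustrate the idea): rather than eyeballing the AI system’s output, you write down the properties you expect it to satisfy and check them mechanically across many inputs.

```python
# Hypothetical example: suppose an AI system has produced this function,
# which is meant to round a premium to the nearest penny.
def ai_round_premium(amount: float) -> float:
    return round(amount, 2)

# Rather than trusting the code by inspection, encode the properties you
# expect and check them across many inputs. The properties are yours, not
# the AI's -- that is the skill the checking program embodies.
def check_rounding(fn, amounts):
    failures = []
    for amount in amounts:
        result = fn(amount)
        # Property 1: rounding should never move a value by more than
        # half a penny (small epsilon for floating-point noise).
        if abs(result - amount) > 0.005 + 1e-9:
            failures.append((amount, result))
        # Property 2: rounding an already-rounded value should change nothing.
        if fn(result) != result:
            failures.append((amount, result))
    return failures

amounts = [x / 7 for x in range(1, 1000)]
print(check_rounding(ai_round_premium, amounts))  # expected: [] if all properties hold
```

The point is not this particular check but the habit: you only know what properties to assert because you could have written the function yourself.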

However, specific skills and tools come and go, so you need to develop something more durable alongside them. Carlo has set out four “capabilities” as follows:

  1. Epistemic rigour, which is being very disciplined about challenging what we actually know in any given situation. You need to be able to spot when AI output is over-confident given the evidence, or when a correlation is presented as causation. What my tutors used to refer to as “hand waving”.
  2. Synthesis is about integrating different perspectives into an overall understanding. Making connections between seemingly unrelated areas is something AI systems are generally less good at than analysis.
  3. Judgement is knowing what to do in a new situation, beyond obvious precedent. You get to develop judgement by making decisions under uncertainty, receiving feedback, and refining your internal models.
  4. Cognitive sovereignty is all about maintaining your independence of thought when considering AI-generated content. Knowing when to accept AI outputs and when not to.

All of these capabilities can be developed with reflective practice, getting feedback and refining your approach. As Carlo says:

These capabilities don’t just help someone work with AI. They make someone worth augmenting in the first place.

In other words, if you can demonstrate these capabilities, companies who themselves are dealing with huge uncertainty about how much value they are getting from their AI systems and what they can safely be used for will find you an attractive and reassuring hire. Then you will be the centaur, using the increasingly capable systems to improve your own and their productivity while remaining in overall control of the process, rather than a reverse-centaur for which none of that is true.

One sure sign that you are straying into reverse-centaur territory is when a disproportionate amount of your time is spent on pattern recognition (eg basing an email/piece of coding/valuation report on an earlier email/piece of coding/valuation report dealing with a similar problem). That approach was always predicated on being able to interact with a more experienced human who understood what was involved in the task at some peer review stage. But it falls apart when there is no human to discuss the earlier piece of work with, because the human no longer works there, or a human didn’t produce the earlier piece of work. The fake it until you make it approach is not going to work in environments like these where you are more likely to fake it until you break it. And pattern recognition is something an AI system will always be able to do much better and faster than you.

Instead, question everything using the capabilities you have developed. If you are going to be put into potentially compromising situations in terms of the responsibilities you are implicitly taking on, the decisions needing to be made and the limitations of the available knowledge and assumptions on which those decisions will need to be based, then this needs to be made explicit, to yourself and the people you are working with. Clarity will help the company which is trying to use these new tools in a responsible way as much as it helps you. Learning is going to be happening for them as much as it is for you here in this new landscape.

And if the company doesn’t want to have these discussions or allow you to hamper the “efficiency” of their processes by trying to regulate them effectively? Then you should leave as soon as you possibly can professionally and certainly before you become their moral crumple zone. No job is worth the loss of your professional reputation at the start of your career – these are the risks companies used to protect their senior people of the future from, and companies that are not doing this are clearly not thinking about the future at all. Which is likely to mean that they won’t have one.

To return to Cory Doctorow:

Science fiction’s superpower isn’t thinking up new technologies – it’s thinking up new social arrangements for technology. What the gadget does is nowhere near as important as who the gadget does it for and who it does it to.

You are going to have to be the generation who works these things out first for these new AI tools. And you will be reshaping the industrial landscape for future generations by doing so.

And the job of the university and further education sectors will increasingly be to equip you with both the skills and the capabilities to manage this process, whatever your course title.

Source: https://pluspng.com/img-png/mixed-economy-png–901.png

Just type “mixed economy graphic” into Google and you will get a lot of diagrams like this one – note that they normally have to pick out the United States for special mention. Notice the big gap between those countries – North Korea, Cuba, China and Russia – and us. It is a political statement masquerading as an economic one.

This same line is used to describe our political options. The Political Compass added an authoritarian/libertarian axis in their 2024 election manifesto analysis but the line from left to right (described as the economic scale) is still there:

Source: https://www.politicalcompass.org/uk2024

So here we are on our political and economic spectrum, where tiny movements between the very clustered Reform, Conservative, Labour and Liberal Democrat positions fill our newspapers and social media comment. The Greens and, presumably if it ever gets off the ground, Your Party are seen as so far away from the cluster that they often get left out of our political discourse. It is an incredibly narrow perspective and we wonder why we are stuck on so many major societal problems.

This is where we have ended up following the “slow singularity” of the Industrial Revolution I talked about in my last post. Our politics coalesced into one gymnasts’ beam, supported by the hastily constructed Late Modern English fashioned for this purpose in the 1800s, along which we have all been dancing ever since, between the market information processors at the “right” end and the bureaucratic information processors at the “left” end.

So what does it mean for this arrangement if we suddenly introduce another axis of information processing, i.e. the large language AI models? I am imagining something like this:

What will this mean for how countries see their economic organisation? What will it mean for our politics?

In 1884, the English theologian, Anglican priest and schoolmaster Edwin Abbott Abbott published a satirical science fiction novella called Flatland: A Romance of Many Dimensions. Abbott’s satire was about the rigidity of Victorian society, depicted as a two-dimensional world inhabited by geometric figures: women are line segments, while men are polygons with various numbers of sides. We are told the story from the viewpoint of a square, which denotes a gentleman or professional. In this world three-dimensional shapes are clearly incomprehensible, with every attempt to introduce new ideas from this extra dimension considered dangerous. Flatland is not prepared to receive “revelations from another world”, as it describes anything existing in the third dimension, which is invisible to them.

The book was not particularly well received and fell into obscurity until it was embraced by mathematicians and physicists in the early 20th century, as the concept of spacetime was being developed by Poincaré, Einstein and Minkowski amongst others. And what now looks like a prophetic analysis of the limitations of the gymnasts’ beam economic and political model of the slow singularity has still not caught on at all.

However, much as with Brewster’s Millions, the incidence of film adaptations of Flatland gives some indication of when it has come back as an idea to some extent. This tells us that it wasn’t until 1965 that someone thought it was a good idea to make a movie of Flatland, and no one else attempted it until an Italian stop-motion film in 1982. There were then two attempts in 2007, which I can’t help but think of as a comment on the developing financial crisis of the time. Finally, in 2012, came an adaptation of Bolland: een roman van gekromde ruimten en uitdijend heelal (Sphereland: A Fantasy About Curved Spaces and an Expanding Universe), Dionys Burger’s 1957 Dutch sequel to Flatland, which was not translated into English until 1965, the year the first animated film came out.

So here we are, with a new approach to processing information and language to sit alongside the established processors of the last 200 years or more. Will it perhaps finally be time to abandon Flatland? And if we do, will it solve any of our problems or just create new ones?

In 2017 I posted an article about how the future for actuaries was starting to look, with particular reference to a Society of Actuaries paper by Dodzi Attimu and Bryon Robidoux, which has since been moved to here.

I summarised their paper as follows at the time:

Focusing on…a paper produced by Dodzi Attimu and Bryon Robidoux for the Society of Actuaries in July 2016 explored the theme of robo actuaries, by which they meant software that can perform the role of an actuary. They went on to elaborate as follows:

Though many actuaries would agree certain tasks can and should be automated, we are talking about more than that here. We mean a software system that can more or less autonomously perform the following activities: develop products, set assumptions, build models based on product and general risk specifications, develop and recommend investment and hedging strategies, generate memos to senior management, etc.

They then went on to define a robo actuarial analyst as:

A system that has limited cognitive abilities but can undertake specialized activities, e.g. perform the heavy lifting in model building (once the specification/configuration is created), perform portfolio optimization, generate reports including narratives (e.g. memos) based on data analysis, etc. When it comes to introducing AI to the actuarial profession, we believe the robo actuarial analyst would constitute the first wave and the robo actuary the second wave.

They estimate that the first wave is 5 to 10 years away and the second 15 to 20 years away. We have been warned.

So 9 years on from their paper, how are things looking? Well the robo actuarial analyst wave certainly seems to be pretty much here, particularly now that large language models like ChatGPT are being increasingly used to generate reports. It suddenly looks a lot less fanciful to assume that the full robo actuary is less than 11 years away.

But now the debate on AI appears to be shifting to an argument over whether we are heading for Vernor Vinge’s “Singularity”, where the increasingly capable systems

would not be humankind’s “tool” — any more than humans are the tools of rabbits or robins or chimpanzees

on the one hand, and, on the other, the idea that “it is going to take a long time for us to really use AI properly…, because of how hard it is to regear processes and organizations around new tech”.

In his article on Understanding AI as a social technology, Henry Farrell suggests that neither of these positions allows a proper understanding of the impact AI is likely to have, instead proposing the really interesting idea that we are already part way through a “slow singularity” which began with the industrial revolution. As he puts it:

Under this understanding, great technological changes and great social changes are inseparable from each other. The reason why implementing normal technology is so slow is that it requires sometimes profound social and economic transformations, and involves enormous political struggle over which kinds of transformation ought happen, which ought not, and to whose benefit.

This chimes with what I was saying recently about AI possibly not being the best place to look for the next industrial revolution. Farrell plausibly describes the current period using the words of Herbert Simon. As Farrell says: “Human beings have quite limited internal ability to process information, and confront an unpredictable and complex world. Hence, they rely on a variety of external arrangements that do much of their information processing for them.” So Simon says of markets, for instance, which:

appear to conserve information and calculation by assigning decisions to actors who can make them on the basis of information that is available to them locally – that is, without knowing much about the rest of the economy apart from the prices and properties of the goods they are purchasing and the costs of the goods they are producing.

And bureaucracies and business organisations, similarly:

like markets, are vast distributed computers whose decision processes are substantially decentralized. … [although none] of the theories of optimality in resource allocation that are provable for ideal competitive markets can be proved for hierarchy, … this does not mean that real organizations operate inefficiently as compared to real markets. … Uncertainty often persuades social systems to use hierarchy rather than markets in making decisions.

Large language models by this analysis are then just another form of complex information processing, “likely to reshape the ways in which human beings construct shared knowledge and act upon it, with their own particular advantages and disadvantages. However, they act on different kinds of knowledge than markets and hierarchies”. As an Economist article Farrell co-wrote with Cosma Shalizi says:

We now have a technology that does for written and pictured culture what largescale markets do for the economy, what large-scale bureaucracy does for society, and perhaps even comparable with what print once did for language. What happens next?

Some suggestions follow and I strongly recommend you read the whole thing. However, if we return to what I and others were saying in 2016 and 2017, it may be that we were asking the wrong question. Perhaps the big changes of behaviour required of us to operate as economic beings have already happened (the start of the “slow singularity” of the industrial revolution), and the removal of alternatives, which required us to spend increasing proportions of our time within and interacting with bureaucracies and other large organisations, was the logical appendage to that process. These processes are merely becoming more advanced rather than changing fundamentally in form.

And the third part, ie language? What started with the emergence of Late Modern English in the 1800s looks like it is now being accelerated via a new way of complex information processing applied to written, pictured (and I would say also heard) culture.

So the future then becomes something not driven by technology, but by our decisions about which processes we want to allow or even encourage and which we don’t, whether those are market processes, organisational processes or large language processes. We don’t have to have robo actuaries or even robo actuarial analysts, but we do have to make some decisions.

And students entering this arena need to prepare themselves to be participants in those decisions rather than just victims of them. A subject I will be returning to.

Title page vignette of Hard Times by Charles Dickens. Thomas Gradgrind Apprehends His Children Louisa and Tom at the Circus, 1870

It was Fredric Jameson (according to Owen Hatherley in the New Statesman) who first said:

“It seems to be easier for us today to imagine the thoroughgoing deterioration of the earth and of nature than the breakdown of late capitalism”.

I was reminded of this by my reading this week.

It all started when I began watching Shifty, Adam Curtis’ latest set of films on iPlayer aiming to convey a sense of shifting power structures and where they might lead. Alongside the startling revelation that The Land of Make Believe by Bucks Fizz was written as an anti-Thatcher protest song, there was a short clip of Eric Hobsbawm talking about all of the words which needed to be invented in the late 18th century and early 19th to allow people to discuss the rise of capitalism and its implications. So I picked up a copy of his The Age of Revolution 1789-1848 to look into this a little further.

The first chapter of Hobsbawm’s introduction from 1962, the year of my birth, expanded on the list:

Words are witnesses which often speak louder than documents. Let us consider a few English words which were invented, or gained their modern meanings, substantially in the period of sixty years with which this volume deals. They are such words as ‘industry’, ‘industrialist’, ‘factory’, ‘middle class’, ‘working class’, ‘capitalism’ and ‘socialism’. They include ‘aristocracy’ as well as ‘railway’, ‘liberal’ and ‘conservative’ as political terms, ‘nationality’, ‘scientist’ and ‘engineer’, ‘proletariat’ and (economic) ‘crisis’. ‘Utilitarian’ and ‘statistics’, ‘sociology’ and several other names of modern sciences, ‘journalism’ and ‘ideology’, are all coinages or adaptations of this period. So is ‘strike’ and ‘pauperism’.

What is striking about these words is how they still frame most of our economic and political discussions. The term “middle class” originated in 1812. No one referred to an “industrial revolution” until English and French socialists did in the 1820s, despite what it described having been in progress since at least the 1780s.

Today the founder of the World Economic Forum has coined the phrase “Fourth Industrial Revolution” or 4IR or Industry 4.0 for those who prefer something snappier. Its blurb is positively messianic:

The Fourth Industrial Revolution represents a fundamental change in the way we live, work and relate to one another. It is a new chapter in human development, enabled by extraordinary technology advances commensurate with those of the first, second and third industrial revolutions. These advances are merging the physical, digital and biological worlds in ways that create both huge promise and potential peril. The speed, breadth and depth of this revolution is forcing us to rethink how countries develop, how organisations create value and even what it means to be human. The Fourth Industrial Revolution is about more than just technology-driven change; it is an opportunity to help everyone, including leaders, policy-makers and people from all income groups and nations, to harness converging technologies in order to create an inclusive, human-centred future. The real opportunity is to look beyond technology, and find ways to give the greatest number of people the ability to positively impact their families, organisations and communities.

Note that, despite the slight concession in the last couple of sentences that an industrial revolution is about more than technology-driven change, they are clear that the technology is the main thing. It is also confused: is the future they see one in which “technology advances merge the physical, digital and biological worlds” to such an extent that we have “to rethink” what it “means to be human”? Or are we creating an “inclusive, human-centred future”?

Hobsbawm describes why utilitarianism (“the greatest happiness of the greatest number”) never really took off amongst the newly created middle class, who rejected Hobbes in favour of Locke because “he at least put private property beyond the range of interference and attack as the most basic of ‘natural rights’”, whereas Hobbes would have seen it as just another form of utility. This then led to the natural order of property ownership being woven into the reassuring (for property owners) political economy of Adam Smith, and the natural social order arising from “sovereign individuals of a certain psychological constitution pursuing their self-interest in competition with one another”. This was of course the underpinning theory of capitalism.

Hobsbawm then describes the society of Britain in the 1840s in the following terms:

A pietistic protestantism, rigid, self-righteous, unintellectual, obsessed with puritan morality to the point where hypocrisy was its automatic companion, dominated this desolate epoch.

In 1851 access to the professions in Britain was extremely limited: it required long years of education, the means to support oneself through them, and opportunities to do so which were rare. There were 16,000 lawyers (not counting judges) but only 1,700 law students. There were 17,000 physicians and surgeons and 3,500 medical students and assistants. The UK population in 1851 was around 27 million. Compare these numbers to the relatively tiny actuarial profession in the UK today, with around 19,000 members overall.

The only real opening to the professions for many was therefore teaching. In Britain “76,000 men and women in 1851 described themselves as schoolmasters/mistresses or general teachers, not to mention the 20,000 or so governesses, the well-known last resource of penniless educated girls unable or unwilling to earn their living in less respectable ways”.

Admittedly most professions were only just establishing themselves in the 1840s. My own, despite actuarial activity getting off the ground in earnest with Edmund Halley’s demonstration of how the terms of the English Government’s life annuities issue of 1692 were more generous than it realised, did not form the Institute of Actuaries (now part of the Institute and Faculty of Actuaries) until 1848. The Pharmaceutical Society of Great Britain (now the Royal Pharmaceutical Society) was formed in 1841. The Royal College of Veterinary Surgeons was established by royal charter in 1844. The Royal Institute of British Architects (RIBA) was founded in 1834. The Society of Telegraph Engineers, later the Institution of Electrical Engineers (now part of the Institution of Engineering and Technology), was formed in 1871. The Edinburgh Society of Accountants and the Glasgow Institute of Accountants and Actuaries were granted royal charters in the mid-1850s, before England’s various accounting institutes merged into the Institute of Chartered Accountants in England and Wales in 1880.

However “for every man who moved up into the business classes, a greater number necessarily moved down. In the second place economic independence required technical qualifications, attitudes of mind, or financial resources (however modest) which were simply not in the possession of most men and women.” As Hobsbawm goes on to say, it was a system which:

…trod the unvirtuous, the weak, the sinful (i.e. those who neither made money nor controlled their emotional or financial expenditures) into the mud where they so plainly belonged, deserving at best only of their betters’ charity. There was some capitalist economic sense in this. Small entrepreneurs had to plough back much of their profits into the business if they were to become big entrepreneurs. The masses of new proletarians had to be broken into the industrial rhythm of labour by the most draconic labour discipline, or left to rot if they would not accept it. And yet even today the heart contracts at the sight of the landscape constructed by that generation.

This was the landscape upon which the professions alongside much else of our modern world were constructed. The industrial revolution is often presented in a way that suggests that technical innovations were its main driver, but Hobsbawm shows us that this was not so. As he says:

Fortunately few intellectual refinements were necessary to make the Industrial Revolution. Its technical inventions were exceedingly modest, and in no way beyond the scope of intelligent artisans experimenting in their workshops, or of the constructive capacities of carpenters, millwrights and locksmiths: the flying shuttle, the spinning jenny, the mule. Even its scientifically most sophisticated machine, James Watt’s rotary steam-engine (1784), required no more physics than had been available for the best part of a century—the proper theory of steam engines was only developed ex post facto by the Frenchman Carnot in the 1820s—and could build on several generations of practical employment for steam engines, mostly in mines.

What it did require though was the obliteration of alternatives for the vast majority of people to “the industrial rhythm of labour” and a radical reinvention of the language.

These are not easy things to accomplish which is why we cannot easily imagine the breakdown of late capitalism. However if we focus on AI etc as the drivers of the next industrial revolution, we will probably be missing where the action really is.

I have just been reading Adrian Tchaikovsky’s Service Model. I am sure I will think about it often for years to come.

Imagine a world where “Everything was piles. Piles of bricks and shattered lumps of concrete and twisted rods of rebar. Enough fine-ground fragments of glass to make a whole razory beach. Shards of fragmented plastic like tiny blunted knives. A pall of ashen dust. And, to this very throne of entropy, someone had brought more junk.”

This is Earth outside a few remaining enclaves. And all served by robots, millions of robots.

Robots: like our protagonist (although he would firmly resist such a designation) Uncharles, who has been programmed to be a valet, or gentleman’s gentlerobot; or librarians tasked with preserving as much data from destruction or unauthorised editing as possible; or robots preventing truancy from the Conservation Farm Project where some of the few remaining humans are conscripted to reenact human life before robots; or the fix-it robots; or the warrior robots prosecuting endless wars.

Uncharles, after slitting the throat of his human master for no reason that he can discern, travels this landscape with his hard-to-define and impossible-to-shut-up companion The Wonk, who is very good at getting into places but often not so good at extracting herself. Until they finally arrive in God’s waiting room and take a number.

Along the way The Wonk attempts to get Uncharles to accept that he has been infected with a Protagonist Virus, which has given Uncharles free will. And Uncharles finds his prognosis routines increasingly unhelpful to him as he struggles to square the world he is perambulating with the internal model of it he carries inside him.

The questions that bounce back between our two unauthorised heroes are many and various, but revolve around:

  1. Is there meaning beyond completing your task list or fulfilling the function for which you were programmed?
  2. What is the purpose of a gentleman’s gentlerobot when there are no gentlemen left?
  3. Is the appearance of emotion in some of Uncharles’ actions and communications really just an increasingly desperate attempt to reduce inefficient levels of processing time? Or is the Protagonist Virus an actual thing?

Ultimately the question is: what is it all for? And when they finally arrive in front of God, the question is thrown back at us, the pile of dead humans rotting across the landscape of all our trash.

This got me thinking about a few things in a different way. One of these was AI.

Suppose AI is half as useful as OpenAI and others are telling us it will be. Suppose that we can do all of these tasks in less than half the time. How is all of that extra time going to be distributed? In 1930 Keynes speculated that his grandchildren would only need to work a 15-hour week. And all of the productivity improvements he assumed in doing so have happened. Yet still full-time work remains the aspiration.
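Keynes’ arithmetic is easy to sketch. The numbers below (a 48-hour baseline week, 2% annual productivity growth, living standards four times higher) are my own illustrative assumptions, not figures from his essay, but they show how compound growth converts into potential free time:

```python
# Rough sketch of Keynes' 1930 arithmetic with assumed numbers.
hours_1930 = 48            # assumed typical full-time week in 1930
annual_growth = 0.02       # assumed annual productivity growth
years = 100
consumption_multiple = 4   # Keynes assumed standards of life 4-8x higher

productivity_multiple = (1 + annual_growth) ** years  # roughly 7.2x
hours_needed = hours_1930 * consumption_multiple / productivity_multiple

print(f"productivity multiple: {productivity_multiple:.1f}")
print(f"hours needed per week: {hours_needed:.1f}")
```

On these assumptions the working week falls to around 26 hours; a slightly higher growth rate or lower consumption multiple gets you to Keynes’ 15. The point stands either way: the productivity gains happened, the shorter week did not.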

There certainly seems to have been a change of attitude from around 1980 onwards, with those who could choose choosing to work longer, for various reasons which economists are still arguing about, and therefore the hours lost were from those who couldn’t choose, as The Resolution Foundation have pointed out. Unfortunately, neither their pay nor their quality of work has increased sufficiently for those hours to meet their needs.

So, rather than asking where the hours have gone, it probably makes more sense to ask where the money has gone. And I think we all know the answer to that one.

When Uncharles and The Wonk finally get in to see God, God gives an example of a seat designed to stop vagrants sleeping on it as the indication it needed of the kind of society humans wanted. One where the rich wanted not to have to see or think about the poor. Replacing all human contact with eternally indefatigable and keen-to-serve robots was the world that resulted.

Look at us clever humans, constantly dreaming of ways to increase our efficiency, remove inefficient human interaction, or indeed any interaction which cannot be predicted in advance. Uncharles’ seemingly emotional responses, when he rises above the sea of task-queue-clutching robots all around him, are to what he sees as inefficiency. But what should be the goal? Increasing GDP can’t be it: that is just another means. We are currently working extremely hard, and devoting a huge proportion of news and political affairs airtime and focus, to turning the English Channel into the seaborne equivalent of the seat on which vagrants and/or migrants cannot rest.

So what should be the goal? Because the reason Service Model will stay with me for some time to come is that it shows us what happens if we don’t have one. The means take over. It seems appropriate to leave the last word to a robot.

“Justice is a human-made thing that means what humans wish it to mean and does not exist at all if humans do not make it,” Uncharles says at one point. “I suggest that ‘kind and ordered’ is a better goal.”

Last time I suggested that the changes to graduate recruitment patterns, due at least in part to technological change, appeared to be to the disadvantage of current graduates, both in terms of number of vacancies and in what they were being asked to do.

This immediately reminds me of the old Woody Allen joke from the opening monologue to Annie Hall:

Two elderly women are at a Catskills mountain resort, and one of ’em says: “Boy, the food at this place is really terrible.” The other one says, “Yeah, I know, and such … small portions.”

This would clearly be an uncomfortable position for Corporate Britain if it were accepted. So pushback is to be expected. The drop in graduate vacancies is hard to challenge, so the next target is obviously the candidates themselves.

So hot on the heels of “Kids today need more discipline”, “Nobody wants to work”, “Students today aren’t prepared for college”, “Kids today are lazy”, “We are raising a generation of wimps” and “Kids today have too much freedom” (I refer you to Paul Fairie’s excellent collections of newspaper reports through history detailing these findings at regular intervals), we now have the FT, newspaper of choice for Corporate Britain, weighing in on “The Troubling Decline in Conscientiousness”, this time backed up by a whole series of graphs:

John Burn-Murdoch does a lot of great data work on a huge array of subjects which I have referred to often, but I find the quoted studies problematic for a number of reasons. First of all, there is the suspicion that young people have already been found guilty before looking for evidence to back this up. For instance, which came first here: the “factors at work” or the “shifts”?

While a full explanation of these shifts requires thorough investigation, and there will be many factors at work, smartphones and streaming services seem likely culprits.

At one point John feels compelled to say:

While the terminology of personality can feel vague, the science is solid.

At which point he links to this study, defending the five-factor model of personality as a “biologically based human universal” which terrifies me a little. Now of course there are always studies pointing in lots of different directions for any piece of social science research and this is no exception. In this critique of the five-factor model (FFM), for instance, we find that:

While the two largest factors (Anxiety/Neuroticism and Extraversion) appear to have been universally accepted (e.g., in the pioneering factor-analytic work of R. B. Cattell, H. J. Eysenck, J. P. Guilford, and A. L. Comrey), the present critique suggests, nevertheless, that the FFM provides a less than optimal account of human personality structure.

I first saw the FT article via a post on LinkedIn, where there was one mild pushback sitting alone amongst crowds of pile-ons from people of my generation. After all it feels right, doesn’t it? But Chris Wagstaff, Senior Visiting Fellow at Bayes Business School, was spot on, I feel, when he pointed out four potential behavioural biases at play here within the organisations where these young people are working:

  1. The decline in conscientiousness and some of the other traits identified could be a consequence of more senior colleagues not inviting or taking on board constructive challenge from younger colleagues, the calamity of conformity, i.e. groupthink, so demotivating the latter.
  2. Related to this is the tendency for many organisations to get their employees to live and breathe an often meaningless set of values and adhere to a blinkered way of doing things. Again, hugely frustrating and demotivating.
  3. Or perhaps we’re seeing way too many meetings being populated by way too many participants, meaning social loafing (ie when individual performance isn’t visible they simply hide behind others) is on the increase.
  4. Finally, remuneration structures might discourage entrepreneurial thinking and an element of risk taking (younger folk are less risk averse than older folk). Again, very demotivating.

These sound much more convincing “factors at play” to me than smartphones or streaming services, neither of which, of course, is the preserve of the young. But demonising the young is an essential prelude to feeling better about denying them work or forcing them into some kind of reverse-centaur position.

Corporate Britain needs to do better than pseudo-scientific victim blaming. There are real issues here around the next generation’s relationship with work and much else which need to be met head on. Your future pension income may depend upon it.

In a previous post, I mentioned the “diamond model” that accountancy firms are reportedly starting to talk about. The impact so far looks pretty devastating for graduates seeking work:

And then by industry:

Meanwhile, Microsoft have recently produced a report into the occupational implications of generative AI, and their list of the top 40 vulnerable roles looks like this (look at where data scientist, mathematician and management analyst sit – all noticeably more replaceable by AI than ‘model’, which caused all the headlines when Vogue did it last week):

So this looks like a process well underway rather than a theoretical one for the future. But I want to imagine a few years ahead. Imagine that this process has continued to gut what we now regard as entry level jobs and that the warning of Dario Amodei, CEO of AI company Anthropic, that half of “administrative, managerial and tech jobs for people under 30” could be gone in 5 years, has come to pass. What then?

Well this is where it gets interesting (for some excellent speculative fiction about this, the short story Human Resources and the novel Service Model by Adrian Tchaikovsky will certainly give you something to think about), because there will still be a much smaller number of jobs in these roles. They will be very competitive. Perhaps we will see FBI-style recruitment processes becoming more common for the rarefied few, probably administered by the increasingly capable systems I discuss below. They will be paid a lot more.

However, as Cory Doctorow describes here, the misery of being the human in the loop for an AI system designed to produce output where errors are hard to spot and therefore to stop (Doctorow calls them “reverse centaurs”, ie humans have become the horse part) includes being the ready-made scapegoat (or “moral crumple zone” or “accountability sink”) for when they are inevitably used to overreach what they are programmed for and produce something terrible. The AI system is no longer working for you as some “second brain”. You are working for it, but no company is going to blame the very expensive AI system that it has invested in when there is a convenient and easily replaceable (remember how hard these jobs will be to get) human candidate to take the fall.

And it will be assumed that people will still do these jobs, reasoning that they are the only route to highly paid and more secure jobs later, or that they will be able to retire at 40, as the aspiring Masters of the Universe (the phrase coined by Tom Wolfe in The Bonfire of the Vanities) in the City of London have been telling themselves since the 1980s, only this time surrounded by robot valets no doubt.

But a model where all the gains go to people from one, older, generation at the expense of another, younger, generation depends on there being reasonable future prospects for that younger generation or some other means of coercing them.

In their book, The Future of the Professions, Daniel and Richard Susskind talk about the grand bargain. It is a form of contract, but, as they admit:

The grand bargain has never formally been reduced to writing and signed, its terms have never been unambiguously and exhaustively articulated, and no one has actually consented expressly to the full set of rights and obligations that it seems to lay down.

Atul Gawande memorably expressed the grand bargain for the medical profession (in Better) as follows:

The public has granted us extraordinary and exclusive dispensation to administer drugs to people, even to the point of unconsciousness, to cut them open, to do what would otherwise be considered assault, because we do so on their behalf – to save their lives and provide them comfort.

The Susskinds questioned (in 2015) whether this grand bargain could survive a future of “increasingly capable systems” and suggested a future when all 7 of the following models were in use:

  1. The traditional model, ie the grand bargain as it works now. Human professionals providing their services face-to-face on a time-cost basis.
  2. The networked experts model. Specialists work together via online networks. BetterDoctor would be an example of this.
  3. The para-professional model. The para-professional has had less training than the traditional professional but is equipped by their training and support systems to deliver work independently within agreed limits. The medical profession’s battle with this model has recently given rise to the Leng Review.
  4. The knowledge engineering model. A system is made available to users, including a database of specialist knowledge and the modelling of specialist expertise based on experience in a form that makes it accessible to users. Think tax return preparation software or medical self-diagnosis online tools.
  5. The communities of experience model, eg Wikipedia.
  6. The embedded knowledge model. Practical expertise built into systems or physical objects, eg intelligent buildings which have sensors and systems that test and regulate the internal environment of a building.
  7. The machine-generated model. Here practical expertise is originated by machines rather than by people. This book was written in 2015 so the authors did not know about large language models then, but these would be an obvious example.

What all of these alternative models had in common of course was the potential to no longer need the future traditional model professional.

There is another contract which has never been written down: that between the young and the old in society. Companies are jumping the gun on how the grand bargain is likely to be re-framed and adopting systems before all of the evidence is in. As Doctorow said in March (ostensibly about Musk’s DOGE when it was in full firing mode):

AI can’t do your job, but an AI salesman (Elon Musk) can convince your boss (the USA) to fire you and replace you (a federal worker) with a chatbot that can’t do your job

What strikes me is that the boss in question is generally at least 55. As one consultancy has noted:

Notably, the youngest Baby Boomers turned 60 in 2024—the average age of senior leadership in the UK, particularly for non-executive directors. Executive board directors tend to be slightly younger, averaging around 55.

Assume there was some kind of written contract between young and old that gave the older generation the responsibility to be custodian of all of the benefits of living in a civilised society while they were in positions of power so that life was at least as good for the younger generation when they succeeded them.

Every time a Baby Boomer argues for state pension age increases because “we” cannot afford it, he or she is arguing both that the worker who will then be paying for his or her pension should continue to do so and that that worker should accept a delay in their quid pro quo, with no risk that the changes will be applied to the Boomer, as all changes are flagged many years in advance. That contract would clearly be in breach. Every Boomer graduate from more than 35 years ago who argues for the cost of student loans to increase when they never paid for theirs would break such a contract. Every Boomer homeowner who argues against any measure which might moderate the house price inflation from which they benefit in increased equity would break such a contract. And of course any such contract worth its name would require strenuous efforts to limit climate change.

And when a Boomer removes a graduate job to temporarily support their share price (so-called rightsizing) in favour of a necessarily not-yet-fully-tested system (by which I mean more than testing the software, but also all of the complicated network of relationships required to make any business operate successfully), the impact of that temporary inflation of the share price on executive bonuses is being valued much more highly than both the future of the business and the generation that will be needed to run it.

This is not embracing the future so much as selling a futures contract before setting fire to the actual future. And that is not a contract so much as an abusive relationship between the generations.

On Wednesday last week the report from the Leng Review into the safety and effectiveness of physician associates (PAs) and anaesthesia associates (AAs) was published. It concluded that:

Research on the safety and effectiveness of PAs and AAs was limited, generally of low quality and either inconclusive or demonstrated a mixed picture.

This apparently did not prevent Professor Leng from feeling able to go right ahead and make 18 recommendations. Neither did it prevent NHS England announcing the same day that it would be expecting all PAs and AAs in the NHS to immediately:

  1. Take on the new names for their roles of physician assistant and physician assistant in anaesthesia respectively;
  2. No longer triage patients or see “undifferentiated” patients.

The rationale for the first of these was the fear that PAs and AAs were being confused with doctors. That this has been addressed by immediately making PAs and AAs much more confusable with each other is just one of the many hilarious things about this report. They also appear to have forgotten to let the General Medical Council (GMC) know, as their website still looks like this:

Then there is the meticulously recorded bile directed at PAs and AAs and their capabilities throughout what is described all over the website as an “independent” report. There were several charts of the opinions of PAs and AAs about their ability to carry out their duties compared to those of doctors. Here is one of them:

The reason I feel able to describe this as mostly bile is the template job descriptions at Appendix 5 of the Leng report. The one for PAs in secondary care includes the following principal duties and responsibilities:

  • carry out assessments of patient health by interviewing patients and performing
    physical examination including obtaining and updating medical histories (looks like B and E);
  • order and perform agreed diagnostic tests including laboratory studies and
    interpret test results (looks like J);
  • perform basic therapeutic procedures by administering all injections and
    immunisations, suturing and managing wounds and infections (looks like M);
  • help to develop other members of the multidisciplinary team by providing
    information and educational opportunities as appropriate (looks like L).

So even the Leng Review appears to have concluded that many of the doctors’ opinions polled here are ridiculous.

Of course I am lumping all doctors together here because the Leng Review does for the most part. There is one sentence where it is admitted that senior doctors, including GPs, tended to be more positive than resident doctors, but this is not really quantified.

The Leng Review will not be the last of its kind. It has taken up the concerns of a threatened profession and worked with them to connive in the othering of another sub-profession (set up, as admitted in the Leng Review report itself, by the Department of Health under, in the case of PAs, a competency framework in conjunction with the Royal Colleges of Physicians and General Practitioners) rather than tackle the actual threats the profession faces. As Roy Lilley wrote:

The BMA can stand in the way, or stand at the front, shaping how technology and new roles like PAs can improve care, close gaps, and make healthcare safer and smarter.

History teaches us that you can’t halt progress by breaking the machinery or driving new careers into a cul-de-sac.

So why are the doctors, particularly resident doctors (formerly known as junior doctors), so offended by the use of PAs and AAs in the NHS? Is it really about safety and effectiveness? Or is it that the British Medical Association (BMA) has finally lost the trust of its more junior members after years of inadequate representation, and is now throwing its weight around, first with the campaign against PAs and AAs and now with the resident doctor strike, in a desperate attempt to convince them that the reason they are paid less than PAs and can’t get a job after graduation is not the fault of the BMA, but that of the Government, PAs and AAs?

As the Leng Review admits:

Since the early 2000s, and in response to increasing workforce pressures, there has been a growing recognition of the PA role across the globe as a flexible way to address doctor shortages and improve access to healthcare. Today, PAs or their equivalents are employed in over 50 countries, although the role is often adapted locally to meet specific healthcare system needs.

Is it perhaps this very flexibility which is the threat here, when NHS England are already reviewing postgraduate medical training due in large part to resident doctors’ “concerns and frustrations with their training experience”?

The doctors are not the only threatened profession. According to The Observer this week:

The big four accounting firms – Deloitte, EY, PricewaterhouseCoopers and KPMG – posted 44% fewer jobs for graduates this year compared with 2023.

These are the big beasts for finance and actuarial graduates and tend to set the market for everyone else, so these are big changes. The quote in the article from Ian Pay of the ICAEW is even more alarming:

Historically, accountancy firms have typically had a pyramid structure – wide base, heavy graduate recruitment. Firms are now starting to talk about a ‘diamond model’ with a wide middle tier of management because, ultimately, AI is not sophisticated enough yet to make those judgment calls.

A diamond model? That surely only makes sense for those at partner level currently interested in the purchase of diamonds? Sure enough, the article continues:

Cuts to graduate cohorts since 2023 have ranged from 6% at PwC to 29% at KPMG. According to James O’Dowd, founder of talent adviser Patrick Morgan, these are accompanied by senior employees being paid more, and more job offshoring. Up to a third of some firms’ administrative tasks are carried out in countries with lower labour costs such as India and the Philippines.

So what happens when AI is sophisticated enough to make those judgement calls, calls which are often sophisticated forms of pattern spotting and which, quite frankly, AI systems are already much better than humans at in many cases? Will the diamond model collapse still further into a “T-model” perhaps, with the very senior survivors being paid even more? Don’t expect labour costs in India and the Philippines to remain lower for very long as demand increases from their own economies as well as ours.

And the most important question? What then? Who will the senior employees who seem to be doing so well out of this at the moment be in 20-30 years’ time? Where will they have come from? What experience will they have and how will they have gained it when all the opportunities to do so have been given to the system in the corner which never gets tired, only makes mistakes when it is poorly programmed or fed poor data, and never takes study leave at the financial year end?

So Medicine, Finance and now Law. Richard Susskind has been writing about the impact of AI on Law, and with his son Daniel, on other professions too for some time now. The review of his latest book, How To Think About AI, has the reviewer wondering “Where has Reassuring Richard gone?”. In his latest book, Susskind says:

“Pay heed, professionals – the competition that kills you won’t look like you.”

So probably a threatened profession there too then.

In the 1830s and 1840s, according to Christopher Clark’s excellent Revolutionary Spring, the new methods of production led to “the emergence of a non-specialised, mobile labour force whose ‘structural vulnerability’ made it more likely that they would experience the most wretched poverty at certain points in their lives.” The industrialised economies changed beyond recognition, and the guilds representing workers whose skills were being automated away retreated to become largely ceremonial.

Then the divisions were those of class. This time they appear to be those of generation. Early career professionals are seeing their pay, conditions and status under threat as their more senior colleagues protect their own positions at their expense.

It remains to be seen what will happen to our threatened professions, but it seems unlikely that they will survive in their current forms any more than the jobs of their members will.

Last week I read The Million Pound Bank Note by Mark Twain and Brewster’s Millions by George Barr McCutcheon, from 1893 and 1902 respectively. Both have been made into films several times: the Mark Twain short story was first made into a silent movie by the great Alexander Korda in 1916, although the best known adaptations were the one starring Gregory Peck in 1954 and Trading Places (starring Eddie Murphy) in 1983 (which included elements of both The Million Pound Bank Note and Mark Twain’s novel The Prince and the Pauper); Cecil B DeMille was the first to attempt a film adaptation of Brewster’s Millions (from the earlier play) in 1914, with the best known adaptation being Walter Hill’s 1985 movie starring Richard Pryor (movie poster shown above).

Both stories were written before the First World War and it is interesting to see when each has been revived with new adaptations. In particular, although an early attempt was made to film Twain’s story, no one attempted it again until after the Second World War, whereas there was a new adaptation of Brewster during the very interesting period between 1920 and 1922, when the first international financial conferences were being held in Brussels and Genoa to establish an international consensus for policies where “individuals had to work harder, consume less, expect less from the government as a social actor, and renounce any form of labour action that would impede the flow of production.” The aim was to return to pre-World War I economic orthodoxy and therefore remove what would be very painful economic measures for most people from the political sphere and into the sphere of “economic science”. In other words, it was a time when the political elite were trying to change the rules of the game.

This may be because Twain’s story, about a man who is given a million pound note, is feted by everyone he meets as a consequence and never has to spend it, thereby winning a bet between the two men who gave it to him, was seen as a rather slight tale. Interestingly, an American TV adaptation and the Gregory Peck film a few years later came out around the time when the Bank of England actually first issued such notes (called Giants) in 1948, which also relied on the power of people knowing they were there rather than ever having to use them.

The rules of the game certainly vary considerably across the Brewster adaptations: DeMille in 1914 was very respectful of the original, but by 1921 the $7 million had shrunk to $4 million. By 1926, in Miss Brewster’s Millions, Polly Brewster must spend $1 million in 30 days to inherit $5 million. This was the point where Twenty20 fortune dissipation appears to have supplanted the Test Match variety. In 1935 a British version had Brewster needing to spend £500,000 in 6 months to inherit £6 million. In 1945 Brewster must spend $1 million within 60 days to inherit $7 million. By 1954 the first Telugu adaptation has him spending ₹1 lakh in 30 days which, by 1985, has inflated to ₹25 lakh.

Later in 1985, the Richard Pryor film requires Brewster to spend $30 million within 30 days to inherit $300 million, with the tweak that he is given the option to take $1 million upfront, which for the sake of the movie he doesn’t. There have since been five further adaptations reflecting the globalisation of the ideas in the story (three from India, one from Brazil and one from China) before the sequel to the Richard Pryor film last year.

What is striking about both stories is how, although supposedly about financial transactions, albeit of a rather unusual kind, they are in fact all about how people behave around the display of money. In Twain’s tale, Henry Adams is transformed from being perceived as a beggar to being assumed to be an eccentric millionaire as a result of producing the note.

In the Brewster story, Monty Brewster has to spend the million dollars he has been left by his grandfather within a year, so that he has no assets left, in order to claim the seven million dollars left to him by an uncle on this condition. The original story explains the strange condition (something the Richard Pryor film doesn’t do as far as I can recall) as being due to his uncle hating his grandfather so much, because of his grandfather’s refusal to accept his uncle’s sister’s marriage. The uncle therefore wanted “to preclude any possible chance of the mingling of his fortune with the smallest portion of Edwin P Brewster’s”.

The problem for Monty is that he is not allowed to tell anyone of the condition, and therefore it is the difficulties that his consequent behaviour causes him with New York high society that are the subject of the story. There are dinners and cruises and carnivals and holiday homes, all bankrolled by Brewster for himself and whoever will journey with him, during which he falls in love and then out of love with one woman and then falls in love with the woman he had grown up alongside. Things normally regarded as good luck, like winning a bet or making a profitable investment, become bad luck for Monty.

By the end of the year, and very close to spending the whole million with nothing to show for it, he returns from a transatlantic cruise where he had been kidnapped by his friends at one stage to prevent him sailing to South Africa, to find himself spurned by the very society he had tried so hard to cultivate:

With the condemnation of his friends ringing in his troubled brain, with the sneers of acquaintances to distress his pride, with the jibes of the comic papers to torture him remorselessly, Brewster was fast becoming the most miserable man in New York. Friends of former days gave him the cut direct, clubmen ignored him or scorned him openly, women chilled him with the iciness of unspoken reproof, and all the world was hung with shadows. The doggedness of despair kept him up, but the strain that pulled down on him was so relentless that the struggle was losing its equality. He had not expected such a home-coming.

After a bit of a scare that the mysterious telegram correspondent Swearengen Jones, who held the $7 million and was assessing his performance, had disappeared, everything comes right for Monty in the end and he marries Peggy, who had agreed to do so even when she thought him penniless.

And we are left to assume that everything in the previous paragraph is reversed in the same way as in The Million Pound Bank Note on being able to display wealth once more.

There is a lot of plot in the Brewster story in particular, a lot of which does not amount to much but keeps Monty Brewster feverishly busy throughout.

These two in many ways ridiculous stories, written as they were just as economics was trying to establish itself as a science and ultimately the discipline that shapes our current societies, reveal, I think, quite a lot about the nature of money amongst people who have a lot of it. Neither Henry nor Monty (apart from an opening twenty-four hours for Henry and a scene revolving around a pear in the gutter after a night sleeping rough) experiences hunger or the absence of anywhere to sleep at any point. Their concern for money seems to be entirely about social position, the respect of those they regard as their peers and being able to marry the women they have set their hearts on. In other words, money is not about money for these protagonists, it is about status.

It seems to me that almost the entire edifice that we now call economics has possibly been constructed by people in this position. Is this why money creation is represented in so many economic models via constructions clearly at odds with the actual activities of banks (one of many pieces by Steve Keen demonstrating this problem here), and why ideas such as loanable funds and the money multiplier persist in economics education? Perhaps the original architects of these economic theories did not need money to live as much as they needed the respect of those they saw as their peers.
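The textbook money-multiplier story referred to above is simple enough to sketch. To be clear, this is the model being criticised, not an endorsement of it: banks are imagined as only able to lend out deposits they already hold, keeping back a reserve ratio r each round, so that an initial deposit D “multiplies” into total deposits of D / r:

```python
# The textbook money-multiplier model (the one criticised above): with
# reserve ratio r, each deposit is partly lent on and redeposited, and
# total deposits converge to the initial deposit divided by r.
def total_deposits(initial: float, reserve_ratio: float, rounds: int = 200) -> float:
    """Iterate the lend-and-redeposit loop of the textbook story."""
    total, redeposited = 0.0, initial
    for _ in range(rounds):
        total += redeposited
        redeposited *= (1 - reserve_ratio)  # the rest is held as reserves
    return total

# With a 10% reserve ratio, $100 'multiplies' into $1,000 of deposits...
print(round(total_deposits(100, 0.10), 2))  # -> 1000.0
# ...which is just the closed-form multiplier 1/r at work:
print(100 / 0.10)  # -> 1000.0
```

Keen’s point, as the paragraph above notes, is that actual banks create deposits when they lend, so this tidy 1/r arithmetic has the causality running the wrong way round.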

David Graeber often used to point out how much more time people at the bottom of society spent thinking about people at the top than the people at the top spent thinking about them. Is this at the heart of the problem?

Of course we do still have some social mobility. A relatively small number of people from poor backgrounds can still enter influential professions. Some of them have even become economists! Yet the very process of becoming a professional is designed to distance you from your origins: years of immersion in a very academic discipline, requiring total concentration and dedication to internalising enough of the professional “truths” learnt so as to be assessed as qualified to practise, normally while engaged in highly intensive work alongside more senior people for whom these truths have already been securely internalised.

And then, once there, you are in the Monty Brewster situation: so insecure about your position within this new society you have joined that you will do whatever it takes to maintain it. You are “upwardly mobile”. Your family is proud that you are “getting on” and doing better, certainly in terms of income and professional respect, than they did. There is no serious challenge to this path other than its difficulty, which in turn creates a massive sunk cost in your mind when you consider alternatives. And it is a path invariably described as upward.

Meanwhile the societies we have constructed around these economic edifices also have a lot of plot, a lot of which does not amount to very much but keeps us all feverishly busy most of the time.

Happy new year to everyone who reads this blog! I am planning for there to be quite a lot more activity here in 2025, moving from an average of one article a month to at least weekly. There should be more cartoons too – Pinhead and Spikes even made it to our Christmas cake this year.

There is a lot I want to write about this year. Expect some or all of the following themes in the next few months (in no particular order):

  • Some examples using Steve Keen’s Ravel software to demonstrate how Government debt is not the constraint it is commonly assumed to be.
  • Extending Naomi Alderman’s argument in The Future that we could get rid of the Tech Bros and not miss them, effectively upending Ayn Rand’s ideas in Atlas Shrugged. They are not key workers.
  • Keynes’ argument that, with the future so uncertain, we should not sacrifice people in the present to our models of it.
  • Spiegelhalter on the four types of luck, which cuts away at the meritocracy argument for distributing wealth.
  • How the professions have become a way of solidifying and enabling the massively uneven distribution we see. Have they outgrown their usefulness in their current form, just like the guilds did?
  • How the choice for providing public goods appears to boil down to public ownership or private monopoly – with accompanying Technofeudalism replacing capitalism. Why are we so much more relaxed about private monopolies than we were 100 years ago, when it accelerates inequalities so much?
  • The relationship between worldbuilding in science fiction and people living in their own models in the policy making world. Great example of this just this morning in the FT.

So plenty to do. If this sounds interesting to you, please stick with the blog, which will not be going to Substack and will not be charging a subscription. If it sounds really interesting to you, tell a friend! Will be in touch again soon.