Wallace & Gromit: Vengeance Most Fowl models on display in Bristol. This file is licensed under the Creative Commons Attribution-Share Alike 4.0 International license.

I have been watching Daniel Susskind’s lectures on AI and the future of work this week: Automation Anxiety was delivered in September and The Economics of Work and Technology earlier this week. The next in the series, entitled Economics and Artificial Intelligence, is scheduled for 13 January. They are all free and I highly recommend them for the great range of source material they present.

In my view the most telling graph, which featured in both lectures, was this one:

Original Source: Daniel Susskind A World Without Work

Susskind extended the usual measure of the ratio of average college and university graduate salaries to those of school leavers by including the equivalent ratio of craftsmen’s wages to labourers’ wages, which gives us data back to 1220. There are two big collapses in this ratio in the data: the one following the Black Death (1346-1353), which may have killed 50% of Europe’s 14th century population, and the one following the Industrial Revolution (a slow singularity which started around 1760 and then took us through the horrors of the First World War and the Great Depression before the graph finally picks up post Bretton Woods).

As Susskind shows, the profits from the Industrial Revolution were not going to workers:

Source: The Technology Trap, Carl Benedikt Frey

So how does the AI Rush compare? Well, Susskind shared another graph:

Source: David Autor, Work of the Past, Work of the Future

This, from 2019, introduced the idea that the picture is now more complex than just high-skilled and low-skilled workers: now there is a middle. And, as Autor has set out more recently, the middle is getting squeezed:

Key dynamics at play include:

  • Labor Share Decline: OECD data reveal a 3–5 percentage point drop in labor’s share of income in sectors most exposed to AI, a trend likely to accelerate as automation deepens.
  • Wage Polarization: The labor market is bifurcating. On one end, high-complexity “sense-making” roles; on the other, low-skill service jobs. The middle is squeezed, amplifying both political risk and regulatory scrutiny.
  • Productivity Paradox 2.0: Despite the promise of AI-driven efficiency, productivity gains remain elusive. The real challenge is not layering chatbots atop legacy processes, but re-architecting workflows from the ground up—a costly and complex endeavor.

For enterprise leaders, the implications are profound. AI is best understood not as a job destroyer, but as a “skill-lowering” platform. It enables internal labor arbitrage, shifting work toward judgment-intensive, context-rich tasks while automating the rest. The risk is not just technological—it is deeply human. Skill depreciation now sits alongside cyber and climate risk on the board agenda, demanding rigorous workforce-reskilling strategies and a keen eye on brand equity as a form of social license.

So, even if the overall number of jobs may not be reduced, the case being made is that the average skill level required to carry them out will be. As Susskind said, the Luddites may have been wrong about the spinning jenny replacing jobs, but it did replace and transform tasks, and its impact on workers was to reduce their pay, quality of work, status as craftsmen and economic power. This looks like the threat being made by employers once again, with real UK wages still only at the level they were at in 2008:

However this is where I part company with Susskind’s presentation, which has an implicit inevitability to it. The message is that these are economic forces we can’t fight against. When he discusses whether the substituting force (where AI replaces you) or the complementing force (where AI helps you to be more productive and increases the demand for your work) will be greater, it is almost as if we have no part to play in this. There is some cognitive dissonance when he quotes Blake, Engels, Marx and Ruskin about the horrors of living through such times, but on the whole it is presented as just a natural historical process that the whole of the profits from the massive increases in productivity of the Industrial Revolution should have ended up in the pockets of the fat guys in waistcoats:

Richard Arkwright, Sir Robert Peel, John Wilkinson and Josiah Wedgwood

I was recently at Cragside in Northumberland, where the arms inventor and dealer William Armstrong used the immense amount of money he made from selling big guns (as well as big cranes and the hydraulic mechanism which powers Tower Bridge) to deck out his house and grounds with the five artificial lakes required to power the world’s first hydro-electric lighting system. His 300 staff ran around, like good reverse-centaurs, trying to keep his various inventions, from passenger lifts to an automated spit roast, from breaking down, so that he could impress his long list of guests and potential clients at Cragside, from the Shah of Persia to the King of Siam and two future Prime Ministers of Japan. He made sure they were kept running around with a series of clock chimes throughout the day:

However, with some poetic irony, the “estate regulator” is what has since brought the entire mechanism crashing to a halt:

Which brings me to Wallace and Gromit. Wallace is the inventor, heedless of the impact of his inventions on those around him, especially his closest friend Gromit, whom he regularly dumps whenever he becomes inconvenient to his plans. Gromit just tries to keep everything working.

Wallace is a cheese-eating monster who cannot be assessed purely on the basis of his inventions. And neither can Armstrong, Arkwright, Peel, Wilkinson or Wedgwood. We are in the process of allowing a similar domination of our affairs by our new monsters:

Meta CEO Mark Zuckerberg beside Amazon CEO Jeff Bezos and his fiancée (now wife) Lauren, Google CEO Sundar Pichai and Elon Musk at President Trump’s 2nd Inauguration.

Around half an hour into his second lecture, Daniel Susskind started talking about pies. This is the GDP pie (Susskind has also written a recent book on Growth: A Reckoning, which argues that GDP growth can go on forever – my view would be closer to the critique here from Steve Keen) which, as Susskind says, increased by a factor of 113 in the UK between 1700 and 2000. But, as Steve Keen says:

The statistics strongly support Jevons’ perspective that energy—and specifically, energy from coal—caused rising living standards in the UK (see Figure 2). Coal, and not a hypothesised change in culture, propelled the rise in living standards that Susskind attributes to intangible ideas.

Source: https://www.themintmagazine.com/growth-some-inconvenient-truths/

Susskind talks about the productivity effect, the bigger pie effect and the changing pie effect (ie changes to the types of work we do – think of the changes in the CPI basket of goods and services) as ways in which jobs are created by technological change. However he has nothing to say about simply giving less of the pie to the monsters. Instead, for Susskind the AI Rush is all about clever people throwing 10 times as much money at AI as was directed at the Manhattan Project, and the heads of OpenAI, Anthropic and Google DeepMind stating that AI will replace humans in all economically useful tasks within 10 years, a claim which he says we should take seriously. Cory Doctorow, amongst others, disagrees. In his latest piece, When AI prophecy fails, he has this to say about why companies have reduced recruitment despite the underperformance of AI systems to date:

All this can feel improbable. Would bosses really fire workers on the promise of eventual AI replacements, leaving themselves with big bills for AI and falling revenues as the absence of those workers is felt?

The answer is a resounding yes. The AI industry has done such a good job of convincing bosses that AI can do their workers’ jobs that each boss for whom AI fails assumes that they’ve done something wrong. This is a familiar dynamic in con-jobs.

The Industrial Revolution had a distribution problem which gave birth to Chartism, Marxism, the Trades Union movement and the Labour Party in the UK alone. And all of that activity only very slowly chipped away at the wealth share of the top 10%:

Source: https://equalitytrust.org.uk/scale-economic-inequality-uk/

However the monsters of the Industrial Revolution did at least have solid proof that they could deliver what they promised. You don’t get more concrete a proof of concept than this, after all:

View on the Thames and the opening Tower Bridge, London, from the terraces at Wapping High Street, at sunset in July 2013, Bert Seghers. This file is made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication.

The AI Rush has a similar distribution problem, but it is also the first industrial revolution since the global finance industry decoupled from the global real economy. So the wealth share of the Top 10% isn’t going back up fast enough? No problem. Just redistribute the money at the top even further up:

What the monsters of the AI Rush lack is anything tangible to support their increasingly ambitious assertions. Wallace may be full of shit. And the rest of us can all just play a Gromit-like support role until we find out one way or the other or concentrate on what builds resilient communities instead.

Whether you think the claims for the potential of AI are exaggerated; or that the giant bet on it that the US stock market has made will end in an enormous depression; or that the energy demands of this developing technology will be its constraining force ultimately; or that we are all just making the world a colder place by prioritising systems, however capable, over people: take your pick as a reason to push back against the AI Rush. But my bet would be on the next 10 years not being dominated by breathless commentary on the exploits of Tech Bros.

In 2017, I was rather excitedly reporting about ideas which were new to me at the time regarding how technology or, as Richard and Daniel Susskind referred to it in The Future of the Professions, “increasingly capable machines” were going to affect professional work. I concluded that piece as follows:

The actuarial profession and the higher education sector therefore need each other. We need to develop actuaries of the future coming into your firms to have:

  • great team working skills
  • highly developed presentation skills, both in writing and in speech
  • strong IT skills
  • clarity about why they are there and the desire to use their skills to solve problems

All within a system which is possible to regulate in a meaningful way. Developing such people for the actuarial profession will need to be a priority in the next few years.

While all of those things are clearly still needed, it is becoming increasingly clear to me now that they will not be enough to secure a job as industry leaders double down.

Source: https://www.ft.com/content/99b6acb7-a079-4f57-a7bd-8317c1fbb728

And perhaps even worse than the threat of not getting a job immediately following graduation is the threat of becoming a reverse-centaur. As Cory Doctorow explains the term:

A centaur is a human being who is assisted by a machine that does some onerous task (like transcribing 40 hours of podcasts). A reverse-centaur is a machine that is assisted by a human being, who is expected to work at the machine’s pace.

We have known about reverse-centaurs since at least Charlie Chaplin’s Modern Times in 1936.

By Charlie Chaplin – YouTube, Public Domain, https://commons.wikimedia.org/w/index.php?curid=68516472

Think Amazon driver or worker in a fulfilment centre, sure, but now also think of highly competitive and well-paid but still ultimately human-in-the-loop roles responsible for AI systems designed to produce output where errors are hard to spot and therefore to stop. In the latter role you are the human scapegoat: in the phrasing of Dan Davies an “accountability sink”, or in that of Madeleine Clare Elish a “moral crumple zone”, all rolled into one. This is not where you want to be as an early career professional.

So how to avoid this outcome? Well, obviously, if you have alternatives to roles where a reverse-centaur situation is unavoidable, you should take them. Questions to ask at interview to identify whether the role is irretrievably reverse-centauresque would be of the following sort:

  1. How big a team would I be working in? (This might not identify a reverse-centaur role on its own: you might be one of a bank of reverse-centaurs all working in parallel and identified “as a team” while in reality having little interaction with each other).
  2. What would a typical day be in the role? This should smoke it out unless the smokescreen they put up obscures it. If you don’t understand the first answer, follow up to get specifics.
  3. Who would I report to? Get to meet them if possible. Establish whether they are a technical expert in the field you will be working in. If they aren’t, that means you are!
  4. Speak to someone who has previously held the role if possible. Although bear in mind that, if it is a true reverse-centaur role and their progress to an actual centaur role is contingent on you taking this one, they may not be completely forthcoming about all of the details.

If you have been successful in a highly competitive recruitment process, you may have a little bit of leverage before you sign the contract, so if there are aspects which you think still need clarifying, then that is the time to do so. If you recognise some reverse-centauresque elements from your questioning above, but you think the company may be amenable, then negotiate. Once you are in, you will understand a lot more about the nature of the role of course, but without threatening to leave (which is as damaging to you as an early career professional as it is to them) you may have limited negotiation options at that stage.

In order to do this successfully, self-knowledge will be key. It is that point from 2017:

  • clarity about why they are there and the desire to use their skills to solve problems

To that word skills I would now add “capabilities” in the sense used in a wonderful essay on this subject by Carlo Iacono called Teach Judgement, Not Prompts.

You still need the skills. So, for example, if you are going into roles where AI systems are producing code, you need to have sufficiently good coding skills yourself to write a program that checks the code written by the AI system. If the AI system is producing communications, your own communication skills need to go beyond producing work that communicates to an audience effectively, to the next level where you understand what it is about your own communication that achieves that: what is necessary, what is unnecessary, what gets in the way of effective communication, ie all of the things that the AI system is likely to get wrong. Then you have a template against which to assess the output from an AI system, and for designing better prompts.
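As an illustration of the coding point, here is a minimal sketch of what “a program that checks the code written by the AI system” might look like in practice: a small test harness whose expected values you have worked out independently of the AI. The function name and the cases below are hypothetical, not taken from any of the sources above.

```python
# A minimal sketch, assuming a hypothetical AI-generated function and hand-picked
# test cases, of the kind of independent check described above. Nothing here comes
# from the article or its sources; it just illustrates the idea.

def ai_generated_discount_factor(rate: float, years: int) -> float:
    """Stand-in for code produced by an AI system (in practice you would import it)."""
    return (1 + rate) ** -years


def check_discount_factor(fn) -> list[str]:
    """Compare the function against values worked out independently of the AI."""
    # Expected values calculated by hand or with a trusted tool, not by the AI itself.
    cases = [
        ((0.05, 1), 0.952381),   # 1 / 1.05
        ((0.05, 10), 0.613913),  # 1 / 1.05**10
        ((0.0, 5), 1.0),         # zero interest leaves the value unchanged
    ]
    failures = []
    for args, expected in cases:
        got = fn(*args)
        if abs(got - expected) > 1e-6:
            failures.append(f"{args}: expected {expected}, got {got}")
    return failures


if __name__ == "__main__":
    problems = check_discount_factor(ai_generated_discount_factor)
    print("All checks passed" if not problems else "\n".join(problems))
```

The value is not in the three cases themselves but in the habit: you can only write them if you understand the calculation well enough to do it without the AI.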

However specific skills and tools come and go, so you need to develop something more durable alongside them. Carlo has set out four “capabilities” as follows:

  1. Epistemic rigour, which is being very disciplined about challenging what we actually know in any given situation. You need to be able to spot when AI output is over-confident given the evidence, or when a correlation is presented as causation. What my tutors used to refer to as “hand waving”.
  2. Synthesis is about integrating different perspectives into an overall understanding. Making connections between seemingly unrelated areas is something AI systems are generally less good at than analysis.
  3. Judgement is knowing what to do in a new situation, beyond obvious precedent. You get to develop judgement by making decisions under uncertainty, receiving feedback, and refining your internal models.
  4. Cognitive sovereignty is all about maintaining your independence of thought when considering AI-generated content. Knowing when to accept AI outputs and when not to.

All of these capabilities can be developed with reflective practice, getting feedback and refining your approach. As Carlo says:

These capabilities don’t just help someone work with AI. They make someone worth augmenting in the first place.

In other words, if you can demonstrate these capabilities, companies who themselves are dealing with huge uncertainty about how much value they are getting from their AI systems and what they can safely be used for will find you an attractive and reassuring hire. Then you will be the centaur, using the increasingly capable systems to improve your own and their productivity while remaining in overall control of the process, rather than a reverse-centaur for which none of that is true.

One sure sign that you are straying into reverse-centaur territory is when a disproportionate amount of your time is spent on pattern recognition (eg basing an email/piece of coding/valuation report on an earlier email/piece of coding/valuation report dealing with a similar problem). That approach was always predicated on being able to interact with a more experienced human who understood what was involved in the task at some peer review stage. But it falls apart when there is no human to discuss the earlier piece of work with, because the human no longer works there, or a human didn’t produce the earlier piece of work. The fake it until you make it approach is not going to work in environments like these where you are more likely to fake it until you break it. And pattern recognition is something an AI system will always be able to do much better and faster than you.

Instead, question everything using the capabilities you have developed. If you are going to be put into potentially compromising situations in terms of the responsibilities you are implicitly taking on, the decisions needing to be made and the limitations of the available knowledge and assumptions on which those decisions will need to be based, then this needs to be made explicit, to yourself and the people you are working with. Clarity will help the company which is trying to use these new tools in a responsible way as much as it helps you. Learning is going to be happening for them as much as it is for you here in this new landscape.

And if the company doesn’t want to have these discussions or allow you to hamper the “efficiency” of their processes by trying to regulate them effectively? Then you should leave as soon as you possibly can professionally and certainly before you become their moral crumple zone. No job is worth the loss of your professional reputation at the start of your career – these are the risks companies used to protect their senior people of the future from, and companies that are not doing this are clearly not thinking about the future at all. Which is likely to mean that they won’t have one.

To return to Cory Doctorow:

Science fiction’s superpower isn’t thinking up new technologies – it’s thinking up new social arrangements for technology. What the gadget does is nowhere near as important as who the gadget does it for and who it does it to.

You are going to have to be the generation who works these things out first for these new AI tools. And you will be reshaping the industrial landscape for future generations by doing so.

And the job of the university and further education sectors will increasingly be to equip you with both the skills and the capabilities to manage this process, whatever your course title.

What comes next in the following sequence: 650, 400, 300, …? More on this in a minute.

I decided to make a little table with the help of the Oxford English Dictionary to summarise the usage of most of the words Eric Hobsbawm listed at the beginning of his Age of Revolution, 1789-1848. I have highlighted all of the meanings not in use until at least 1800 below:

  • Industry: Since 1500 it has had a meaning of productive work, trade, or manufacture. In later use esp.: manufacturing and production carried out on a commercial basis, typically organized on a large scale and requiring the investment of capital. Since 1801: Manufacturing or production, and those involved in it, regarded as an entity, esp. owners or managers of companies, factories, etc., regarded as influential figures, esp. with regard to investment in an economy.
  • Industrialist: Since 1839 to denote a person engaged in or connected with industry.
  • Factory: Since 1618 A location or premises in which a product is manufactured; esp. a building or range of buildings with plant for the manufacture or assembly of goods or for the processing of substances or materials.
  • Middle Class: Since 1654 A class of society or social grouping between an upper and a lower (or working) class, usually regarded as including professional and business people and their families; (in singular and plural) the members of such a class. However only since 1836 Of, relating to, or designating the middle class. And only since 1846 Characteristic of the middle class; having the characteristics of the middle classes. Esp. in middle-class morality. Frequently derogatory.
  • Working Class: Since 1757 A class of society or social grouping consisting of people who are employed for wages, esp. in unskilled or semi-skilled manual or industrial work, and their families, and which is typically considered the lowest class in terms of economic level and social status; (with the, in singular and plural) the members of such a class. However only since 1833 Of, belonging to, or characteristic of the working class.
  • Capitalist: Since 1774 A person who possesses capital assets, esp. one who invests these esp. for profit in financial and business enterprises. Also: an advocate of capitalism or of an economic system based on capitalism.
  • Capitalism: Since 1833 The practices or principles of capitalists; the dominance of capitalists in financial and business enterprises; esp. an economic system based on wage labour in which the means of production is controlled by private or corporate interests for the purpose of profit, with prices determined largely by competition in a free market.
  • Socialism: Since 1833 Frequently with capital initial. A theory or system of social organization based on state or collective ownership and regulation of the means of production, distribution, and exchange for the common benefit of all members of society; advocacy or practice of such a system, esp. as a political movement. Now also: any of various systems of liberal social democracy which retain a commitment to social justice and social reform, or feature some degree of state intervention in the running of the economy.
  • Marxism: Since 1883 The ideas, theories, and methods of Karl Marx; esp. the political and economic theories propounded by Marx together with Friedrich Engels, later developed by their followers to form the basis for the theory and practice of communism.
  • Aristocracy: Since 1561 it has had a meaning in the literal sense of the Greek: the government of a state by its best citizens. Since 1651 The class to which such a ruling body belongs, a patrician order; the collective body of those who form a privileged class with regard to the government of their country; the nobles. The term is popularly extended to include all those who by birth or fortune occupy a position distinctly above the rest of the community, and is also used figuratively of those who are superior in other respects.
  • Railway: Since 1681 A roadway laid with rails (originally of wood, later also of iron or steel) along which the wheels of wagons or trucks may run, in order to facilitate the transport of heavy loads, originally and chiefly from a colliery; a wagonway. Since 1822 (despite the first railway not being opened until 1825) A line or track typically consisting of a pair of iron or steel rails, along which carriages, wagons, or trucks conveying passengers or goods are moved by a locomotive engine or other powered unit. Also: a network or organization of such lines; a company which owns, manages, or operates such a line or network; this form of transportation.
  • Nationality: Since 1763 National origin or identity; (Law) the status of being a citizen or subject of a particular state; the legal relationship between a citizen and his or her state, usually involving obligations of support and protection; a particular national identity. Also: the legal relationship between a ship, aircraft, company, etc., and the state in which it is registered. Since 1832 A group of persons belonging to a particular nation; a nation; an ethnic or racial group.
  • Scientist: Since 1834 A person who conducts scientific research or investigation; an expert in or student of science, esp. one or more of the natural or physical sciences.
  • Engineer: Since 1500 Originally: a person who designs or builds engines or other machinery. Subsequently more generally: a person who uses specialized knowledge or skills to design, build, and maintain complicated equipment, systems, processes, etc.; an expert in or student of engineering. Frequently with distinguishing word. From the later 18th cent. onwards mainly with reference to mechanical, chemical, electrical, and similar processes; later (chiefly with distinguishing word) also with reference to biological or technological systems. Since 1606 A person whose profession is the designing and constructing of works of public utility, such as bridges, roads, canals, railways, harbours, drainage works, etc.
  • Proletariat: Since 1847 Wage earners collectively, esp. those who have no capital and who depend for subsistence on their daily labour; the working classes. Esp. with reference to Marxist theory, in which the proletariat are seen as engaged in permanent class struggle with the bourgeoisie, or with those who own the means of production.
  • Crisis: Since 1588 Originally: a state of affairs in which a decisive change for better or worse is imminent; a turning point. Now usually: a situation or period characterized by intense difficulty, insecurity, or danger, either in the public sphere or in one’s personal life; a sudden emergency situation. Also as a mass noun, esp. in in crisis.
  • Utilitarian: Since 1802 Of philosophy, principles, etc.: Consisting in or based upon utility; spec. that regards the greatest good or happiness of the greatest number as the chief consideration or rule of morality. Since 1830 Of or pertaining to utility; relating to mere material interests. Since 1847 In quasi-depreciative use: Having regard to mere utility rather than beauty, amenity, etc.
  • Statistics: Since 1839 The systematic collection and arrangement of numerical facts or data of any kind; (also) the branch of science or mathematics concerned with the analysis and interpretation of numerical data and appropriate ways of gathering such data.
  • Sociology: Since 1842 The study of the development, structure, and functioning of human society. Since 1865 The sociological aspects of a subject or discipline; a particular sociological system.
  • Journalism: Since 1833 The occupation or profession of a journalist; journalistic writing; newspapers and periodicals collectively.
  • Ideology: By 1796 (a) The study of ideas; that branch of philosophy or psychology which deals with the origin and nature of ideas. (b) spec. The system introduced by the French philosopher Étienne Condillac (1715–80), according to which all ideas are derived from sensations. By 1896 A systematic scheme of ideas, usually relating to politics, economics, or society and forming the basis of action or policy; a set of beliefs governing conduct. Also: the forming or holding of such a scheme of ideas.
  • Strike: Since 1810 A concerted cessation of work on the part of a body of workers, for the purpose of obtaining some concession from the employer or employers. Formerly sometimes more explicitly strike of work. Cf. strike v. IV.24, IV.24b. Phrase: on strike, also (U.S.) on a strike. Frequently with preceding qualifying word, as general strike, outlaw strike, selective strike, sit-down strike, stay-away strike, stay-down strike, stay-in strike, sympathetic strike, wildcat strike. Also figurative. Since 1889 A concerted abstention from a particular economic, physical, or social activity on the part of persons who are attempting to obtain a concession from an authority or to register a protest; esp. in hunger strike, rent strike.
  • Pauperism: Since 1792 The condition of being a pauper; extreme poverty; = pauperdom n. Since 1807 The existence of a pauper class; poverty, with dependence on public relief or charity, as an established fact or phenomenon in a society. Now chiefly historical.
Source: https://www.oed.com/dictionary/ (subscription needed for full access)

Now try and imagine having a conversation about politics, economics, your job, the news, or even what you watched last night on TV without using any of these words. Try and imagine any of our politicians getting through an interview of any length without resorting to industry, ideology, statistics, nationality or crisis. Let’s call us now Lemmy (Late Modern) and us then Emily (Early Modern):

Lemmy: We need to send back people who arrive here illegally if they are a different nationality.

Emily: What’s a nationality?

Lemmy: Failing to do so is based on woke ideology.

Emily: What’s an ideology? And what has my state of wakefulness got to do with it?

Lemmy: This is a crisis.

Emily: Is that a good crisis or a bad crisis?

Lemmy: All crises are bad.

You get the idea.

The 1700s are divided from us by a political and economic language which would have been almost unrecognisable to the people who lived then.

However the other thing that occurs to me is that 1800 is quite a while ago now. The approximate date boundaries of the various iterations of English are often presented as follows:

Source: https://www.myenglishlanguage.com/history-of-english/

Back to my sequence. We are up to 225 years now since the last major shift. So why do we still base our political and economic discussions on the language of the early 1800s?

Well perhaps only our politicians and the people who volunteer to be in the Question Time audience do. As Carlo Iacono puts it brilliantly here in response to James Marriott’s essay The dawn of the post-literate society:

The future Marriott fears, where we’re all reduced to emotional, reactive creatures of the feed, is certainly one possibility. But it’s not inevitable. The teenagers I see who code while listening to philosophy podcasts, who annotate videos with critical commentary, who create elaborate multimedia presentations synthesising dozens of sources: they’re not the degraded shadows of their literate ancestors. They’re developing new forms of intellectual engagement that we’re only beginning to understand.

In the spirit of the slow singularity, perhaps the transition is already happening, but will only be recorded on a timeline when it is more established. Take podcasts, for instance. Ofcom’s latest Media Nations report from 2024 says this:

After a dip in the past couple of years, it seems that 15-24-year-olds are getting back into podcasts, while 35-44s are turning away. Podcasts are still most popular among adults aged 25-34, with weekly reach increasing to 27.9% in the last year. The over-54s remain less likely than average to listen to podcasts, but in contrast to the fluctuation in younger age groups, reach has been steadily increasing among over-54s in the past five years.

It may be that what will be the important language of the next century is already developing out of sight of most politicians and political commentators.

And the people developing it are likely to have just as hard a time holding a conversation with our current rulers as Lemmy is with Emily.

Source: https://pluspng.com/img-png/mixed-economy-png–901.png

Just type “mixed economy graphic” into Google and you will get a lot of diagrams like this one – note that they normally have to pick out the United States for special mention. Notice the big gap between those countries – North Korea, Cuba, China and Russia – and us. It is a political statement masquerading as an economic one.

This same line is used to describe our political options. The Political Compass added an authoritarian/libertarian axis in their 2024 election manifesto analysis but the line from left to right (described as the economic scale) is still there:

Source: https://www.politicalcompass.org/uk2024

So here we are on our political and economic spectrum, where tiny movements between the very clustered Reform, Conservative, Labour and Liberal Democrat positions fill our newspapers and social media comment. The Greens and, presumably if it ever gets off the ground, Your Party are seen as so far away from the cluster that they often get left out of our political discourse. It is an incredibly narrow perspective and we wonder why we are stuck on so many major societal problems.

This is where we have ended up following the “slow singularity” of the Industrial Revolution I talked about in my last post. Our politics coalesced into one gymnasts’ beam, supported by the hastily constructed Late Modern English fashioned for this purpose in the 1800s, along which we have all been dancing ever since, between the market information processors at the “right” end and the bureaucratic information processors at the “left” end.

So what does it mean for this arrangement if we suddenly introduce another axis of information processing, ie the large language AI models? I am imagining something like this:

What will this mean for how countries see their economic organisation? What will it mean for our politics?

In 1884, the English theologian, Anglican priest and schoolmaster Edwin Abbott Abbott published a satirical science fiction novella called Flatland: A Romance of Many Dimensions. Abbott’s satire was about the rigidity of Victorian society, depicted as a two-dimensional world inhabited by geometric figures: women are line segments, while men are polygons with various numbers of sides. We are told the story from the viewpoint of a square, which denotes a gentleman or professional. In this world three-dimensional shapes are clearly incomprehensible, with every attempt to introduce new ideas from this extra dimension considered dangerous. Flatland is not prepared to receive “revelations from another world”, as it describes anything existing in the third dimension, which is invisible to them.

The book was not particularly well received and fell into obscurity until it was embraced by mathematicians and physicists in the early 20th century as the concept of spacetime was being developed by Poincaré, Einstein and Minkowski amongst others. And what now looks like a prophetic analysis of the limitations of the gymnasts’ beam economic and political model of the slow singularity has still never really caught on.

However, much as with Brewster’s Millions, the incidence of film adaptations of Flatland gives some indication of when it has come back into currency. It wasn’t until 1965 that someone thought it was a good idea to make a movie of Flatland, and no one else attempted it until an Italian stop-motion film in 1982. There were then two attempts in 2007, which I can’t help but think of as a comment on the developing financial crisis at the time, and, in 2012, a sequel based on Bolland: een roman van gekromde ruimten en uitdijend heelal (which translates as Sphereland: A Fantasy About Curved Spaces and an Expanding Universe), Dionys Burger’s 1957 Dutch sequel to Flatland (which didn’t get translated into English until 1965, when the first animated film came out).

So here we are, with a new approach to processing information and language to sit alongside the established processors of the last 200 years or more. Will it perhaps finally be time to abandon Flatland? And if we do, will it solve any of our problems or just create new ones?

In 2017 I posted an article about how the future for actuaries was starting to look, with particular reference to a Society of Actuaries paper by Dodzi Attimu and Bryon Robidoux, which has since been moved to here.

I summarised their paper as follows at the time:

Focusing on…a paper produced by Dodzi Attimu and Bryon Robidoux for the Society of Actuaries in July 2016 explored the theme of robo actuaries, by which they meant software that can perform the role of an actuary. They went on to elaborate as follows:

Though many actuaries would agree certain tasks can and should be automated, we are talking about more than that here. We mean a software system that can more or less autonomously perform the following activities: develop products, set assumptions, build models based on product and general risk specifications, develop and recommend investment and hedging strategies, generate memos to senior management, etc.

They then went on to define a robo actuarial analyst as:

A system that has limited cognitive abilities but can undertake specialized activities, e.g. perform the heavy lifting in model building (once the specification/configuration is created), perform portfolio optimization, generate reports including narratives (e.g. memos) based on data analysis, etc. When it comes to introducing AI to the actuarial profession, we believe the robo actuarial analyst would constitute the first wave and the robo actuary the second wave.

They estimate that the first wave is 5 to 10 years away and the second 15 to 20 years away. We have been warned.

So 9 years on from their paper, how are things looking? Well the robo actuarial analyst wave certainly seems to be pretty much here, particularly now that large language models like ChatGPT are being increasingly used to generate reports. It suddenly looks a lot less fanciful to assume that the full robo actuary is less than 11 years away.

But now the debate on AI appears to be shifting to an argument about whether we are heading for Vernor Vinge’s “Singularity”, where the increasingly capable systems

would not be humankind’s “tool” — any more than humans are the tools of rabbits or robins or chimpanzees

on the one hand, and, on the other, the idea that “it is going to take a long time for us to really use AI properly…, because of how hard it is to regear processes and organizations around new tech”.

In his article on Understanding AI as a social technology, Henry Farrell suggests that neither of these positions allow a proper understanding of the impact AI is likely to have, instead proposing the really interesting idea that we are already part way through a “slow singularity”, which began with the industrial revolution. As he puts it:

Under this understanding, great technological changes and great social changes are inseparable from each other. The reason why implementing normal technology is so slow is that it requires sometimes profound social and economic transformations, and involves enormous political struggle over which kinds of transformation ought happen, which ought not, and to whose benefit.

This chimes with what I was saying recently about AI possibly not being the best place to look for the next industrial revolution. Farrell plausibly describes the current period using the words of Herbert Simon. As Farrell says: “Human beings have quite limited internal ability to process information, and confront an unpredictable and complex world. Hence, they rely on a variety of external arrangements that do much of their information processing for them.” So Simon says of markets, for instance, which:

appear to conserve information and calculation by assigning decisions to actors who can make them on the basis of information that is available to them locally – that is, without knowing much about the rest of the economy apart from the prices and properties of the goods they are purchasing and the costs of the goods they are producing.

And bureaucracies and business organisations, similarly:

like markets, are vast distributed computers whose decision processes are substantially decentralized. … [although none] of the theories of optimality in resource allocation that are provable for ideal competitive markets can be proved for hierarchy, … this does not mean that real organizations operate inefficiently as compared to real markets. … Uncertainty often persuades social systems to use hierarchy rather than markets in making decisions.

Large language models by this analysis are then just another form of complex information processing, “likely to reshape the ways in which human beings construct shared knowledge and act upon it, with their own particular advantages and disadvantages. However, they act on different kinds of knowledge than markets and hierarchies”. As an Economist article Farrell co-wrote with Cosma Shalizi says:

We now have a technology that does for written and pictured culture what largescale markets do for the economy, what large-scale bureaucracy does for society, and perhaps even comparable with what print once did for language. What happens next?

Some suggestions follow and I strongly recommend you read the whole thing. However, if we return to what I and others were saying in 2016 and 2017, it may be that we were asking the wrong question. Perhaps the big changes of behaviour required of us to operate as economic beings have already happened (the start of the “slow singularity” of the industrial revolution), and the removal of alternatives, which required us to spend increasing proportions of our time within and interacting with bureaucracies and other large organisations, was the logical appendage to that process. These processes are merely becoming more advanced rather than changing fundamentally in form.

And the third part, ie language? What started with the emergence of Late Modern English in the 1800s looks like it is now being accelerated via a new way of complex information processing applied to written, pictured (and I would say also heard) culture.

So the future then becomes something not driven by technology, but by our decisions about which processes we want to allow or even encourage and which we don’t, whether those are market processes, organisational processes or large language processes. We don’t have to have robo actuaries or even robo actuarial analysts, but we do have to make some decisions.

And students entering this arena need to prepare themselves to be participants in those decisions rather than just victims of them. A subject I will be returning to.

Title page vignette of Hard Times by Charles Dickens. Thomas Gradgrind Apprehends His Children Louisa and Tom at the Circus, 1870

It was Fredric Jameson (according to Owen Hatherley in the New Statesman) who first said:

“It seems to be easier for us today to imagine the thoroughgoing deterioration of the earth and of nature than the breakdown of late capitalism”. I was reminded of this by my reading this week.

It all started when I began watching Shifty, Adam Curtis’ latest set of films on iPlayer aiming to convey a sense of shifting power structures and where they might lead. Alongside the startling revelation that The Land of Make Believe by Bucks Fizz was written as an anti-Thatcher protest song, there was a short clip of Eric Hobsbawm talking about all of the words which needed to be invented in the late 18th century and early 19th to allow people to discuss the rise of capitalism and its implications. So I picked up a copy of his The Age of Revolution 1789-1848 to look into this a little further.

The first chapter of Hobsbawm’s introduction from 1962, the year of my birth, expanded on the list:

Words are witnesses which often speak louder than documents. Let us consider a few English words which were invented, or gained their modern meanings, substantially in the period of sixty years with which this volume deals. They are such words as ‘industry’, ‘industrialist’, ‘factory’, ‘middle class’, ‘working class’, ‘capitalism’ and ‘socialism’. They include ‘aristocracy’ as well as ‘railway’, ‘liberal’ and ‘conservative’ as political terms, ‘nationality’, ‘scientist’ and ‘engineer’, ‘proletariat’ and (economic) ‘crisis’. ‘Utilitarian’ and ‘statistics’, ‘sociology’ and several other names of modern sciences, ‘journalism’ and ‘ideology’, are all coinages or adaptations of this period. So is ‘strike’ and ‘pauperism’.

What is striking about these words is how they still frame most of our economic and political discussions. The term “middle class” originated in 1812. No one referred to an “industrial revolution” until English and French socialists did in the 1820s, despite what it described having been in progress since at least the 1780s.

Today the founder of the World Economic Forum has coined the phrase “Fourth Industrial Revolution” or 4IR or Industry 4.0 for those who prefer something snappier. Its blurb is positively messianic:

The Fourth Industrial Revolution represents a fundamental change in the way we live, work and relate to one another. It is a new chapter in human development, enabled by extraordinary technology advances commensurate with those of the first, second and third industrial revolutions. These advances are merging the physical, digital and biological worlds in ways that create both huge promise and potential peril. The speed, breadth and depth of this revolution is forcing us to rethink how countries develop, how organisations create value and even what it means to be human. The Fourth Industrial Revolution is about more than just technology-driven change; it is an opportunity to help everyone, including leaders, policy-makers and people from all income groups and nations, to harness converging technologies in order to create an inclusive, human-centred future. The real opportunity is to look beyond technology, and find ways to give the greatest number of people the ability to positively impact their families, organisations and communities.

Note that, despite the slight concession in the last couple of sentences that an industrial revolution is about more than technology-driven change, they are clear that the technology is the main thing. It is also confused: is the future they see one in which “technology advances merge the physical, digital and biological worlds” to such an extent that we have “to rethink” what it “means to be human”? Or are we creating an “inclusive, human-centred future”?

Hobsbawm describes why utilitarianism (“the greatest happiness of the greatest number”) never really took off amongst the newly created middle class, who rejected Hobbes in favour of Locke because “he at least put private property beyond the range of interference and attack as the most basic of ‘natural rights'”, whereas Hobbes would have seen it as just another form of utility. This then led to this natural order of property ownership being woven into the reassuring (for property owners) political economy of Adam Smith and the natural social order arising from “sovereign individuals of a certain psychological constitution pursuing their self-interest in competition with one another”. This was of course the underpinning theory of capitalism.

Hobsbawm then describes the society of Britain in the 1840s in the following terms:

A pietistic protestantism, rigid, self-righteous, unintellectual, obsessed with puritan morality to the point where hypocrisy was its automatic companion, dominated this desolate epoch.

In 1851 access to the professions in Britain was extremely limited, requiring long years of education through which one had to support oneself, and the opportunities to do so were rare. There were 16,000 lawyers (not counting judges) but only 1,700 law students. There were 17,000 physicians and surgeons and 3,500 medical students and assistants. The UK population in 1851 was around 27 million. Compare these numbers to the relatively tiny actuarial profession in the UK today, with around 19,000 members overall.

The only real opening to the professions for many was therefore teaching. In Britain “76,000 men and women in 1851 described themselves as schoolmasters/mistresses or general teachers, not to mention the 20,000 or so governesses, the well-known last resource of penniless educated girls unable or unwilling to earn their living in less respectable ways”.

Admittedly most professions were only just establishing themselves in the 1840s. My own, despite actuarial activity getting off the ground in earnest with Edmund Halley’s demonstration of how the terms of the English Government’s life annuities issue of 1692 were more generous than it realised, did not form the Institute of Actuaries (now part of the Institute and Faculty of Actuaries) until 1848. The Pharmaceutical Society of Great Britain (now the Royal Pharmaceutical Society) was formed in 1841. The Royal College of Veterinary Surgeons was established by royal charter in 1844. The Royal Institute of British Architects (RIBA) was founded in 1834. The Society of Telegraph Engineers, later the Institution of Electrical Engineers (now part of the Institution of Engineering and Technology), was formed in 1871. The Edinburgh Society of Accountants and the Glasgow Institute of Accountants and Actuaries were granted royal charters in the mid 1850s, before England’s various accounting institutes merged into the Institute of Chartered Accountants in England and Wales in 1880.

However “for every man who moved up into the business classes, a greater number necessarily moved down. In the second place economic independence required technical qualifications, attitudes of mind, or financial resources (however modest) which were simply not in the possession of most men and women.” As Hobsbawm goes on to say, it was a system which:

…trod the unvirtuous, the weak, the sinful (i.e. those who neither made money nor controlled their emotional or financial expenditures) into the mud where they so plainly belonged, deserving at best only of their betters’ charity. There was some capitalist economic sense in this. Small entrepreneurs had to plough back much of their profits into the business if they were to become big entrepreneurs. The masses of new proletarians had to be broken into the industrial rhythm of labour by the most draconic labour discipline, or left to rot if they would not accept it. And yet even today the heart contracts at the sight of the landscape constructed by that generation.

This was the landscape upon which the professions alongside much else of our modern world were constructed. The industrial revolution is often presented in a way that suggests that technical innovations were its main driver, but Hobsbawm shows us that this was not so. As he says:

Fortunately few intellectual refinements were necessary to make the Industrial Revolution. Its technical inventions were exceedingly modest, and in no way beyond the scope of intelligent artisans experimenting in their workshops, or of the constructive capacities of carpenters, millwrights and locksmiths: the flying shuttle, the spinning jenny, the mule. Even its scientifically most sophisticated machine, James Watt’s rotary steam-engine (1784), required no more physics than had been available for the best part of a century—the proper theory of steam engines was only developed ex post facto by the Frenchman Carnot in the 1820s—and could build on several generations of practical employment for steam engines, mostly in mines.

What it did require though was the obliteration of alternatives for the vast majority of people to “the industrial rhythm of labour” and a radical reinvention of the language.

These are not easy things to accomplish, which is why we cannot easily imagine the breakdown of late capitalism. However if we focus on AI etc as the drivers of the next industrial revolution, we will probably be missing where the action really is.

I have just been reading Adrian Tchaikovsky’s Service Model. I am sure I will think about it often for years to come.

Imagine a world where “Everything was piles. Piles of bricks and shattered lumps of concrete and twisted rods of rebar. Enough fine-ground fragments of glass to make a whole razory beach. Shards of fragmented plastic like tiny blunted knives. A pall of ashen dust. And, to this very throne of entropy, someone had brought more junk.”

This is Earth outside a few remaining enclaves. And all served by robots, millions of robots.

Robots: like our protagonist (although he would firmly resist such a designation) Uncharles, who has been programmed to be a valet, or gentleman’s gentlerobot; or librarians tasked with preserving as much data from destruction or unauthorised editing as possible; or robots preventing truancy from the Conservation Farm Project where some of the few remaining humans are conscripted to reenact human life before robots; or the fix-it robots; or the warrior robots prosecuting endless wars.

Uncharles, after slitting the throat of his human master for no reason that he can discern, travels this landscape with his hard-to-define-and-impossible to-shut-up companion The Wonk, who is very good at getting into places but often not so good at extracting herself. Until they finally arrive in God’s waiting room and take a number.

Along the way The Wonk attempts to get Uncharles to accept that he has been infected with a Protagonist Virus, which has given Uncharles free will. And Uncharles finds his prognosis routines increasingly unhelpful to him as he struggles to square the world he is perambulating with the internal model of it he carries inside him.

The questions that bounce back between our two unauthorised heroes are many and various, but revolve around:

  1. Is there meaning beyond completing your task list or fulfilling the function for which you were programmed?
  2. What is the purpose of a gentleman’s gentlerobot when there are no gentlemen left?
  3. Is the appearance of emotion in some of Uncharles’ actions and communications really just an increasingly desperate attempt to reduce inefficient levels of processing time? Or is the Protagonist Virus an actual thing?

Ultimately the question is: what is it all for? And when they finally arrive in front of God, the question is thrown back at us, the pile of dead humans rotting across the landscape of all our trash.

This got me thinking about a few things in a different way. One of these was AI.

Suppose AI is half as useful as OpenAI and others are telling us it will be. Suppose that we can do all of these tasks in less than half the time. How is all of that extra time going to be distributed? In 1930 Keynes speculated that his grandchildren would only need to work a 15 hour week, and all of the productivity improvements he assumed in doing so have happened. Yet still full-time work remains the aspiration.

There certainly seems to have been a change of attitude from around 1980 onwards, with those who could choose choosing to work longer, for various reasons which economists are still arguing about, and therefore the hours lost were from those who couldn’t choose, as The Resolution Foundation have pointed out. Unfortunately neither their pay, nor their quality of work, have increased sufficiently for those hours to meet their needs.

So, rather than asking where the hours have gone, it probably makes more sense to ask where the money has gone. And I think we all know the answer to that one.

When Uncharles and The Wonk finally get in to see God, God gives an example of a seat designed to stop vagrants sleeping on it as the indication it needed of the kind of society humans wanted. One where the rich wanted not to have to see or think about the poor. Replacing all human contact with eternally indefatigable and keen-to-serve robots was the world that resulted.

Look at us clever humans, constantly dreaming of ways to increase our efficiency, remove inefficient human interaction, or indeed any interaction which cannot be predicted in advance. Uncharles’ seemingly emotional responses, when he rises above the sea of task-queue-clutching robots all around him, are to what he sees as inefficiency. But what should be the goal? Increasing GDP can’t be it, that is just another means. We are currently working extremely hard and using a huge proportion of news and political affairs airtime and focus on turning the English Channel into the seaborne equivalent of the seat where vagrants and/or migrants cannot rest.

So what should be the goal? Because the reason Service Model will stay with me for some time to come is that it shows us what happens if we don’t have one. The means take over. It seems appropriate to leave the last word to a robot.

“Justice is a human-made thing that means what humans wish it to mean and does not exist at all if humans do not make it,” Uncharles says at one point. “I suggest that ‘kind and ordered’ is a better goal.”

Last week I read The Million Pound Bank Note by Mark Twain and Brewster’s Millions by George Barr McCutcheon, from 1893 and 1902 respectively. Both have been made into films several times. The Mark Twain short story was first made into a silent movie by the great Alexander Korda in 1916, although the best-known adaptations were the one starring Gregory Peck in 1954 and Trading Places (starring Eddie Murphy) in 1983, which included elements of both The Million Pound Bank Note and Mark Twain’s novel The Prince and the Pauper. Cecil B DeMille was the first to attempt a film adaptation of Brewster’s Millions (from the earlier play) in 1914, with the best-known adaptation being Walter Hill’s 1985 movie starring Richard Pryor (movie poster shown above).

Both stories were written before the First World War and it is interesting to see when each has been revived with new adaptations. In particular, although an early attempt was made to film Twain’s story, no one attempted it again until after the Second World War, whereas there was a new adaptation of Brewster during the very interesting period between 1920 and 1922, when the first international financial conferences were being held in Brussels and Genoa to establish an international consensus for policies under which “individuals had to work harder, consume less, expect less from the government as a social actor, and renounce any form of labour action that would impede the flow of production.” The aim was to return to a pre-World War I economic orthodoxy and thereby move what would be very painful economic measures for most people out of the political sphere and into the sphere of “economic science”. In other words, it was a time when the political elite were trying to change the rules of the game.

This may be because Twain’s story was seen as a rather slight tale: a man is given a million pound note, is feted by everyone he meets as a result and so never has to spend it, thereby winning a bet between the two men who gave it to him. Interestingly, an American TV adaptation and the Gregory Peck film a few years later came out around the time the Bank of England actually first issued such notes (called Giants) in 1948, which also relied on the power of people knowing they were there rather than ever having to use them.

The rules of the game certainly vary considerably across the Brewster adaptations: DeMille in 1914 was very respectful of the original, but by 1921 the $7 million had shrunk to $4 million. By 1926, in Miss Brewster’s Millions, Polly Brewster must spend $1 million in 30 days to inherit $5 million. This was the point where Twenty20 fortune dissipation appears to have supplanted the Test Match variety. In 1935 a British version had Brewster needing to spend £500,000 in 6 months to inherit £6 million. In 1945 Brewster must spend $1 million within 60 days to inherit $7 million. By 1954 the first Telugu adaptation has him spending ₹1 lakh in 30 days which, by 1985, has inflated to ₹25 lakh.

Later in 1985, the Richard Pryor film requires Brewster to spend $30 million within 30 days to inherit $300 million, with the tweak that he is given the option to take $1 million upfront, which for the sake of the movie he doesn’t. There have since been five further adaptations reflecting the globalisation of the ideas in the story (three from India, one from Brazil and one from China) before the sequel to the Richard Pryor film last year.

What is striking about both stories is how, although supposedly about financial transactions, albeit of a rather unusual kind, they are in fact all about how people behave around the display of money. In Twain’s tale, Henry Adams is transformed from being perceived as a beggar to being assumed to be an eccentric millionaire as a result of producing the note.

In the Brewster story, Monty Brewster has to spend the million dollars he has been left by his grandfather within a year, so that he has no assets left, in order to claim the seven million dollars left to him by an uncle on this condition. The original story explains the strange condition (something the Richard Pryor film doesn’t do, as far as I can recall) as being due to his uncle hating his grandfather, because of the grandfather’s refusal to accept the marriage of the uncle’s sister. The uncle therefore wanted “to preclude any possible chance of the mingling of his fortune with the smallest portion of Edwin P Brewster’s”.

The problem for Monty is that he is not allowed to tell anyone of the condition, and so the subject of the story is the trouble that the behaviour he is then forced to adopt causes him with New York high society. There are dinners and cruises and carnivals and holiday homes, all bankrolled by Brewster for himself and whoever will journey with him, during which he falls in and out of love with one woman before falling in love with the woman he had grown up alongside. Things normally regarded as good luck, like winning a bet or making a profitable investment, become bad luck for Monty.

By the end of the year, having come very close to spending the whole million with nothing to show for it, he returns from a transatlantic cruise (during which he had at one stage been kidnapped by his friends to prevent him sailing on to South Africa) to find himself spurned by the very society he had tried so hard to cultivate:

With the condemnation of his friends ringing in his troubled brain, with the sneers of acquaintances to distress his pride, with the jibes of the comic papers to torture him remorselessly, Brewster was fast becoming the most miserable man in New York. Friends of former days gave him the cut direct, clubmen ignored him or scorned him openly, women chilled him with the iciness of unspoken reproof, and all the world was hung with shadows. The doggedness of despair kept him up, but the strain that pulled down on him was so relentless that the struggle was losing its equality. He had not expected such a home-coming.

After a scare that the mysterious telegram correspondent Swearengen Jones, who held the $7 million and was assessing his performance, had disappeared, everything comes right for Monty in the end and he marries Peggy, who had agreed to marry him even when she thought him penniless.

And we are left to assume that, once he can display wealth again, everything in the passage above is reversed, just as it is in The Million Pound Bank Note.

There is a lot of plot in the Brewster story in particular, much of which does not amount to very much but keeps Monty Brewster feverishly busy throughout.

These two in many ways ridiculous stories, written just as economics was trying to establish itself as a science and, ultimately, as the discipline that shapes our current societies, reveal, I think, quite a lot about the nature of money amongst people who have a lot of it. Neither Henry nor Monty (apart from an opening twenty-four hours for Henry, and a scene revolving around a pear in the gutter after a night sleeping rough) experiences hunger or the absence of anywhere to sleep at any point. Their concern for money seems to be entirely about social position, the respect of those they regard as their peers, and being able to marry the women they have set their hearts on. In other words, money is not about money for these protagonists; it is about status.

It seems to me that much of the edifice we now call economics may well have been constructed by people in this position. Is this why money creation is represented in so many economic models via constructions clearly at odds with the actual activities of banks (one of many pieces by Steve Keen demonstrating this problem here), and why ideas such as loanable funds and the money multiplier persist in economics education? Perhaps the original architects of these economic theories did not need money to live so much as they needed the respect of those they saw as their peers.
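To make that contrast concrete, here is a minimal sketch of my own devising (not taken from Keen) of the textbook money multiplier story set against the “loans create deposits” description of banking, the one set out, for example, in the Bank of England’s 2014 Quarterly Bulletin article on money creation. The reserve ratio and amounts are arbitrary.

```python
# Textbook "money multiplier": each new deposit is partly re-lent and
# redeposited, so total deposits converge to (1 / reserve_ratio) x new reserves.
def textbook_deposits(new_reserves: float, reserve_ratio: float, rounds: int = 500) -> float:
    total, injection = 0.0, new_reserves
    for _ in range(rounds):
        total += injection
        injection *= (1 - reserve_ratio)  # fraction re-lent and redeposited
    return total

print(textbook_deposits(100, 0.10))  # ~1000: deposits "multiplied up" from reserves

# Endogenous-money description: a loan creates a matching deposit by expanding
# both sides of the bank's balance sheet; no prior pot of savings is required.
bank = {"loans": 0.0, "deposits": 0.0}

def make_loan(bank: dict, amount: float) -> None:
    bank["loans"] += amount     # new asset for the bank
    bank["deposits"] += amount  # new liability: the borrower's spendable deposit

make_loan(bank, 100)
print(bank)  # {'loans': 100.0, 'deposits': 100.0}
```

In the first story banks are mere intermediaries of existing savings; in the second they create purchasing power themselves, which is one reason models built on the first can miss private debt build-ups entirely.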

David Graeber often used to point out how much more time people at the bottom of society spent thinking about people at the top than the people at the top spent thinking about them. Is this at the heart of the problem?

Of course we do still have some social mobility. A relatively small number of people from poor backgrounds can still enter influential professions. Some of them have even become economists! But the very process of becoming a professional is designed to distance you from your origins: years of immersion in a very academic discipline, requiring total concentration and dedication to internalising enough of the professional “truths” to be assessed as qualified to practise, normally while engaged in highly intensive work alongside more senior people for whom these truths have already been securely internalised.

And then once there you are in the Monty Brewster situation, so insecure about your position within this new society you have joined that you will do whatever it takes to maintain it. You are “upwardly mobile”. Your families are proud that you are “getting on” and doing better, certainly in terms of income and professional respect, than they did. There is no serious challenge to this path other than its difficulty, which again creates a massive sunk cost in your mind when considering alternatives. And it is a path which is invariably described as upward.

Meanwhile the societies we have constructed around these economic edifices also have a lot of plot, much of which does not amount to very much but keeps us all feverishly busy most of the time.

According to Pat McFadden, a Government minister, there will be “financial consequences” to the decision to modify the planned cuts to disability and health-related benefits in order to win the vote on the welfare bill. There certainly will be for people receiving these benefits.

Even after the changes made to get it voted through, the bill will still:

  • reduce the health element of Universal Credit for new claims from £97 to £50 per week (a cut of £47 a week, or roughly £2,400 a year) from April 2026 and restrict payment to claimants over the age of 22, although the benefit will now continue to increase at least in line with inflation;
  • possibly re-introduce some of the restrictions on eligibility for personal independence payments following a review.

But that does not appear to be what McFadden was talking about, as he went on to list a number of taxes the Government would not raise. Instead, the “financial consequences” comment sounded more like the empty threat of the playground bully when his victim has unexpectedly given him a bloody nose and he is trying not to lose face. Because nearly all of the newspaper coverage of this event appears to have focused on this reputational aspect rather than on the fiscal significance of the changes:

In my last post, I referred to Harvey Whitehouse’s excellent Inheritance – the Evolutionary Origins of the Modern World, which included this definition of gossip:

When we lived in small communities, in which everybody knew everybody else, news consisted mainly of socially strategic information about who was hoarding wealth, who was telling lies, who was sleeping with whom, who was stealing, who was free-riding, and so on. In most of these newsworthy stories, there would be transgressors and victims, and news purveyors and consumers would be very sensitive to the reputational consequences of this information. The common term for this is gossip.

So, by focusing on the reputational consequences of a welfare bill in the House of Commons, these newspapers prefer to present a story which affects the livelihoods of up to a million people as if it were gossip. The media justify this approach on the grounds that it is something the public are interested in and will therefore buy. Our bias towards stories about reputational consequences, even of people we do not know or are ever likely to meet, is therefore used against us, and the world gets a little less understandable with every gossipy take on a more complex story. This has other implications (or perhaps what McFadden would call “consequences”): the rest of the day’s news seemed to revolve around whether someone had been mean to the Chancellor of the Exchequer and made her cry.

And the actual financial consequences? Well, the BBC made much of the impact of the Chancellor’s tears on the bond and currency markets. Morningstar went further and claimed that investors had saved the Chancellor’s job by forcing the Prime Minister to support her very publicly after failing to do so initially. The unspoken assumption is that the markets control the economy and all we can do is have gossipy conversations in our impotence about whether Rachel, Ed, Wes, Liz or Angela are up or down this week.

This is self-fulfilling: we can be as powerless as we decide to be. Or we can realise that the way we run our country and society is up to us. The £5 billion this is all supposedly about could be raised in any number of ways: slowing down the quantitative tightening programme the Bank of England is set on (a policy of selling bonds before maturity not adopted by the European Central Bank or the Federal Reserve in the United States), or any of a number of suggestions made by Richard Murphy which would have focused on the top 10% of earners. Instead we went after the disabled first. No one forced us to. We did it to ourselves.

Perhaps we should all be a little tearful about that.


Milan Kundera wrote The Book of Laughter and Forgetting in 1979, a few years after moving to France and in the same year that his Czech citizenship was revoked. His books had all been banned in Czechoslovakia in 1968, as most of them poked fun at the regime in one way or another. The Book of Laughter and Forgetting was no exception, focusing, via seven stories, on what we choose to forget in history, politics and our own lives. One of its themes is a word which is difficult to translate into English: litost.

Litost seems to mean the emotional state of suddenly being brought face to face, alone, with how obvious your own hopelessness is. Or something to that effect. Kundera explored several aspects of litost at length in the novel. However, for all the difficulty of describing it exactly, litost feels like a useful word for our times, our politics and our economics.

I want to focus on two specific examples of forgetting and the sudden incidents of litost which have brought them back into focus.

The first, although not chronologically, would be the pandemic. To mark the fifth anniversary of the first lockdown, several articles have suddenly appeared about the lessons we have not learnt from it. Christina Pagel, backed up by module 1 of the Covid-19 Inquiry, reckons:

Preventing future lockdowns requires planning, preparation, investment in public health infrastructure, and investment in testing, virology and medical research

She takes issue with some of the commentary as follows:

But the tenor of reporting and public opinion seems to be that “lockdowns were terrible and so we must not have lockdowns again”. This is the wrong lesson. Lockdowns are terrible but so are unchecked deadly pandemics. The question should be “lockdowns were terrible, so how can we prevent the spread of a new pandemic so we never need one again?”.

However the stampede to get back to “normal” has militated against investing in infrastructure and led to a massive reduction in testing and reporting, and the Covid-19 Inquiry has given the government cover (all questions can simply be met with the response that the Inquiry is still looking at what happened) to actively forget it as quickly as possible. Meanwhile the final module of the Covid-19 Inquiry is not due to conclude until early 2026, which one must hope is before the next pandemic hits. For which, as the former Chief Scientific Adviser and other leading experts have said, we are not remotely prepared, and certainly no better prepared than we were in 2020.

It is tempting to think that this is the first major recent instance involving the forgetting of a crisis to the extent that its repetition would be just as devastating the second time. Which is perhaps a sign of how complete our collective amnesia about 2008 has become.

Make no mistake, 2008 was a complete meltdown of the core of our financial system. People I know who were working in banks at the time described how even the most experienced people around them had no idea what to do. Alistair Darling, Chancellor of the Exchequer at the time, claimed we were hours away from a “breakdown in law and order”.

According to the Commons Library briefing note from October 2018, the Office for Budget Responsibility (OBR) estimates that, as at the end of January 2018, the interventions had cost the public £23 billion overall. That net figure is the result of a £27 billion loss on the RBS rescue, offset by net gains of around £4 billion on the other schemes. Total support in cash and guarantees added up to almost £1.2 trillion, including the nationalisation of Northern Rock (purchased by Virgin Money, which has since been acquired by the Nationwide Building Society) and Bradford & Bingley (sold to Santander), and major stakes in RBS (now NatWest) and Lloyds. Peak government ownership in these banks is shown below:

If you read the Bank of England’s wacky “10 years on” timeline from 2018, you will see a lot about how prepared they are to fight the last war again. As a result of this, cover has been given to actively forget 2008 as quickly as possible.

Except now various people are arguing that the risks of the next financial crisis are increasing again. The FT reported in January on the IMF’s warnings (from their Global Financial Stability Report from April 2024) about the rise in private credit bringing systemic risks.

Meanwhile Steve Keen (one of the very few who actually predicted the 2008 crisis), in his latest work Money and Macroeconomics from First Principles, for Elon Musk and Other Engineers, has a whole chapter devoted to the triggering of crises by reductions in government debt, which makes the following point:

A serious crisis, triggered by a private debt bubble and crash, has followed every sustained attempt to reduce government debt. This can be seen by comparing data on government and private debt back to 1834.

(By the way, Steve Keen is running a webinar for the Institute and Faculty of Actuaries entitled Why actuaries need a new economics on Friday 4 April, which I thoroughly recommend if you are interested.)
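One way to see the accounting behind that claim (my own gloss on the standard sectoral balances identity, not a quotation from Keen’s chapter) is that the financial balances of the domestic private sector, the government and the rest of the world must sum to zero:

$$(S - I) + (T - G) + (M - X) = 0$$

where S is private saving, I private investment, T taxes, G government spending, X exports and M imports. If the government runs surpluses to pay down its debt (T greater than G) and the trade position does not move to offset it, the private sector balance (S − I) has to turn negative: households and firms as a whole must run down savings or take on debt, which is exactly the private debt build-up Keen points to.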

Which brings us to the Spring Statement, which was about (yes, you’ve guessed it!) reducing government debt (or, in its new formulation, “increasing OBR headroom”) and boosting GDP growth. Watching the Chief Secretary to the Treasury, Darren Jones, and Paul Johnson from the IFS nodding along together in the BBC interviews immediately afterwards, you realised how far the idea of allowing the OBR to set policy has taken hold. Johnson’s only complaint seemed to be that the Government appeared to be targeting headroom to the decimal point over all other considerations.

I have already written about the insanity of making OBR forecasts the source of your hard spending limits in government. The backdrop to this Statement was already bad enough. As Citizens Advice have said, people’s financial resilience has never been lower.

But aside from the callousness of it all, it does not even make sense economically. The OBR have rewarded the government for sticking to their forecasts so closely by halving their GDP growth projections and, in the absence of any new taxes, it seems as if disabled people are being expected to do a lot of the heavy lifting by 2029-30:

Part of this is predicated on throwing 400,000 people off Personal Independence Payments (PIPs) by 2029-30. According to the FT:

About 250,000 people, including 50,000 children, will be pushed into relative poverty by the cuts, according to a government impact assessment.

As Roy Lilley says:

We are left standing. Abandoned, to watch the idiocy of what’s lost… the security, human dignity and wellbeing of our fellow man, woman and their family… everything that matters.

As an exercise in fighting the last war, or, according to Steve Keen, the wars successive governments have been fighting since 1834, it takes some beating. It was litost on steroids for millions of people.

So what does the government think these people are going to fill the income gap with? Private debt, of course. And for those in poverty, the terms are not good (e.g. New Horizons quotes a representative APR of 49%, with rates ranging from 9.3% APR up to a maximum of 1,721% APR).
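To put those rates in perspective, here is a minimal sketch of what they mean in cash terms, assuming a hypothetical £400 loan repaid in equal monthly instalments over six months (the loan size and term are my own illustrative choices, not terms quoted by New Horizons):

```python
# Monthly payment on a loan at a given APR, treating the APR as an annual
# effective rate and converting it to an equivalent monthly rate.

def monthly_payment(principal: float, apr: float, months: int) -> float:
    r = (1 + apr) ** (1 / 12) - 1                  # equivalent monthly rate
    return principal * r / (1 - (1 + r) ** -months)

for apr in (0.093, 0.49, 17.21):                   # 9.3%, 49% and 1,721% APR
    pay = monthly_payment(400, apr, 6)
    interest = pay * 6 - 400
    print(f"APR {apr:8.1%}: £{pay:6.2f} a month, £{interest:6.2f} total interest")
```

On these assumptions the interest bill runs from around £10 at 9.3% APR to roughly £50 at the representative 49% APR, while at the maximum 1,721% APR the interest over six months exceeds the amount originally borrowed.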

And for those who can currently afford a mortgage (from page 47 of the OBR report):

Average interest rates on the stock of mortgages are expected to rise from around 3.7 per cent in 2024 to a peak of 4.7 per cent in 2028, then stay around that level until the end of the forecast. The high proportion of fixed-rate mortgages (around 85 per cent) means increases in Bank Rate feed through slowly to the stock of mortgages. The Bank of England estimates around one-third of those on fixed rate mortgages have not refixed since rates started to rise in mid-2021, so the full impact of higher interest rates has not yet been passed on.
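To get a feel for what that one-percentage-point rise means for an individual household when a fix ends, here is a minimal sketch using the standard repayment-mortgage formula; the £200,000 loan and 25-year term are my own illustrative assumptions, not figures from the OBR report:

```python
# Monthly payment on a repayment mortgage: P * r / (1 - (1 + r)**-n),
# where r is the monthly rate (annual rate / 12) and n the number of payments.

def mortgage_payment(principal: float, annual_rate: float, years: int) -> float:
    r, n = annual_rate / 12, years * 12
    return principal * r / (1 - (1 + r) ** -n)

before = mortgage_payment(200_000, 0.037, 25)  # ~3.7%: 2024 average on the stock
after = mortgage_payment(200_000, 0.047, 25)   # ~4.7%: forecast peak in 2028
print(f"3.7%: £{before:,.0f}/month, 4.7%: £{after:,.0f}/month, "
      f"difference ≈ £{after - before:,.0f}/month")
```

On these assumptions the jump is a little over £100 a month, a significant extra outgoing for households whose financial resilience, as Citizens Advice point out, has never been lower.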

So, even before considering the future tax increases the FT appears to be expecting, the levels of private debt look like they will shoot up very quickly. And we all know (excluding the government it seems) where that leads…