So, as a way of signing off until next year, I thought I would write something short about length.
My first job was with De La Rue, specifically within their print division, which was still named after the original founder, sometime straw hat and playing card manufacturer and Guernseyman, Thomas De La Rue. Or TDLR for short. As a result, I can never see tl;dr written anywhere (and it does seem to be everywhere on social media these days, as the amount of written material to work our way through becomes ever more overwhelming) without thinking of my first years of employment. That momentary distraction, I assume, is the complete opposite of what tl;dr is designed for, which is to help you understand something you don’t have time to read.
It feels like there is a shift happening in the etiquette of social media on this. Only recently I saw a response to a piece, not a particularly long one, which started “Don’t have time to read but probably agree as follows…”. This seemed rude to me, but perhaps I am being old-fashioned. Because there are a lot of writers now whom I regularly skim, or whose articles I only read the first halves of. Writers who often have a really good point, but appear to want to say it in as many different ways as possible, nailing every single example imaginable for completeness. But really, who values completeness? I think what we are looking for is careful selection from someone who knows something we don’t about the terrain and who can therefore guide us through at least a swamp or two before leaving us to the next writer. If we wanted completeness, we could stumble into every sinkhole for ourselves.
I did a mini review for Service Model by Adrian Tchaikovsky as a blog post recently which got the following response from the author which I was very chuffed about:
Fascinating (and spot on) little essay on Service Model and how it relates to the real world.
My wife (the one who calls me Swampy Dave sometimes) said “aren’t you a little insulted by the reference to a ‘little essay’?” and I realised that I wasn’t at all. Quite the reverse. I had managed to say something which had a point to it and which others could understand and all within 850 words. If I had to encapsulate why I blog in a sentence that would be it.
Returning to Tchaikovsky, he arranges the books on his website into novels, novellas, shorts and free. People appear to differ about how long each form should be, but Tchaikovsky described a novella as having a beginning and an end but no middle (section 6 of the interview here), tending to pursue one idea to its logical conclusion. A short story takes him a week to write. Everything else is a novel.
Definitions vary; this source defined the different forms as follows:
Flash fiction: under 1000 words (although a lot of competitions stipulate maximum 500 words)
Short stories: 3,500-7,500 words
Novelette (yes I know! I hadn’t heard of this before either!): 7,500-17,000 words
Novella: 17,000-40,000 words
Novel: 40,000 words plus
And then this other source helpfully listed the word count for 175 famous books.
Growing up I regarded War and Peace (finally slogged through it in the late 80s) as the ultimate long book but, at 561,304 words, it is not even close to being the longest, which appears to be Proust’s In Search of Lost Time or A la recherche du temps perdu (1,267,069 in English or slightly fewer in the original French), although it was published in 7 volumes originally. Meanwhile HG Wells’ The Time Machine, Orwell’s Animal Farm and Steinbeck’s Of Mice and Men are officially defined as novellas, despite being, in the view of many, some of the most important books ever written.
I am quite a slow reader, which is perhaps why the question of book length seems to be bothering me so much. I have therefore decided to try and restrict myself to novellas and shorter fiction for 2026 (although the non-fiction is likely to be as long as ever, until the concept of non-fictionella is embraced if ever!) in order to read a wider range of writers. Might also mean there are more book reviews here next year!
So this is my 42nd blog post of the year and the 8th where I have referenced Cory Doctorow. Thought it was more to be honest, so influential has he been on my thought, particularly as I have delved deeper into what, how and why the AI Rush is proceeding and what it means for the people exiting universities over the next few years.
Yesterday Cory published a reminder of his book reviews this year. He is an amazing book reviewer. There are 24 on the list this year, and I want to read every one of them on the strength of his reviews alone.
I would like to repay the compliment by reviewing his latest book: Enshittification (the other publication this year – Picks and Shovels – is also well worth your time by the way). Can’t believe this wasn’t the word of the year rather than rage bait, as it explains considerably more about the times we are living in.
I have been a fan of Doctorow for a couple of years now. I had had Walkaway sat on my shelves for a few years before I read it and was immediately enthralled by his tale of a post scarcity future which had still somehow descended into an inter-generational power struggle hellscape. I moved on to the Little Brother books, now being reenacted by Trump with his ICE force in one major US city after another. Followed those up with The Lost Cause, where the teenagers try desperately to bridge the gap across the generations with MAGA people, with tragic results along the way but a grim determination at the end “the surest way to lose is to stop running”. From there I migrated to the Marty Hench thrillers, his non-fiction The Internet Con (which details the argument for interoperability, ie the ability of any platform to interact with another) and his short fiction (I loved Radicalised, not just for the grimly prophetic Radicalised novella in the collection, but also the gleeful insanity of Unauthorised Bread). I highly recommend them all.
I came to Enshittification after reading his Pluralistic blog most days for the last year and a half, so was initially disappointed to find very little new as I started working my way through it. However, what the first two parts – The Natural History and The Pathology – offer is a patient explanation of the concept of enshittification and how it operates, assuming no previous engagement with the term, all in one place.
Enshittification, as defined by Cory Doctorow, proceeds as follows:
First, platforms are good to their users.
Then they abuse their users to make things better for their business customers.
Next, they abuse those business customers to claw back all the value for themselves.
Finally, they have become a giant pile of shit.
So far, so familiar. But then I got to Part Three, explaining The Epidemiology of enshittification, and the book took off for me. The erosion of antitrust (what we would call competition) law since Carter. “Antitrust’s Vietnam” (how Robert Bork described the 12 years IBM fought and outspent the US Department of Justice year after year defending their monopolisation case) until Reagan became President. How this led to an opening to develop the operating system for IBM when it entered the personal computer market. How this led to Microsoft, etc. Then how the death of competition also killed Big Tech regulation (regulating a competitive market, where competition itself works against collusion, is much easier than regulating one with a small number of big players which absolutely will collude with each other).
And then we get to my favourite chapter of the book “Reverse-Centaurs and Chickenisation”. Any regular reader of this blog will already be familiar with what a reverse centaur is, although Cory has developed a snappy definition in the process of writing this book:
A reverse-centaur is a machine that uses a human to accomplish more than the machine could manage on its own.
And if that isn’t chilling enough for you, the description of the practices of poultry packers, how they control the lives of the nominally self-employed chicken farmers of the US, and how these practices have now been exported to companies like Amazon and Arise and Uber, should certainly be. A rare moment of hilarity in a generally sordid tale of utter exploitation: the prankster who collected up the bottled piss of Amazon drivers who weren’t allowed a loo break and resold it on Amazon‘s own platform as “a bitter lemon drink” called Release Energy. Amazon recategorised it as a beverage without asking for any documentation to prove it was fit to drink and then, when it was so successful it topped their sales chart, rang the prankster up to discuss using Amazon for shipping and fulfillment. My favourite bit, though, is when Doctorow gets on to producing digital rights management (DRM) free audio versions of his own books.
The central point of the DRM issue is, as Cory puts it, “how perverse DMCA 1201 is”:
If I, as the author, narrator, and investor in an audiobook, allow Amazon to sell you that book and later want to provide you with a tool so you can take your book to a rival platform, I will be committing a felony punishable by a five-year prison sentence and a $500,000 fine.
To put this in perspective: If you were to simply locate this book on a pirate torrent site and download it without paying for it, your penalty under copyright law is substantially less punitive than the penalty I would face for helping you remove the audiobook I made from Amazon’s walled garden. What’s more, if you were to visit a truck stop and shoplift my audiobook on CD from a spinner rack, you would face a significantly lighter penalty for stealing a physical item than I would for providing you with the means to take a copyrighted work that I created and financed out of the Amazon ecosystem. Finally, if you were to hijack the truck that delivers that CD to the truck stop and steal an entire fifty-three-foot trailer full of audiobooks, you would likely face a shorter prison sentence than I would for helping you break the DRM on a title I own.
DMCA 1201 is the big brake on interoperability. It is the reason, if you have an HP printer, you have to pay $10,000 a gallon for ink or risk committing a criminal offence by “circumventing an access control” (the software HP have installed on their printers to stop you using anyone else’s printer cartridges). And the reason for the increasing insistence on computer chips in everything from toasters (see “Unauthorised Bread” for where this could lead) to wheelchairs – so that using them in ways the manufacturer and its shareholders disapprove of becomes illegal.
The one last bastion against enshittification by Big Tech was the tech workers themselves. Then the US tech sector laid off 260,000 workers in 2023 and a further 100,000 in the first half of 2024.
In case you are feeling a little depressed (and hopefully very angry too) at this stage, Part Four is called The Cure. This details the four forces that can discipline Big Tech, and how they can all be revived, namely:
Competition
Regulation
Interoperability
Tech worker power
As Cory concludes the book:
Martin Luther King Jr once said, “It may be true that the law cannot make a man love me, but it can stop him lynching me, and I think that’s pretty important, also.”
And it may be true that the law can’t force corporate sociopaths to conceive of you as a human being entitled to dignity and fair treatment, and not just an ambulatory wallet, a supply of gut bacteria for the immortal colony organism that is a limited liability corporation.
But it can make that exec fear you enough to treat you fairly and afford you dignity, even if he doesn’t think you deserve it.
And I think that’s pretty important.
I was reading Enshittification on the train journey back from Hereford after visiting the Hay Winter Weekend, where I had listened to, amongst others, the oh-I’m-totally-not-working-for-Meta-any-more-but-somehow-haven’t-got-a-single-critical-word-to-say-about-them former Deputy Prime Minister Nick Clegg. While I was on the train, a man across the aisle had taken the decision to conduct a conversation with first Google and then Apple on speakerphone. A particular highlight was him just shouting “no, no, no!” at Google‘s bot trying to give him options. He had already been to the Vodafone shop that morning and was on his way to an appointment which he couldn’t get at the Apple Store on New Street in Birmingham. He spotted the title of my book and, when I told him what enshittification meant, and how it might make some sense out of the predicament he found himself in, took a photo of the cover.
My feeling is that enshittification goes beyond Big Tech. It is the defining industrial battle of our times. We shouldn’t primarily worry about whether it is coming from the private or the public sector, as enshittification can happen in both places: from hollowing out justice to “paying more for medicines… at the exact moment we can’t afford to pay enough doctors to prescribe them” in the public sector, where we already reside within the Government’s walled garden, to all of the outrages mentioned above and more in the private sector.
The PFI local health hubs set out in last week’s budget take us back to perhaps the ultimate enshittificatory contracts the Government ever entered into, certainly before the pandemic. The Government got locked into 40 year contracts, took all the risk, and all the profit was privatised. The turbo-charging of the original PFI came out of the Blair-Brown government’s mania for keeping capital spending off the balance sheet in defence of Gordon Brown’s “Golden Rule” which has now been replaced by Rachel Reeves’ equally enshittifying fiscal rules. All the profits (or, increasingly, rents, as Doctorow discusses in the chapter on Varoufakis’ concept of Technofeudalism) from turning the offer to shit always seem to end up in the private sector. The battle is against enshittification from both private and, by proxy, via public monopolies.
Enshittification is, ultimately, a positive and empowering book which I strongly recommend you buy, avoiding Amazon if you can. We can have a better internet than this. We can strike a better deal with Big Tech over how we run our lives. But the surest way to lose is to stop running.
And next time a dead-eyed Amazon driver turns up at your door, be nice: they are probably having a worse day than you are.
In 2017, I was rather excitedly reporting about ideas which were new to me at the time regarding how technology or, as Richard and Daniel Susskind referred to it in The Future of the Professions, “increasingly capable machines” were going to affect professional work. I concluded that piece as follows:
The actuarial profession and the higher education sector therefore need each other. We need to develop actuaries of the future coming into your firms to have:
great team working skills
highly developed presentation skills, both in writing and in speech
strong IT skills
clarity about why they are there and the desire to use their skills to solve problems
All within a system which is possible to regulate in a meaningful way. Developing such people for the actuarial profession will need to be a priority in the next few years.
While all of those things are clearly still needed, it is becoming increasingly clear to me now that they will not be enough to secure a job as industry leaders double down.
And perhaps even worse than the threat of not getting a job immediately following graduation is the threat of becoming a reverse-centaur. As Cory Doctorow explains the term:
A centaur is a human being who is assisted by a machine that does some onerous task (like transcribing 40 hours of podcasts). A reverse-centaur is a machine that is assisted by a human being, who is expected to work at the machine’s pace.
We have known about reverse-centaurs since at least Charlie Chaplin’s Modern Times in 1936.
By Charlie Chaplin – YouTube, Public Domain, https://commons.wikimedia.org/w/index.php?curid=68516472
Think Amazon driver or worker in a fulfillment centre, sure, but now also think of highly competitive and well-paid but still ultimately human-in-the-loop roles responsible for AI systems designed to produce output where errors are hard to spot and therefore to stop. In the latter role you are the human scapegoat: in the phrasing of Dan Davies an “accountability sink”, or in that of Madeleine Clare Elish a “moral crumple zone”, all rolled into one. This is not where you want to be as an early career professional.
So how to avoid this outcome? Well obviously if you have other options to roles where a reverse-centaur situation is unavoidable you should take them. Questions to ask at interview to identify whether the role is irretrievably reverse-centauresque would be of the following sort:
How big a team would I be working in? (This might not identify a reverse-centaur role on its own: you might be one of a bank of reverse-centaurs all working in parallel and identified “as a team” while in reality having little interaction with each other).
What would a typical day be in the role? This should smoke it out unless the smokescreen they put up obscures it. If you don’t understand the first answer, follow up to get specifics.
Who would I report to? Get to meet them if possible. Establish whether they are a technical expert in the field you will be working in. If they aren’t, that means you are!
Speak to someone who has previously held the role if possible. Although bear in mind that, if it is a true reverse-centaur role and their progress to an actual centaur role is contingent on you taking this one, they may not be completely forthcoming about all of the details.
If you have been successful in a highly competitive recruitment process, you may have a little bit of leverage before you sign the contract, so if there are aspects which you think still need clarifying, then that is the time to do so. If you recognise some reverse-centauresque elements from your questioning above, but you think the company may be amenable, then negotiate. Once you are in, you will understand a lot more about the nature of the role of course, but without threatening to leave (which is as damaging to you as an early career professional as it is to them) you may have limited negotiation options at that stage.
In order to do this successfully, self knowledge will be key. It is that point from 2017:
clarity about why they are there and the desire to use their skills to solve problems
To that word skills I would now add “capabilities” in the sense used in a wonderful essay on this subject by Carlo Iacono called Teach Judgement, Not Prompts.
You still need the skills. So, for example, if you are going into roles where AI systems are producing code, you need sufficiently good coding skills yourself to write a program to check the code written by the AI system. If the AI system is producing communications, your own communication skills need to go beyond producing work that communicates effectively to an audience, to the next level where you understand what it is about your own communication that achieves that: what is necessary, what is unnecessary, what gets in the way of effective communication, ie all of the things that the AI system is likely to get wrong. Then you have a template against which to assess the output from an AI system, and for designing better prompts.
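To make the code-checking point concrete, here is a minimal sketch of what "a program to check code written by the AI system" might look like in practice. The example task, the function names and the specific checks are my own hypothetical additions, not from any of the sources above: the idea is simply that instead of eyeballing an AI-generated routine, you state the properties its output must satisfy and test them independently, on many random inputs.

```python
import random

# Hypothetical example: a routine an AI system might have produced for us.
def ai_dedupe(items):
    """Remove duplicates from a list, preserving first-seen order."""
    seen = set()
    out = []
    for x in items:
        if x not in seen:
            seen.add(x)
            out.append(x)
    return out

# Our independent checker: properties the output must satisfy
# regardless of how the routine was implemented.
def check_dedupe(fn, trials=200):
    for _ in range(trials):
        data = [random.randint(0, 9) for _ in range(random.randint(0, 20))]
        result = fn(data)
        # No duplicates may remain.
        assert len(result) == len(set(result)), "output contains duplicates"
        # Nothing may be lost or invented.
        assert set(result) == set(data), "output lost or invented elements"
        # Order of first occurrence must be preserved.
        firsts = sorted(set(data), key=data.index)
        assert result == firsts, "first-seen order not preserved"

check_dedupe(ai_dedupe)
print("all checks passed")
```

The point of writing the checks yourself, rather than asking the AI system to write them, is that they encode your own understanding of the task: the template against which the AI output is assessed.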
However specific skills and tools come and go, so you need to develop something more durable alongside them. Carlo has set out four “capabilities” as follows:
Epistemic rigour, which is being very disciplined about challenging what we actually know in any given situation. You need to be able to spot when AI output is over-confident given the evidence, or when a correlation is presented as causation. What my tutors used to refer to as “hand waving”.
Synthesis is about integrating different perspectives into an overall understanding. Making connections between seemingly unrelated areas is something AI systems are generally less good at than analysis.
Judgement is knowing what to do in a new situation, beyond obvious precedent. You get to develop judgement by making decisions under uncertainty, receiving feedback, and refining your internal models.
Cognitive sovereignty is all about maintaining your independence of thought when considering AI-generated content. Knowing when to accept AI outputs and when not to.
All of these capabilities can be developed with reflective practice, getting feedback and refining your approach. As Carlo says:
These capabilities don’t just help someone work with AI. They make someone worth augmenting in the first place.
In other words, if you can demonstrate these capabilities, companies who themselves are dealing with huge uncertainty about how much value they are getting from their AI systems and what they can safely be used for will find you an attractive and reassuring hire. Then you will be the centaur, using the increasingly capable systems to improve your own and their productivity while remaining in overall control of the process, rather than a reverse-centaur for which none of that is true.
One sure sign that you are straying into reverse-centaur territory is when a disproportionate amount of your time is spent on pattern recognition (eg basing an email/piece of coding/valuation report on an earlier email/piece of coding/valuation report dealing with a similar problem). That approach was always predicated on being able to interact with a more experienced human who understood what was involved in the task at some peer review stage. But it falls apart when there is no human to discuss the earlier piece of work with, because the human no longer works there, or a human didn’t produce the earlier piece of work. The fake it until you make it approach is not going to work in environments like these where you are more likely to fake it until you break it. And pattern recognition is something an AI system will always be able to do much better and faster than you.
Instead, question everything using the capabilities you have developed. If you are going to be put into potentially compromising situations in terms of the responsibilities you are implicitly taking on, the decisions needing to be made and the limitations of the available knowledge and assumptions on which those decisions will need to be based, then this needs to be made explicit, to yourself and the people you are working with. Clarity will help the company which is trying to use these new tools in a responsible way as much as it helps you. Learning is going to be happening for them as much as it is for you here in this new landscape.
And if the company doesn’t want to have these discussions or allow you to hamper the “efficiency” of their processes by trying to regulate them effectively? Then you should leave as soon as you possibly can professionally and certainly before you become their moral crumple zone. No job is worth the loss of your professional reputation at the start of your career – these are the risks companies used to protect their senior people of the future from, and companies that are not doing this are clearly not thinking about the future at all. Which is likely to mean that they won’t have one.
To return to Cory Doctorow:
Science fiction’s superpower isn’t thinking up new technologies – it’s thinking up new social arrangements for technology. What the gadget does is nowhere near as important as who the gadget does it for and who it does it to.
You are going to have to be the generation who works these things out first for these new AI tools. And you will be reshaping the industrial landscape for future generations by doing so.
And the job of the university and further education sectors will increasingly be to equip you with both the skills and the capabilities to manage this process, whatever your course title.
Just type “mixed economy graphic” into Google and you will get a lot of diagrams like this one – note that they normally have to pick out the United States for special mention. Notice the big gap between those countries – North Korea, Cuba, China and Russia – and us. It is a political statement masquerading as an economic one.
This same line is used to describe our political options. The Political Compass added an authoritarian/libertarian axis in their 2024 election manifesto analysis but the line from left to right (described as the economic scale) is still there:
So here we are on our political and economic spectrum, where tiny movements between the very clustered Reform, Conservative, Labour and Liberal Democrat positions fill our newspapers and social media comment. The Greens and, presumably if it ever gets off the ground, Your Party are seen as so far away from the cluster that they often get left out of our political discourse. It is an incredibly narrow perspective and we wonder why we are stuck on so many major societal problems.
This is where we have ended up following the “slow singularity” of the Industrial Revolution I talked about in my last post. Our politics coalesced into one gymnasts’ beam, supported by the hastily constructed Late Modern English fashioned for this purpose in the 1800s, along which we have all been dancing ever since, between the market information processors at the “right” end and the bureaucratic information processors at the “left” end.
So what does it mean for this arrangement if we suddenly introduce another axis of information processing, ie the large language AI models? I am imagining something like this:
What will this mean for how countries see their economic organisation? What will it mean for our politics?
In 1884, the English theologian, Anglican priest and schoolmaster Edwin Abbott Abbott published a satirical science fiction novella called Flatland: A Romance of Many Dimensions. Abbott’s satire was about the rigidity of Victorian society, depicted as a two-dimensional world inhabited by geometric figures: women are line segments, while men are polygons with various numbers of sides. We are told the story from the viewpoint of a square, which denotes a gentleman or professional. In this world three-dimensional shapes are clearly incomprehensible, with every attempt to introduce new ideas from this extra dimension considered dangerous. Flatland is not prepared to receive “revelations from another world”, as it describes anything existing in the third dimension, which is invisible to them.
The book was not particularly well received and fell into obscurity until it was embraced by mathematicians and physicists in the early 20th century as the concept of spacetime was being developed by Poincaré, Einstein and Minkowski amongst others. And what now looks like a prophetic analysis of the limitations of the gymnasts’ beam economic and political model of the slow singularity has continued to not catch on at all.
However, much as with Brewster’s Millions, the incidence of film adaptations of Flatland gives some indication of when the idea has come back to some extent. It tells us that it wasn’t until 1965 that someone thought it was a good idea to make a movie of Flatland, and no one else attempted it until an Italian stop-motion film in 1982. There were then two attempts in 2007, which I can’t help but think of as a comment on the developing financial crisis at the time, and, in 2012, a sequel based on Bolland: een roman van gekromde ruimten en uitdijend heelal (which translates as Sphereland: A Fantasy About Curved Spaces and an Expanding Universe), Dionys Burger’s 1957 Dutch sequel to Flatland, which didn’t get translated into English until 1965, when the first animated film came out.
So here we are, with a new approach to processing information and language to sit alongside the established processors of the last 200 years or more. Will it perhaps finally be time to abandon Flatland? And if we do, will it solve any of our problems or just create new ones?
I have just been reading Adrian Tchaikovsky’s Service Model. I am sure I will think about it often for years to come.
Imagine a world where “Everything was piles. Piles of bricks and shattered lumps of concrete and twisted rods of rebar. Enough fine-ground fragments of glass to make a whole razory beach. Shards of fragmented plastic like tiny blunted knives. A pall of ashen dust. And, to this very throne of entropy, someone had brought more junk.”
This is Earth outside a few remaining enclaves. And all served by robots, millions of robots.
Robots: like our protagonist (although he would firmly resist such a designation) Uncharles, who has been programmed to be a valet, or gentleman’s gentlerobot; or librarians tasked with preserving as much data from destruction or unauthorised editing as possible; or robots preventing truancy from the Conservation Farm Project where some of the few remaining humans are conscripted to reenact human life before robots; or the fix-it robots; or the warrior robots prosecuting endless wars.
Uncharles, after slitting the throat of his human master for no reason that he can discern, travels this landscape with his hard-to-define and impossible-to-shut-up companion The Wonk, who is very good at getting into places but often not so good at extracting herself. Until they finally arrive in God’s waiting room and take a number.
Along the way The Wonk attempts to get Uncharles to accept that he has been infected with a Protagonist Virus, which has given Uncharles free will. And Uncharles finds his prognosis routines increasingly unhelpful to him as he struggles to square the world he is perambulating with the internal model of it he carries inside him.
The questions that bounce back between our two unauthorised heroes are many and various, but revolve around:
Is there meaning beyond completing your task list or fulfilling the function for which you were programmed?
What is the purpose of a gentleman’s gentlerobot when there are no gentlemen left?
Is the appearance of emotion in some of Uncharles’ actions and communications really just an increasingly desperate attempt to reduce inefficient levels of processing time? Or is the Protagonist Virus an actual thing?
Ultimately the question is: what is it all for? And when they finally arrive in front of God, the question is thrown back at us, the pile of dead humans rotting across the landscape of all our trash.
This got me thinking about a few things in a different way. One of these was AI.
Suppose AI is half as useful as OpenAI and others are telling us it will be. Suppose that we can do all of these tasks in less than half the time. How is all of that extra time going to be distributed? In 1930 Keynes speculated that his grandchildren would only need to work a 15-hour week. And all of the productivity improvements he assumed in doing so have happened. Yet still full-time work remains the aspiration.
There certainly seems to have been a change of attitude from around 1980 onwards, with those who could choose choosing to work longer, for various reasons which economists are still arguing about, and therefore the hours lost were from those who couldn’t choose, as The Resolution Foundation have pointed out. Unfortunately neither their pay, nor their quality of work, have increased sufficiently for those hours to meet their needs.
So, rather than asking where the hours have gone, it probably makes more sense to ask where the money has gone. And I think we all know the answer to that one.
When Uncharles and The Wonk finally get in to see God, God gives an example of a seat designed to stop vagrants sleeping on it as the indication it needed of the kind of society humans wanted. One where the rich wanted not to have to see or think about the poor. Replacing all human contact with eternally indefatigable and keen-to-serve robots was the world that resulted.
Look at us clever humans, constantly dreaming of ways to increase our efficiency, remove inefficient human interaction, or indeed any interaction which cannot be predicted in advance. Uncharles’ seemingly emotional responses, when he rises above the sea of task-queue-clutching robots all around him, are provoked by what he sees as inefficiency. But what should be the goal? Increasing GDP can’t be it, that is just another means. We are currently working extremely hard, and using a huge proportion of news and political affairs airtime and focus, on turning the English Channel into the seaborne equivalent of the seat where vagrants and/or migrants cannot rest.
So what should be the goal? Because the reason Service Model will stay with me for some time to come is that it shows us what happens if we don’t have one. The means take over. It seems appropriate to leave the last word to a robot.
“Justice is a human-made thing that means what humans wish it to mean and does not exist at all if humans do not make it,” Uncharles says at one point. “I suggest that ‘kind and ordered’ is a better goal.”
Meanwhile, Microsoft have recently produced a report into the occupational implications of generative AI, and their top 40 vulnerable roles looks like this (look at where data scientist, mathematician and management analyst sit – all noticeably more replaceable by AI than “model”, the role which caused all the headlines when Vogue did it last week):
So this looks like a process well underway rather than a theoretical one for the future. But I want to imagine a few years ahead. Imagine that this process has continued to gut what we now regard as entry level jobs and that the warning of Dario Amodei, CEO of AI company Anthropic, that half of “administrative, managerial and tech jobs for people under 30” could be gone in 5 years, has come to pass. What then?
Well this is where it gets interesting (for some excellent speculative fiction about this, the short story Human Resources and the novel Service Model by Adrian Tchaikovsky will certainly give you something to think about), because there will still be a much smaller number of jobs in these roles. They will be very competitive. Perhaps we will see FBI-style recruitment processes becoming more common for the rarefied few, probably administered by the increasingly capable systems I discuss below. They will be paid a lot more. However, as Cory Doctorow describes here, the misery of being the human in the loop for an AI system designed to produce output where errors are hard to spot, and therefore to stop, includes being the ready-made scapegoat (or “moral crumple zone” or “accountability sink”) for when the system is inevitably used to overreach what it was programmed for and produces something terrible. Doctorow calls these arrangements “reverse centaurs”, ie ones where the human has become the horse part. The AI system is no longer working for you as some “second brain”. You are working for it, and no company is going to blame the very expensive AI system it has invested in when there is a convenient and easily-replaceable (remember how hard these jobs will be to get) human candidate to take the fall. And it will be assumed that people will still do these jobs, reasoning that they are the only route to highly paid and more secure jobs later, or that they will be able to retire at 40, as the aspiring Masters of the Universe (the phrase coined by Tom Wolfe in The Bonfire of the Vanities) in the City of London have been telling themselves since the 1980s. Only this time surrounded by robot valets, no doubt.
But a model where all the gains go to people from one, older, generation at the expense of another, younger, generation depends on there being reasonable future prospects for that younger generation or some other means of coercing them.
In their book, The Future of the Professions, Daniel and Richard Susskind talk about the grand bargain. It is a form of contract, but, as they admit:
The grand bargain has never formally been reduced to writing and signed, its terms have never been unambiguously and exhaustively articulated, and no one has actually consented expressly to the full set of rights and obligations that it seems to lay down.
Atul Gawande memorably expressed the grand bargain for the medical profession (in Better) as follows:
The public has granted us extraordinary and exclusive dispensation to administer drugs to people, even to the point of unconsciousness, to cut them open, to do what would otherwise be considered assault, because we do so on their behalf – to save their lives and provide them comfort.
The Susskinds questioned (in 2015) whether this grand bargain could survive a future of “increasingly capable systems” and suggested a future when all 7 of the following models were in use:
The traditional model, ie the grand bargain as it works now. Human professionals providing their services face-to-face on a time-cost basis.
The networked experts model. Specialists work together via online networks. BetterDoctor would be an example of this.
The para-professional model. The para-professional has had less training than the traditional professional but is equipped by their training and support systems to deliver work independently within agreed limits. The medical profession’s battle with this model has recently given rise to the Leng Review.
The knowledge engineering model. A system is made available to users, including a database of specialist knowledge and the modelling of specialist expertise based on experience in a form that makes it accessible to users. Think tax return preparation software or medical self-diagnosis online tools.
The communities of experience model, eg Wikipedia.
The embedded knowledge model. Practical expertise built into systems or physical objects, eg intelligent buildings which have sensors and systems that test and regulate the internal environment of a building.
The machine-generated model. Here practical expertise is originated by machines rather than by people. This book was written in 2015 so the authors did not know about large language models then, but these would be an obvious example.
What all of these alternative models had in common, of course, was the potential to no longer need the traditional model professional in future.
There is another contract which has never been written down: that between the young and the old in society. Companies are jumping the gun on how the grand bargain is likely to be re-framed and adopting systems before all of the evidence is in. As Doctorow said in March (ostensibly about Musk’s DOGE when it was in full firing mode):
AI can’t do your job, but an AI salesman (Elon Musk) can convince your boss (the USA) to fire you and replace you (a federal worker) with a chatbot that can’t do your job
What strikes me is that the boss in question is generally at least 55. As one consultancy has noted:
Notably, the youngest Baby Boomers turned 60 in 2024—the average age of senior leadership in the UK, particularly for non-executive directors. Executive board directors tend to be slightly younger, averaging around 55.
Assume there was some kind of written contract between young and old that gave the older generation the responsibility to be custodian of all of the benefits of living in a civilised society while they were in positions of power so that life was at least as good for the younger generation when they succeeded them.
Every time a Baby Boomer argues that the state pension age must increase because “we” cannot afford it, he or she is arguing both that the worker who will then be paying for his or her pension should continue to do so, and that that worker should accept a delay in receiving their own quid pro quo – with no risk that the changes will apply to the Boomer, since all changes are flagged many years in advance. Such a contract would clearly be in breach. Every Boomer graduate of more than 35 years ago who argues for the cost of student loans to increase, having never paid for theirs, would break such a contract. Every Boomer homeowner who argues against any measure which might moderate the house price inflation they benefit from in increased equity would break such a contract. And of course any such contract worth its name would require strenuous efforts to limit climate change.
And when a Boomer removes a graduate job to temporarily support their share price (so-called rightsizing) in favour of a necessarily not-yet-fully-tested system (by which I mean more than testing the software: also all of the complicated network of relationships required to make any business operate successfully), the temporary inflation of the share price, and its impact on executive bonuses, is being valued much more highly than both the future of the business and the generation that will be needed to run it.
This is not embracing the future so much as selling a futures contract before setting fire to the actual future. And that is not a contract so much as an abusive relationship between the generations.
I want to talk about The Future by Naomi Alderman. I read it last year, after wandering around the Hay Festival bookshop moaning that they don’t do science fiction and then coming across Naomi’s book and realising I had just missed her being interviewed. Then I watched the interview and bought both The Future and The Power (which I will talk about at some future date, but which is equally terrific).
The book is about Lenk Sketlish, CEO of the Fantail social network, Zimri Nommik, CEO of the logistics and purchasing giant Anvil, Ellen Bywater, CEO of Medlar Technologies, the world’s most profitable personal computing company, and the people working for them, and the people linked with those people. Zimri, Ellen and Lenk are at least as monstrous as Jeff, Sundar, Elon, Tim and Mark. And they are all preparing for the end of the world.
(If you need to remind yourself what Elon, Jeff, Mark and Sundar all look like milling around, below is a link to Trump’s inauguration.)
Anvil is set up with alerts for signs of the end of the world being reported anywhere: giant hailstones, a plague of locusts, Mpox, a rain of blood (which turned out to be a protest for menstrual equity, involving blood-soaked tampons being thrown at Lenk and co as they emerged from a courthouse in Washington). The information Zimri, Ellen and Lenk have on everybody else in the world makes them feel all-seeing, all-hearing, all-knowing. Combined with riches unknown to anyone before in history, it makes them feel invulnerable, even to the end of the world, even to each other. Which turns out, of course, to be their decisive vulnerability.
It takes in survivalism and religious cults, and wraps it all up in a thriller plot which is absolutely the kind of science fiction you want to be reading now instead of listening out for the latest antics of the horse in the hospital. And it was all written over a year before Elon even started with DOGE. The Future by Naomi Alderman is a fantastic read, particularly if you would like to see someone like Musk get an appropriate end to his story. I obviously won’t spoil it by saying what that is, but I don’t think I would be giving anything away by saying rockets are involved!
Came across this on YouTube today and it was such a brilliant discussion in the same area as my post from yesterday (which went out before I had seen this), but which went much further in a number of really interesting directions, that I thought many of you would be interested. Look out for a mention early in the video for the late great Iain Banks, science fiction fans!
Happy new year to everyone who reads this blog! I am planning for there to be quite a lot more activity here in 2025, moving from an average of one article a month to at least weekly. There should be more cartoons too – Pinhead and Spikes even made it to our Christmas cake this year.
There is a lot I want to write about this year. Expect some or all of the following themes in the next few months (in no particular order):
Some examples using Steve Keen’s Ravel software to demonstrate how Government debt is not the constraint they think it is.
Extending Naomi Alderman’s argument in The Future that we could get rid of the Tech Bros and not miss them, effectively upending Ayn Rand’s ideas in Atlas Shrugged. They are not key workers.
Keynes’ argument that, with the future so uncertain, we should not sacrifice people in the present to our models of it.
Spiegelhalter on the four types of luck, which cuts away at the meritocracy argument for distributing wealth.
How the professions have become a way of solidifying and enabling the massively uneven distribution we see. Have they outgrown their usefulness in their current form, just like the guilds did?
How the choice for providing public goods appears to boil down to public ownership or private monopoly – with accompanying Technofeudalism replacing capitalism. Why are we so much more relaxed about private monopolies than we were 100 years ago, when it accelerates inequalities so much?
The relationship between worldbuilding in science fiction and people living in their own models in the policy making world. Great example of this just this morning in the FT.
So plenty to do. If this sounds interesting to you, please stick with the blog, which will not be going to Substack and will not be charging a subscription. If it sounds really interesting to you, tell a friend! Will be in touch again soon.
Imagine a super-hero who could not be killed. No, I don’t mean Deadpool. A more apt name for our super-hero would be Deadmeat. Deadmeat is empirically dead, but, rather like the Monty Python parrot, is being energetically kept alive by the pretence of its continued existence amongst all of those around it. So much so that it becomes impolite to expose the pretence and point out that Deadmeat is in fact dead. If you really push, and someone likes you enough to want to give you an explanation, you will have a hand put on your shoulder and be led away to a corner to have the pretence explained to you. What that explanation turns out to be is something like this: Deadmeat is of course the Paris climate agreement from 2015, which committed 193 countries plus the EU to “pursue efforts” to limit global temperature rises to 1.5C, and to keep them “well below” 2.0C above those recorded in pre-industrial times.
Deadmeat, it turns out, wasn’t shot. Deadmeat was overshot. Under overshoot, we bring the terrible thing back under control after it has done the damage and hope we can fix the damage at a later date. It’s a bit like the belief in cryopreservation or uploading our brains into cyberspace in the hope that we can have our bodies fixed with future medicine or be provided with artificial bodies. It means relying on science fiction to save us.
Andreas Malm and Wim Carton have considered this approach and how we got here in their latest book Overshoot. For me there are two big ideas in this book, although the account of how things definitively got away from us immediately post pandemic and exactly how that played out is mesmerising too. I thoroughly recommend a read.
The first big idea is the problem with the justification for overshoot in the first place, which is that at some point in the future we will be so much richer and more technologically advanced that it will be much easier to bring carbon dioxide levels down to sustainable levels than to try and stay within sustainable levels now. In what they call “The Contradiction of the Last Moment” Malm and Carton show how an intense fresh round of fossil fuel investment is almost certain to occur close to a temperature deadline (ie fossil fuel companies rushing to build more infrastructure while it is still allowed), whether it is 1.5 or 2 degrees or something higher. Then, as they put it “the interest in missing it will be overwhelmingly strong”. If an investment is 40 or 50 years old, then it might not be so disastrous to have it retired, but if a fossil fuel company has invested billions in the last few years in it? They will fight tooth and nail to keep it open and producing. And by prolonging the time until the retirement of fossil fuel infrastructure, the capital which has used the time to entrench its position and now owns a thousand new plants rather than a few hundred will be in a much stronger position to dictate policy. The longer we leave it, they argue, the harder it will become to retire fossil fuels, not easier.
The second big idea explains why, despite the enormous price collapse of solar power in particular, there is no Big Solar to compete with Big Oil. As they put it “there was no Microsoft or Apple or Facebook. More broadly, there was no Boulton & Watt of the flow, no Edison Machine Works, no Ford factories, no ascendant clusters of capital accumulation riding this wave.” The only remotely comparable company would be Tesla, but they produced cars. Why is this?
Malm and Carton talk about “the scissor”: the difference between the stock of the fossil fuel industry and the flow of renewable power. Fossil fuels are, as they put it, “highly rivalrous goods: the consumption of one barrel of oil or one wagon-load of coal means that no one can ever consume it again. Every piece of fossil fuel burns once and once only. But supplies of sunlight and wind are in no way affected by any one consumer’s use.”
And this is the key, I think. What economists call “public goods” – goods which are non-rivalrous (ie your use of the sun’s energy does not stop somebody else’s unless you put them in the shade) and non-excludable (ie you cannot easily stop someone else from using it, in this case by sticking a solar panel on their roof) – are very difficult if not impossible to make a profit from. Private markets will therefore not provide these goods: possibly not at all without extremely artificial regulation (something we have probably had enough of with our utilities in the UK), and certainly not in the quantity that will be required.
In Postcapitalism, Paul Mason discussed the options when the price mechanism disappears and additional units of output cannot be charged for. As he put it:
Technologically, we are headed for zero-price goods, unmeasurable work, an exponential takeoff in productivity and the extensive automation of physical processes. Socially, we are trapped in a world of monopolies, inefficiency, the ruins of a finance-dominated free market and a proliferation of “bullshit jobs”.
This also ties in with my own experience and others I have spoken to over the years about how hard it is to invest outside of fossil fuels and make a return.
Therefore if the private sector will not provide public goods and renewable power is predominantly a public good, then it follows that renewable power needs to be in public ownership. And if the climate crisis requires all power to be renewable and zero carbon, which it does, then it also follows that the entire power sector ultimately needs to be in public ownership too.
And then the motivation for overshoot becomes clear and how high the stakes are: not just the proceeds of the sale from one dead parrot as it turns out, but the future of private power generation. My fear is that the Deadmeat franchise may end up having as many sequels as Godzilla (38 and counting). With the potential to do rather more damage in the process.