The Actuary magazine recently had a debate about whether the underlying data or the story you wove around it was more important. I’m not sure there is always a clear distinction between the two, as Dan Davies rather neatly illustrates here, but my view is that, if a binary choice has to be made, it is always going to be the story. And there was a great example of this which popped up recently in the FT.

The FT article was ‘Is university still worth it?’ is the wrong question, by John Burn-Murdoch, with great graphs as usual by John. However, as is sometimes the case, I feel that a very different and more convincing story could be wrapped around the same datasets he is showing us.

The article’s thesis is as follows:

The graduate earnings premium, ie how much more graduates earn on average than non-graduates, has fallen in the UK as the proportion going to university has risen, whereas it has risen in other countries:

In the UK, we have had much weaker productivity growth than the other comparator countries, and also “the steady ramping up of the minimum wage has squeezed the earnings premium from the lower end too”:

We have also had a much smaller increase in the percentage of managerial and professional jobs than a different group of comparator countries (they haven’t mentioned Germany before), meaning graduates are forced to take lower-salaried jobs elsewhere:

So the answer according to the FT? We should focus on economic growth rather than “tweaking” higher education intake and funding. Then graduate earnings would be higher, student loans could be more generous(!) and students would have more chance of getting a good job.

Well perhaps. But here’s a different framing of the same data that I find more persuasive.

Let’s start by addressing that point about the minimum wage. According to the House of Commons Library report on this, the UK’s minimum wage is broadly comparable to that of France and the Netherlands, although higher than Canada’s and much higher than that of the United States. The employers who are the FT’s constituency would obviously like us lower down this particular chart:

The main economic framing here is the progress myth of the UK’s business community: economic growth. All problems can be solved if we can just get more economic growth. Apparently we need more inequality in pay between graduates and non-graduates, which we can get by generating more economic growth. This is honest of them at least, although I don’t see much evidence that the economic growth they crave will go into skilled job creation rather than stock buybacks (according to Motley Fool, “Companies spent $249 billion on stock buybacks in Q3 2025, and $777 billion over the first three quarters of 2025.”).

There are a lot of problems with framing every economic question with respect to economic growth, memorably illustrated by Zack Polanski of the Green Party in this less than 3 minute video recently (I strongly recommend you watch it before you read on – click on the read in browser link if you can’t see it):

Economic growth is increasingly without purpose, wasteful of energy and poorly distributed. It is chasing outputs, literally any outputs, whatever the cost to the environment, our health system, our education system, our social support systems and our communities. Looking at the framing above, you can see that economic growth as currently pursued will always treat as a problem anything which stops the concentration of wealth amongst the already wealthy: a higher national minimum wage, say, or a fall in a made-up concept like the graduate earnings premium (itself a framing designed to make reducing inequality seem undesirable). Lack of productivity growth, itself a proxy for this kind of economic growth (because if you ask why we need more productivity, the answer is always to get more economic growth), is usually directed as a criticism at “lazy” UK workers rather than at under-investing and over-extracting UK business owners.

But what if, instead of economic growth, your progress myth was reducing inequality? Or growing equality within the economy?

Source: World Inequality Database wid.world

If you focused on inequality rather than economic growth, you would find that inequality correlates with everything we say we don’t want. Unlike economic growth, equality as an aim has the advantage of an evidence base for the claim that it improves society:

Source: https://media.equality-trust.out.re/uploads/2024/07/The-Spirit-Level-at-15-2024-FINAL.pdf

If you focused on inequality, then you would be pleased that we have had an increase in our minimum wage. You would think that the same FT article’s admission that UK graduates’ skill levels are higher than those in the United States was more important than something called a graduate earnings premium.

Burn-Murdoch is right to say asking whether university is worth it is the wrong question.

However economic growth is the wrong answer.

And I thought I would probably be stopping there for this week. But then something odd happened. A “Thought Exercise” set in June 2028 “detailing the progression and fallout of the Global Intelligence Crisis” (ie science fiction), published on 23 February, may have tanked the share price of IBM later that day. The fall definitely happened, with IBM’s share price falling 13%, its biggest fall since 2000, alongside smaller falls in other tech stocks.

Source: https://markets.ft.com/data/equities/tearsheet/summary?s=IBM:NYQ

According to the FT:

Investors have recently seized on social media rumours and incremental developments by small AI companies to justify further selling, with a widely circulated blog post by Citrini Research over the weekend describing how AI could hypothetically push the US unemployment rate above 10 per cent by 2028, proving the latest catalyst.

The likelihood of the scenario portrayed is difficult to assess, but the speed of the total economic collapse it subsequently describes feels unlikely, if not impossible. However, the fact that the markets are this jittery tells us something, I think. As Carlo Iacono puts it:

We are living through a period in which the gap between “plausible narrative” and “tradeable signal” has collapsed to nearly nothing. When a scenario feels real enough to model, and the underlying anxiety is already there waiting to be organised, fiction and forecast become functionally indistinguishable.

The data underlying the markets hasn’t changed, but the story has. I rest my case.

Het Scheepvaartmuseum, Amsterdam, in the fog. Another museum which is well worth a visit

To be read to the accompaniment of Lindisfarne singing Fog on the Tyne, or possibly Kate Bush singing The Fog.

Reporting on AI is all over the place, in both meanings of that phrase. Some think it is very dangerous but that the people working on it should be trusted to police it themselves. Some are retreating from prediction but are instead trying to draw a coastline “knowing the interior is mostly fog”. Some are playing war games in the Arctic with different LLMs. But everyone seems fairly confident they have a hot take. I wonder.

The book I finished this weekend had a passage about a first experiment with a new substance which could shield against gravity. Mr Cavor, the rather unworldly scientist, is explaining to Mr Bedford, a man with no obvious talents other than to look for a quick buck where he can find one, what would have happened if his substance, Cavorite, had not got dislodged fairly quickly from where they had positioned it:

“You perceive,” he said, “it formed a sort of atmospheric fountain, a kind of chimney in the atmosphere. And if the Cavorite itself hadn’t been loose and so got sucked up the chimney, does it occur to you what
would have happened?”

I thought. “I suppose,” I said, “the air would be rushing up and up over that infernal piece of stuff now.”

“Precisely,” he said. “A huge fountain—”

“Spouting into space! Good heavens! Why, it would have squirted all the atmosphere of the earth away! It would have robbed the world of air! It would have been the death of all mankind! That little lump of stuff!”

“Not exactly into space,” said Cavor, “but as bad—practically. It would have whipped the air off the world as one peels a banana, and flung it thousands of miles. It would have dropped back again, of course—but on an asphyxiated world! From our point of view very little better than if it never came back!”

I stared. As yet I was too amazed to realise how all my expectations had been upset. “What do you mean to do now?” I asked.

“In the first place if I may borrow a garden trowel I will remove some of this earth with which I am encased, and then if I may avail myself of your domestic conveniences I will have a bath. This done, we will converse more at leisure. It will be wise, I think”—he laid a muddy hand on my arm—“if nothing were said of this affair beyond ourselves. I know I have caused great damage—probably even dwelling-houses may be ruined here and there upon the country-side. But on the other hand, I cannot possibly pay for the damage I have done, and if the real cause of this is published, it will lead only to heartburning and the obstruction of my work. One cannot foresee everything, you know, and I cannot consent for one moment to add the burden of practical considerations to my theorising…”

The extract is, of course, from HG Wells’ classic The First Men in the Moon, published in 1901.

In case you are in any doubt, Dario Amodei is our Mr Cavor here. I can just imagine his response to the first disaster attributed to AI research being prefaced by “one cannot foresee everything, you know…”. And there are more Mr Bedfords out there than you can shake a stick at, trying to sell you anything they can possibly attribute to AI just to keep the whole thing rolling along.

I am with the fog people. The FT seem to be too, with this pair of diagrams attached to this article.

First the US, where there are tentative signs of something which can plausibly serve as a proxy for productivity growth as a result of using AI:

Source: https://www.ft.com/content/d6fdc04f-85cf-4358-a686-298c3de0e25b

And this one for the UK, where there aren’t:

And so it was this foggy sensibility about AI which I took with me to the Bletchley Park Museum last weekend, site of the AI Safety Summit in November 2023 which drew in the US Vice President, Kamala Harris, European Commission President Ursula von der Leyen, Elon Musk, then UK Prime Minister Rishi Sunak, OpenAI’s Sam Altman, Meta’s Nick Clegg and Prof Yann LeCun, Meta’s chief AI scientist, amongst around 100 guests invited to suck their teeth about AI.

The thing that particularly struck me at Bletchley Park is that it demystified the emergence of the computer for me. The forerunner, which was the mechanisation using punch cards of the process of sorting the massive amounts of data the centre was receiving in war time, smacks of a group of people who had just run out of wall to spread their webs of cards and strings across. It was a crime investigation which had got out of hand.

A highlight for me was Alan Turing’s very prescient little note about AI, written in 1940 but anticipating the arguments which would be raging by 2026 (and how poignant that the man who probably did more than anyone to transform what we are able to do by punching a keyboard was chained to one that could only press hunks of metal against a strip of carbon onto a piece of paper):

There is also a hilarious secrecy pledge from the ancestors of the safety summit people, telling you all the ways in which you just need to shut up:

“There is an English proverb none the worse for being seven centuries old:” it thunders.

Wicked tongue breaketh bone,

Though the tongue itself hath none.

Words to live by, I’m sure we’d all agree.

What Bletchley Park was less good at was explaining how the Enigma code was cracked, despite an excellent collection of the hardware involved. For that, I recommend Simon Singh’s The Code Book.

Here was the world’s first “intelligence factory”, scaling up intelligence gathering and analysis as never before and by so doing also changing the way governments would interact with their populations, with just as many implications for our current times as the development of AI. This cluster of huts around a country house rebranded as GCHQ and moved to Cheltenham a few years after World War 2.

Path dependence is a term which describes a situation where past events or decisions constrain later events or decisions. Bletchley Park feels like the Museum of Path Dependence to me.

And the legacy of the safety summit? Well my “hot take” would be: when you are a little lost in the fog, it is generally advisable to slow down a bit and take steps to reduce your risk of breaking things. I wonder if I can get that on a bumper sticker.

In ordering #5, self-driving cars will happily drive you around, but if you tell them to drive to a car dealership, they just lock the doors and politely ask how long humans take to starve to death. Source: https://m.xkcd.com/1613/

To be read to the soundtrack of Bruce Springsteen singing Streets of Minneapolis.

My attention was drawn this week to an article by Dario Amodei, co-founder of Anthropic (a spin-off from OpenAI, which was itself co-founded by Elon Musk and heavily invested in by Microsoft, so very much part of the Magnificent 7 architecture), the creator of the large language model Claude. The article is called The Adolescence of Technology. It is hard to overemphasise how much I disagree with everything Dario has written here, but the article is also useful in that it is long, covers a lot of ground, and allows me to define my views in opposition to it.

The irritations start pretty much straight away. Dario quotes from a science fiction classic (Carl Sagan’s Contact), but then follows this up under the heading of “Avoid doomerism” with this:

…but it’s my impression that during the peak of worries about AI risk in 2023–2024, some of the least sensible voices rose to the top, often through sensationalistic social media accounts. These voices used off-putting language reminiscent of religion or science fiction, and called for extreme actions without having the evidence that would justify them.

Notice the word “sensible” doing the heavy lifting there. Only science fiction endorsed by Dario will be considered. Dario wants us to consider the risks of AI in “a careful and well-considered manner”, which sounds reasonable, but then his 3rd and final bullet under this (after “avoid doomerism” and “acknowledge uncertainty”) goes as follows:

Intervene as surgically as possible. Addressing the risks of AI will require a mix of voluntary actions taken by companies (and private third-party actors) and actions taken by governments that bind everyone. The voluntary actions—both taking them and encouraging other companies to follow suit—are a no-brainer for me. I firmly believe that government actions will also be required to some extent, but these interventions are different in character because they can potentially destroy economic value or coerce unwilling actors who are skeptical of these risks (and there is some chance they are right!).

So he is reflexively anti-regulation of his own industry, of course. And voluntary action by corporations, an approach to solving problems which has repeatedly been demonstrated not to work, is apparently “a no-brainer”. It is also automatically assumed that government actions will destroy value. Only market solutions will be endorsed by Dario, pretty much until they have messed up so badly that you are forced to bring governments in:

To be clear, I think there’s a decent chance we eventually reach a point where much more significant action is warranted, but that will depend on stronger evidence of imminent, concrete danger than we have today, as well as enough specificity about the danger to formulate rules that have a chance of addressing it. The most constructive thing we can do today is advocate for limited rules while we learn whether or not there is evidence to support stronger ones.

There is then the expected sales pitch about what he has seen within Anthropic about the relentless “increase in AI’s cognitive capabilities”. And then the man who warned about sensationalist science fiction is off:

I think the best way to get a handle on the risks of AI is to ask the following question: suppose a literal “country of geniuses” were to materialize somewhere in the world in ~2027. Imagine, say, 50 million people, all of whom are much more capable than any Nobel Prize winner, statesman, or technologist.

And the rest of the article is then off solving this imaginary problem in all its facets, rather than the wealth and power concentration problem that we actually have. The only legislation he seems to be in favour of is something called “transparency legislation”, legislation which of course Anthropic would help to write.

However, after suggesting everything from isolating China and using “AI to empower democracies to resist autocracies” to private philanthropy as the solutions to his imagined problems, Dario finally and reluctantly concludes government intervention might after all be necessary as follows:

…ultimately a macroeconomic problem this large will require government intervention. The natural policy response to an enormous economic pie coupled with high inequality (due to a lack of jobs, or poorly paid jobs, for many) is progressive taxation. The tax could be general or could be targeted against AI companies in particular. Obviously tax design is complicated, and there are many ways for it to go wrong. I don’t support poorly designed tax policies. I think the extreme levels of inequality predicted in this essay justify a more robust tax policy on basic moral grounds, but I can also make a pragmatic argument to the world’s billionaires that it’s in their interest to support a good version of it: if they don’t support a good version, they’ll inevitably get a bad version designed by a mob.

That, by the way, is what Dario thinks of democracy: “a bad version designed by a mob” rather than the “good version” that he and his fellow billionaires could come up with in their own self interest. The mask has really slipped by this point. And the following section, on “Economic concentration of power”, just demonstrates that he has no effective answers at all that he deems acceptable on this. It’s just an inevitability for him.

This is what Luke Kemp’s excellent Goliath’s Curse refers to as a “Silicon Goliath”. Goliaths are dominance hierarchies which spread by dominating the areas around them. They need three conditions (which Luke calls “Goliath fuel”): lootable resources (ie resources which can be easily stolen off someone else), caged land (ie land difficult to escape from) and monopolizable weapons (ie ones which require processes which can be developed to give one society an edge over another). We are all Goliath-dwellers in “The West” now, looting resources from other countries in unequal exchanges which impoverish the Global South, with weapons (eg nuclear weapons) available only to the elite few countries, and operating within the cages of heavily-policed national boundaries. The Silicon Goliath which is developing will have data as its lootable resource, mass surveillance systems providing its cages, and monopolizable weapons such as killer drones. The defences which people like Dario Amodei laughably imagine they have against the resultant killbot hellscapes, through things like Claude’s Constitution, are almost pitiful in their inadequacy.

Nate Hagens takes Dario’s claims for AI’s cognitive capabilities much more seriously than I do, and then considers the risks in a less adolescent way here. As he says:

And here’s what his essay has almost nothing about: energy, water, materials, or ecological limits.

And also nowhere does Dario talk about the 99% of people who are just spectators in his world, other than to describe them as “the mob”. This is quite a blind spot, as Luke Kemp points out in his exhaustive study of the collapses of “Goliaths” over the last 5,000 years. “The extreme levels of inequality” predicted by Amodei in his essay are not just things we have to put up with, but the reasons the world he predicts is likely to be hugely unstable. Not created by AI, but accelerated by it. Kemp describes it as “diminishing returns on extraction”:

We see a pattern re-emerging across case studies. Societies grow more fragile over time and more prone to collapse. Threats that they had always faced such as invaders, disease and drought seem to take a heavier toll.

As societies grew bigger:

They still faced the underlying (and ongoing) problem of rising inequality creating societies where power was more concentrated and institutions more extractive.

And eventually:

The result is more extractive institutions creating growing instability, internal conflict, a drain of resources away from government, state capture by private elites, and worse decision-making. Society – especially the state – becomes more fragile. Private elites tend to take a larger share of extractive benefits. The state, and many of the power structures it helps prop up, then usually falls apart once a shock hits: for Rome it was climate change, disease, and rebelling Germanic mercenaries; for China it was often floods, droughts, disease and horseback raiders; for the west African kingdoms it was invaders and a loss of trade; for the Maya it was drought and a loss of trade; and for the Bronze Age it was drought, a disruption of trade and an earthquake storm.

The only real answer to combatting existential risks in the hands of adolescents like the Tech Bros is more democracy: control over decision-making, over resources, over the threat of violence and over information. We are a long way from achieving these within our own particular Goliath at the moment, and indeed there is no sign at all that our elites are interested in achieving them. The Magnificent 7 are propping up the US stock exchange. The promise of perpetual economic growth is the progress myth of our time, and leaders who do not provide it will lose the “Mandate of Heaven” in just the same way as Chinese rulers did when they were unable to prevent floods and droughts. Adam Tooze sees signs of the inner demons of our elites starting to detach them from reality in the latest disclosures from the Epstein files:

Are we, like [Larry] Summers, fantasizing about stabilizing our desires and needs in an inherently dangerous and uncertain world? Are we kidding ourselves?

But, without those controls in place, we would need a lot more than Dario’s Anthropic playing nicely to allow this particular adolescent to grow up. And this is where I am forced to take Nate Hagens’ assessment more seriously. Because if our rulers’ Mandates of Heaven are dependent on eternal economic growth on their watch and they, rightly, think that this is not possible in our current non-AI-enhanced world but, wrongly, think it is possible in a future AI-enhanced world, then that is the way they are going to demand we go. And, if the Larry Summers fantasists really are kidding themselves, it may be very hard to talk them out of it.

Source: https://dictionary.cambridge.org/dictionary/english/tldr

So, as a way of signing off until next year, I thought I would write something short about length.

My first job was with De La Rue and, specifically, within their print division, which was still named after the original founder, sometime straw hat and playing card manufacturer and Guernseyman, Thomas De La Rue. Or TDLR for short. As a result, I can never see tl;dr written anywhere (and it does seem to be everywhere on social media these days, as the amount of written material to work our way through becomes ever more overwhelming) without thinking of my first years of employment. That momentary distraction is, I assume, the complete opposite of what tl;dr is usually designed for, which is to help you understand something you don’t have time to read.

It feels like there is a shift happening in the etiquette of social media on this. Only recently I saw a response to a piece which was not particularly long which started “Don’t have time to read but probably agree as follows…”. This seemed rude to me but perhaps I am being old-fashioned about this. Because there are a lot of writers now where I am regularly skimming them or only reading the first halves of their articles. Writers who often have a really good point, but appear to want to say it in as many different ways as possible, nailing every single example imaginable for completeness. But really, who values completeness? I think what we are looking for is careful selection from someone who knows something we don’t about the terrain and who can therefore guide us through at least a swamp or two before leaving us to the next writer. If we wanted completeness, we could stumble into every sink hole for ourselves.

I did a mini review for Service Model by Adrian Tchaikovsky as a blog post recently which got the following response from the author which I was very chuffed about:

Fascinating (and spot on) little essay on Service Model and how it relates to the real world.

My wife (the one who calls me Swampy Dave sometimes) said “aren’t you a little insulted by the reference to a ‘little essay’?” and I realised that I wasn’t at all. Quite the reverse. I had managed to say something which had a point to it and which others could understand and all within 850 words. If I had to encapsulate why I blog in a sentence that would be it.

Returning to Tchaikovsky, he arranges his books on his website into novels, novellas, shorts and free. People appear to differ about how long each form should be, but Tchaikovsky described a novella as having a beginning and an end but no middle (section 6 of the interview here), tending to pursue one idea to its logical conclusion. A short story took him a week to write. Everything else is a novel.

Definitions vary; this source defined the different forms as follows:

  • Flash fiction: under 1000 words (although a lot of competitions stipulate maximum 500 words)
  • Short stories: 3,500-7,500 words
  • Novelette (yes I know! I hadn’t heard of this before either!): 7,500-17,000 words
  • Novella: 17,000-40,000 words
  • Novel: 40,000 words plus
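Purely for fun, the bands above can be turned into a tiny classifier. This is a sketch only: the listed bands leave gaps (eg between 1,000 and 3,500 words), which it smooths over by treating each upper bound as the cutoff.

```python
def classify_fiction(word_count: int) -> str:
    """Classify a piece of fiction by word count, using the bands
    listed above. Boundaries are approximate and sources differ;
    gaps between the listed bands are smoothed over here."""
    if word_count < 1_000:
        return "flash fiction"
    elif word_count <= 7_500:
        return "short story"
    elif word_count <= 17_000:
        return "novelette"
    elif word_count <= 40_000:
        return "novella"
    else:
        return "novel"

# War and Peace, at 561,304 words, is comfortably a novel
print(classify_fiction(561_304))  # novel
```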

And then this other source helpfully listed the word count for 175 famous books.

Growing up I regarded War and Peace (finally slogged through in the late 80s) as the ultimate long book but, at 561,304 words, it is not even close to being the longest, which appears to be Proust’s In Search of Lost Time or A la recherche du temps perdu (1,267,069 words in English, or slightly fewer in the original French), although it was originally published in 7 volumes. Meanwhile HG Wells’ The Time Machine, Orwell’s Animal Farm and Steinbeck’s Of Mice and Men are officially defined as novellas, despite being, in the view of many, some of the most important books ever written.

I am quite a slow reader, which is perhaps why the question of book length seems to be bothering me so much. I have therefore decided to try and restrict myself to novellas and shorter fiction for 2026 (although the non-fiction is likely to be as long as ever, until the concept of non-fictionella is embraced if ever!) in order to read a wider range of writers. Might also mean there are more book reviews here next year!

Have a great Christmas everyone! See you in 2026!

So this is my 42nd blog post of the year and the 8th in which I have referenced Cory Doctorow. I thought it was more, to be honest, so influential has he been on my thinking, particularly as I have delved deeper into what, how and why the AI Rush is proceeding and what it means for the people exiting universities over the next few years.

Yesterday Cory published a reminder of his book reviews this year. He is an amazing book reviewer. There are 24 on the list this year, and I want to read every one of them on the strength of his reviews alone.

I would like to repay the compliment by reviewing his latest book: Enshittification (the other publication this year, Picks and Shovels, is also well worth your time by the way). I can’t believe this wasn’t the word of the year rather than rage bait, as it explains considerably more about the times we are living in.

I have been a fan of Doctorow for a couple of years now. I had had Walkaway sat on my shelves for a few years before I read it, and was immediately enthralled by his tale of a post-scarcity future which had still somehow descended into an inter-generational power struggle hellscape. I moved on to the Little Brother books, now being reenacted by Trump with his ICE force in one major US city after another. I followed those up with The Lost Cause, where the teenagers try desperately to bridge the gap across the generations with MAGA people, with tragic results along the way but a grim determination at the end: “the surest way to lose is to stop running”. From there I migrated to the Marty Hench thrillers, his non-fiction The Internet Con (which details the argument for interoperability, ie the ability of any platform to interact with another) and his short fiction (I loved Radicalised, not just for the grimly prophetic Radicalised novella in the collection, but also the gleeful insanity of Unauthorised Bread). I highly recommend them all.

I came to Enshittification after reading his Pluralistic blog most days for the last year and a half, so I was initially disappointed to find very little new as I started working my way through it. However, what the first two parts – The Natural History and The Pathology – offer is a patient explanation of the concept of enshittification and how it operates, assuming no previous engagement with the term, all in one place.

Enshittification, as defined by Cory Doctorow, proceeds as follows:

  1. First, platforms are good to their users.
  2. Then they abuse their users to make things better for their business customers.
  3. Next, they abuse those business customers to claw back all the value for themselves.
  4. Finally, they have become a giant pile of shit.

So far, so familiar. But then I got to Part Three, explaining The Epidemiology of enshittification, and the book took off for me. The erosion of antitrust (what we would call competition) law since Carter. “Antitrust’s Vietnam” (how Robert Bork described the 12 years IBM fought and outspent the US Department of Justice, year after year, defending their monopolisation case) until Reagan became President. How this led to an opening to develop the operating system for IBM when it entered the personal computer market. How this led to Microsoft, etc. Then how the death of competition also killed Big Tech regulation (regulating a competitive market, where competition itself works against collusion, is much easier than regulating one with a small number of big players which absolutely will collude with each other).

And then we get to my favourite chapter of the book “Reverse-Centaurs and Chickenisation”. Any regular reader of this blog will already be familiar with what a reverse centaur is, although Cory has developed a snappy definition in the process of writing this book:

A reverse-centaur is a machine that uses a human to accomplish more than the machine could manage on its own.

And if that isn’t chilling enough for you, the description of the practices of poultry packers, how they control the lives of the nominally self-employed chicken farmers of the US, and how these practices have now been exported to companies like Amazon and Arise and Uber, should certainly be. One rare moment of hilarity in a generally sordid tale of utter exploitation: the prankster who collected up the bottled piss of Amazon drivers who weren’t allowed a loo break and resold it on Amazon’s own platform as “a bitter lemon drink” called Release Energy. Amazon recategorised it as a beverage without asking for any documentation to prove it was fit to drink and then, when it was so successful it topped their sales chart, rang the prankster up to discuss using Amazon for shipping and fulfillment. My favourite bit is when Cory gets on to the production of his own digital rights management (DRM) free audio versions of his own books.

The central point of the DRM issue is, as Cory puts it, “how perverse DMCA 1201 is”:

If I, as the author, narrator, and investor in an audiobook, allow Amazon to sell you that book and later want to provide you with a tool so you can take your book to a rival platform, I will be committing a felony punishable by a five-year prison sentence and a $500,000 fine.

To put this in perspective: If you were to simply locate this book on a pirate torrent site and download it without paying for it, your penalty under copyright law is substantially less punitive than the penalty I would face for helping you remove the audiobook I made from Amazon’s walled garden. What’s more, if you were to visit a truck stop and shoplift my audiobook on CD from a spinner rack, you would face a significantly lighter penalty for stealing a physical item than I would for providing you with the means to take a copyrighted work that I created and financed out of the Amazon ecosystem. Finally, if you were to hijack the truck that delivers that CD to the truck stop and steal an entire fifty-three-foot trailer full of audiobooks, you would likely face a shorter prison sentence than I would for helping you break the DRM on a title I own.

DMCA 1201 is the big brake on interoperability. It is the reason why, if you have an HP printer, you have to pay $10,000 a gallon for ink or risk committing a criminal offence by “circumventing an access control” (the software HP have installed on their printers to stop you using anyone else’s printer cartridges). And it is the reason for the increasing insistence on computer chips in everything from toasters (see “Unauthorised Bread” for where this could lead) to wheelchairs – so that using them in ways the manufacturer and its shareholders disapprove of becomes illegal.

The one last bastion against enshittification by Big Tech was the tech workers themselves. Then the US tech sector laid off 260,000 workers in 2023 and a further 100,000 in the first half of 2024.

In case you are feeling a little depressed (and hopefully very angry too) at this stage, Part Four is called The Cure. It details the four forces that can discipline Big Tech, and how they can all be revived, namely:

  1. Competition
  2. Regulation
  3. Interoperability
  4. Tech worker power

As Cory concludes the book:

Martin Luther King Jr once said, “It may be true that the law cannot make a man love me, but it can stop him lynching me, and I think that’s pretty important, also.”

And it may be true that the law can’t force corporate sociopaths to conceive of you as a human being entitled to dignity and fair treatment, and not just an ambulatory wallet, a supply of gut bacteria for the immortal colony organism that is a limited liability corporation.

But it can make that exec fear you enough to treat you fairly and afford you dignity, even if he doesn’t think you deserve it.

And I think that’s pretty important.

I was reading Enshittification on the train journey back from Hereford after visiting the Hay Winter Weekend, where I had listened to, amongst others, the oh-I’m-totally-not-working-for-Meta-any-more-but-somehow-haven’t-got-a-single-critical-word-to-say-about-them former Deputy Prime Minister Nick Clegg. While I was on the train, a man across the aisle had taken the decision to conduct a conversation with first Google and then Apple on speakerphone. A particular highlight was him just shouting “no, no, no!” at Google’s bot trying to give him options. He had already been to the Vodafone shop that morning and was on his way to an appointment which he couldn’t get at the Apple Store on New Street in Birmingham. He spotted the title of my book and, when I told him what enshittification meant, and how it might make some sense out of the predicament he found himself in, took a photo of the cover.

My feeling is that enshittification goes beyond Big Tech. It is the defining industrial battle of our times. We shouldn’t primarily worry about whether it is coming from the private or the public sector, as enshittification can happen in both places: from hollowing out justice to “paying more for medicines… at the exact moment we can’t afford to pay enough doctors to prescribe them” in the public sector, where we already reside within the Government’s walled garden, to all of the outrages mentioned above and more in the private sector.

The PFI local health hubs set out in last week’s budget take us back to perhaps the ultimate enshittificatory contracts the Government ever entered into, certainly before the pandemic. The Government got locked into 40-year contracts, took all the risk, and all the profit was privatised. The turbo-charging of the original PFI came out of the Blair-Brown government’s mania for keeping capital spending off the balance sheet in defence of Gordon Brown’s “Golden Rule”, which has now been replaced by Rachel Reeves’ equally enshittifying fiscal rules. All the profits (or, increasingly, rents, as Doctorow discusses in the chapter on Varoufakis’ concept of Technofeudalism) from turning the offer to shit always seem to end up in the private sector. The battle is against enshittification from both private monopolies and, by proxy, public ones.

Enshittification is, ultimately, a positive and empowering book which I strongly recommend you buy, avoiding Amazon if you can. We can have a better internet than this. We can strike a better deal with Big Tech over how we run our lives. But the surest way to lose is to stop running.

And next time a dead-eyed Amazon driver turns up at your door, be nice: they are probably having a worse day than you are.

In 2017, I was rather excitedly reporting about ideas which were new to me at the time regarding how technology or, as Richard and Daniel Susskind referred to it in The Future of the Professions, “increasingly capable machines” were going to affect professional work. I concluded that piece as follows:

The actuarial profession and the higher education sector therefore need each other. We need to develop actuaries of the future coming into your firms to have:

  • great team working skills
  • highly developed presentation skills, both in writing and in speech
  • strong IT skills
  • clarity about why they are there and the desire to use their skills to solve problems

All within a system which is possible to regulate in a meaningful way. Developing such people for the actuarial profession will need to be a priority in the next few years.

While all of those things are clearly still needed, it is becoming increasingly clear to me now that they will not be enough to secure a job as industry leaders double down.

Source: https://www.ft.com/content/99b6acb7-a079-4f57-a7bd-8317c1fbb728

And perhaps even worse than the threat of not getting a job immediately following graduation is the threat of becoming a reverse-centaur. As Cory Doctorow explains the term:

A centaur is a human being who is assisted by a machine that does some onerous task (like transcribing 40 hours of podcasts). A reverse-centaur is a machine that is assisted by a human being, who is expected to work at the machine’s pace.

We have known about reverse-centaurs since at least Charlie Chaplin’s Modern Times in 1936.

By Charlie Chaplin – YouTube, Public Domain, https://commons.wikimedia.org/w/index.php?curid=68516472

Think Amazon driver or worker in a fulfilment centre, sure, but now also think of highly competitive and well-paid but still ultimately human-in-the-loop kinds of roles, being responsible for AI systems designed to produce output where errors are hard to spot and therefore to stop. In the latter role you are the human scapegoat: in the phrasing of Dan Davies an “accountability sink”, or in that of Madeleine Clare Elish a “moral crumple zone”, all rolled into one. This is not where you want to be as an early career professional.

So how to avoid this outcome? Well, obviously, if you have options other than roles where a reverse-centaur situation is unavoidable, you should take them. Questions to ask at interview to identify whether a role is irretrievably reverse-centauresque would be of the following sort:

  1. How big a team would I be working in? (This might not identify a reverse-centaur role on its own: you might be one of a bank of reverse-centaurs all working in parallel and identified “as a team” while in reality having little interaction with each other).
  2. What would a typical day be in the role? This should smoke it out unless the smokescreen they put up obscures it. If you don’t understand the first answer, follow up to get specifics.
  3. Who would I report to? Get to meet them if possible. Establish whether they are a technical expert in the field you will be working in. If they aren’t, that means you are!
  4. Speak to someone who has previously held the role if possible. Although bear in mind that, if it is a true reverse-centaur role and their progress to an actual centaur role is contingent on you taking this one, they may not be completely forthcoming about all of the details.

If you have been successful in a highly competitive recruitment process, you may have a little bit of leverage before you sign the contract, so if there are aspects which you think still need clarifying, then that is the time to do so. If you recognise some reverse-centauresque elements from your questioning above, but you think the company may be amenable, then negotiate. Once you are in, you will understand a lot more about the nature of the role of course, but without threatening to leave (which is as damaging to you as an early career professional as it is to them) you may have limited negotiation options at that stage.

In order to do this successfully, self knowledge will be key. It is that point from 2017:

  • clarity about why they are there and the desire to use their skills to solve problems

To that word skills I would now add “capabilities” in the sense used in a wonderful essay on this subject by Carlo Iacono called Teach Judgement, Not Prompts.

You still need the skills. So, for example, if you are going into roles where AI systems are producing code, you need sufficiently good coding skills yourself to write a program to check the code written by the AI system. If the AI system is producing communications, your own communication skills need to go beyond producing work that communicates effectively to an audience, to the next level where you understand what it is about your own communication that achieves that: what is necessary, what is unnecessary, and what gets in the way of effective communication, ie all of the things that the AI system is likely to get wrong. Then you have a template against which to assess the output from an AI system, and for designing better prompts.
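To make the idea of a program that checks AI-written code concrete, here is a minimal sketch in Python. Everything in it is my own illustrative invention rather than anything from the post: a stand-in for an AI-generated sorting function, and property checks that the human reviewer writes independently of the AI.

```python
import random

def ai_generated_sort(xs):
    # Stand-in for code an AI system produced; in practice you would
    # import the AI's actual output rather than write it yourself.
    return sorted(xs)

def check_sort(fn, trials=1000):
    """Property checks written by the human reviewer, independently of
    the AI: the output must be ordered, and must contain exactly the
    same elements as the input."""
    for _ in range(trials):
        xs = [random.randint(-100, 100) for _ in range(random.randint(0, 20))]
        out = fn(xs)
        assert all(a <= b for a, b in zip(out, out[1:])), "output not ordered"
        assert sorted(xs) == sorted(out), "elements added or lost"
    return True

print(check_sort(ai_generated_sort))
```

The point is the division of labour: the checks encode your own understanding of what correct output looks like, so they remain useful however the AI-generated implementation changes underneath them.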

However, specific skills and tools come and go, so you need to develop something more durable alongside them. Carlo has set out four “capabilities” as follows:

  1. Epistemic rigour, which is being very disciplined about challenging what we actually know in any given situation. You need to be able to spot when AI output is over-confident given the evidence, or when a correlation is presented as causation. What my tutors used to refer to as “hand waving”.
  2. Synthesis is about integrating different perspectives into an overall understanding. Making connections between seemingly unrelated areas is something AI systems are generally less good at than analysis.
  3. Judgement is knowing what to do in a new situation, beyond obvious precedent. You get to develop judgement by making decisions under uncertainty, receiving feedback, and refining your internal models.
  4. Cognitive sovereignty is all about maintaining your independence of thought when considering AI-generated content. Knowing when to accept AI outputs and when not to.

All of these capabilities can be developed with reflective practice, getting feedback and refining your approach. As Carlo says:

These capabilities don’t just help someone work with AI. They make someone worth augmenting in the first place.

In other words, if you can demonstrate these capabilities, companies who themselves are dealing with huge uncertainty about how much value they are getting from their AI systems and what they can safely be used for will find you an attractive and reassuring hire. Then you will be the centaur, using the increasingly capable systems to improve your own and their productivity while remaining in overall control of the process, rather than a reverse-centaur for which none of that is true.

One sure sign that you are straying into reverse-centaur territory is when a disproportionate amount of your time is spent on pattern recognition (eg basing an email/piece of coding/valuation report on an earlier email/piece of coding/valuation report dealing with a similar problem). That approach was always predicated on being able to interact with a more experienced human who understood what was involved in the task at some peer review stage. But it falls apart when there is no human to discuss the earlier piece of work with, because the human no longer works there, or a human didn’t produce the earlier piece of work. The fake it until you make it approach is not going to work in environments like these where you are more likely to fake it until you break it. And pattern recognition is something an AI system will always be able to do much better and faster than you.

Instead, question everything using the capabilities you have developed. If you are going to be put into potentially compromising situations in terms of the responsibilities you are implicitly taking on, the decisions needing to be made and the limitations of the available knowledge and assumptions on which those decisions will need to be based, then this needs to be made explicit, to yourself and the people you are working with. Clarity will help the company which is trying to use these new tools in a responsible way as much as it helps you. Learning is going to be happening for them as much as it is for you here in this new landscape.

And if the company doesn’t want to have these discussions or allow you to hamper the “efficiency” of their processes by trying to regulate them effectively? Then you should leave as soon as you possibly can professionally and certainly before you become their moral crumple zone. No job is worth the loss of your professional reputation at the start of your career – these are the risks companies used to protect their senior people of the future from, and companies that are not doing this are clearly not thinking about the future at all. Which is likely to mean that they won’t have one.

To return to Cory Doctorow:

Science fiction’s superpower isn’t thinking up new technologies – it’s thinking up new social arrangements for technology. What the gadget does is nowhere near as important as who the gadget does it for and who it does it to.

You are going to have to be the generation who works these things out first for these new AI tools. And you will be reshaping the industrial landscape for future generations by doing so.

And the job of the university and further education sectors will increasingly be to equip you with both the skills and the capabilities to manage this process, whatever your course title.

Source: https://pluspng.com/img-png/mixed-economy-png–901.png

Just type “mixed economy graphic” into Google and you will get a lot of diagrams like this one – note that they normally have to pick out the United States for special mention. Notice the big gap between those countries – North Korea, Cuba, China and Russia – and us. It is a political statement masquerading as an economic one.

This same line is used to describe our political options. The Political Compass added an authoritarian/libertarian axis in their 2024 election manifesto analysis but the line from left to right (described as the economic scale) is still there:

Source: https://www.politicalcompass.org/uk2024

So here we are on our political and economic spectrum, where tiny movements between the very clustered Reform, Conservative, Labour and Liberal Democrat positions fill our newspapers and social media comment. The Greens and, presumably if it ever gets off the ground, Your Party are seen as so far away from the cluster that they often get left out of our political discourse. It is an incredibly narrow perspective and we wonder why we are stuck on so many major societal problems.

This is where we have ended up following the “slow singularity” of the Industrial Revolution I talked about in my last post. Our politics coalesced into one gymnast’s beam, supported by the hastily constructed Late Modern English fashioned for this purpose in the 1800s, along which we have all been dancing ever since, between the market information processors at the “right” end and the bureaucratic information processors at the “left” end.

So what does it mean for this arrangement if we suddenly introduce another axis of information processing, ie the large language AI models? I am imagining something like this:

What will this mean for how countries see their economic organisation? What will it mean for our politics?

In 1884, the English theologian, Anglican priest and schoolmaster Edwin Abbott Abbott published a satirical science fiction novella called Flatland: A Romance of Many Dimensions. Abbott’s satire was about the rigidity of Victorian society, depicted as a two-dimensional world inhabited by geometric figures: women are line segments, while men are polygons with various numbers of sides. We are told the story from the viewpoint of a square, which denotes a gentleman or professional. In this world three-dimensional shapes are clearly incomprehensible, with every attempt to introduce new ideas from this extra dimension considered dangerous. Flatland is not prepared to receive “revelations from another world”, as it describes anything existing in the third dimension, which is invisible to them.

The book was not particularly well received and fell into obscurity until it was embraced by mathematicians and physicists in the early 20th century, as the concept of spacetime was being developed by Poincaré, Einstein and Minkowski amongst others. And what now looks like a prophetic analysis of the limitations of the gymnast’s beam economic and political model of the slow singularity has continued to not catch on at all.

However, much as with Brewster’s Millions, the incidence of film adaptations of Flatland gives some indication of when the idea has come back to some extent. It wasn’t until 1965 that someone thought it was a good idea to make a movie of Flatland, and no one else attempted it until an Italian stop-motion film in 1982. There were then two attempts in 2007, which I can’t help but think of as a comment on the developing financial crisis at the time. Finally, in 2012, there was an adaptation based on Bolland: een roman van gekromde ruimten en uitdijend heelal (which translates as Sphereland: A Fantasy About Curved Spaces and an Expanding Universe), Dionys Burger’s 1957 Dutch sequel to Flatland, which didn’t get translated into English until 1965, when the first animated film came out.

So here we are, with a new approach to processing information and language to sit alongside the established processors of the last 200 years or more. Will it perhaps finally be time to abandon Flatland? And if we do, will it solve any of our problems or just create new ones?

I have just been reading Adrian Tchaikovsky’s Service Model. I am sure I will think about it often for years to come.

Imagine a world where “Everything was piles. Piles of bricks and shattered lumps of concrete and twisted rods of rebar. Enough fine-ground fragments of glass to make a whole razory beach. Shards of fragmented plastic like tiny blunted knives. A pall of ashen dust. And, to this very throne of entropy, someone had brought more junk.”

This is Earth outside a few remaining enclaves. And all served by robots, millions of robots.

Robots: like our protagonist (although he would firmly resist such a designation) Uncharles, who has been programmed to be a valet, or gentleman’s gentlerobot; or librarians tasked with preserving as much data from destruction or unauthorised editing as possible; or robots preventing truancy from the Conservation Farm Project where some of the few remaining humans are conscripted to reenact human life before robots; or the fix-it robots; or the warrior robots prosecuting endless wars.

Uncharles, after slitting the throat of his human master for no reason that he can discern, travels this landscape with his hard-to-define and impossible-to-shut-up companion The Wonk, who is very good at getting into places but often not so good at extracting herself. Until they finally arrive in God’s waiting room and take a number.

Along the way The Wonk attempts to get Uncharles to accept that he has been infected with a Protagonist Virus, which has given Uncharles free will. And Uncharles finds his prognosis routines increasingly unhelpful to him as he struggles to square the world he is perambulating with the internal model of it he carries inside him.

The questions that bounce back between our two unauthorised heroes are many and various, but revolve around:

  1. Is there meaning beyond completing your task list or fulfilling the function for which you were programmed?
  2. What is the purpose of a gentleman’s gentlerobot when there are no gentlemen left?
  3. Is the appearance of emotion in some of Uncharles’ actions and communications really just an increasingly desperate attempt to reduce inefficient levels of processing time? Or is the Protagonist Virus an actual thing?

Ultimately the question is: what is it all for? And when they finally arrive in front of God, the question is thrown back at us, the pile of dead humans rotting across the landscape of all our trash.

This got me thinking about a few things in a different way. One of these was AI.

Suppose AI is half as useful as OpenAI and others are telling us it will be. Suppose that we can do all of these tasks in less than half the time. How is all of that extra time going to be distributed? In 1930 Keynes speculated that his grandchildren would only need to work a 15-hour week. And all of the productivity improvements he assumed in doing so have happened. Yet still full-time work remains the aspiration.
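Keynes’s arithmetic is worth a quick check. A rough sketch, assuming around 2% annual productivity growth compounding over the century and a 48-hour full-time week in 1930; both figures are my illustrative assumptions, broadly in the spirit of Economic Possibilities for our Grandchildren, not numbers from the post:

```python
# Rough check of the Keynes arithmetic: compound ~2% annual
# productivity growth over 100 years (illustrative assumption).
growth = 1.02 ** 100               # output per hour, 2030 vs 1930
week_1930 = 48                     # assumed full-time week in 1930
week_needed = week_1930 / growth   # hours needed to match 1930 output

print(round(growth, 1))            # roughly a seven-fold rise
print(round(week_needed, 1))       # comfortably under Keynes's 15 hours
```

Even with more conservative assumptions the conclusion survives: the productivity gains would allow something like a 15-hour week at 1930 living standards, which is what makes the question of where the hours (or the money) went so pointed.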

There certainly seems to have been a change of attitude from around 1980 onwards, with those who could choose choosing to work longer, for various reasons which economists are still arguing about, and therefore the hours lost were from those who couldn’t choose, as The Resolution Foundation have pointed out. Unfortunately neither their pay nor their quality of work has increased sufficiently for those hours to meet their needs.

So, rather than asking where the hours have gone, it probably makes more sense to ask where the money has gone. And I think we all know the answer to that one.

When Uncharles and The Wonk finally get in to see God, God gives an example of a seat designed to stop vagrants sleeping on it as the indication it needed of the kind of society humans wanted. One where the rich wanted not to have to see or think about the poor. Replacing all human contact with eternally indefatigable and keen-to-serve robots was the world that resulted.

Look at us clever humans, constantly dreaming of ways to increase our efficiency, to remove inefficient human interaction, or indeed any interaction which cannot be predicted in advance. Uncharles’ seemingly emotional responses, when he rises above the sea of task-queue-clutching robots all around him, are to what he sees as inefficiency. But what should be the goal? Increasing GDP can’t be it; that is just another means. We are currently working extremely hard, and using a huge proportion of news and political affairs airtime and focus, on turning the English Channel into the seaborne equivalent of the seat where vagrants and/or migrants cannot rest.

So what should be the goal? Because the reason Service Model will stay with me for some time to come is that it shows us what happens if we don’t have one. The means take over. It seems appropriate to leave the last word to a robot.

“Justice is a human-made thing that means what humans wish it to mean and does not exist at all if humans do not make it,” Uncharles says at one point. “I suggest that ‘kind and ordered’ is a better goal.”

In a previous post, I mentioned the “diamond model” that accountancy firms are reportedly starting to talk about. The impact so far looks pretty devastating for graduates seeking work:

And then by industry:

Meanwhile, Microsoft have recently produced a report into the occupational implications of generative AI, and their top 40 vulnerable roles look like this (look at where data scientist, mathematician and management analyst sit – all noticeably more replaceable by AI than “model”, which caused all the headlines when Vogue did it last week):

So this looks like a process well underway rather than a theoretical one for the future. But I want to imagine a few years ahead. Imagine that this process has continued to gut what we now regard as entry level jobs and that the warning of Dario Amodei, CEO of AI company Anthropic, that half of “administrative, managerial and tech jobs for people under 30” could be gone in 5 years, has come to pass. What then?

Well, this is where it gets interesting (for some excellent speculative fiction about this, the short story Human Resources and the novel Service Model by Adrian Tchaikovsky will certainly give you something to think about), because there will still be a much smaller number of jobs in these roles. They will be very competitive. Perhaps we will see FBI-style recruitment processes becoming more common for the rarefied few, probably administered by the increasingly capable systems I discuss below. They will be paid a lot more. However, as Cory Doctorow describes here, the misery of being the human in the loop for an AI system designed to produce output where errors are hard to spot and therefore to stop (Doctorow calls these humans “reverse centaurs”, ie the human has become the horse part) includes being the ready-made scapegoat (or “moral crumple zone” or “accountability sink”) for when the systems are inevitably used to overreach what they are programmed for and produce something terrible. The AI system is no longer working for you as some “second brain”. You are working for it, but no company is going to blame the very expensive AI system that they have invested in when there is a convenient and easily-replaceable (remember how hard these jobs will be to get) human candidate to take the fall. And it will be assumed that people will still do these jobs, reasoning that it is the only route to highly paid and more secure jobs later, or that they will be able to retire at 40, as the aspiring Masters of the Universe (the phrase coined by Tom Wolfe in The Bonfire of the Vanities) in the City of London have been telling themselves since the 1980s – only this time surrounded by robot valets, no doubt.

But a model where all the gains go to people from one, older, generation at the expense of another, younger, generation depends on there being reasonable future prospects for that younger generation or some other means of coercing them.

In their book, The Future of the Professions, Daniel and Richard Susskind talk about the grand bargain. It is a form of contract, but, as they admit:

The grand bargain has never formally been reduced to writing and signed, its terms have never been unambiguously and exhaustively articulated, and no one has actually consented expressly to the full set of rights and obligations that it seems to lay down.

Atul Gawande memorably expressed the grand bargain for the medical profession (in Better) as follows:

The public has granted us extraordinary and exclusive dispensation to administer drugs to people, even to the point of unconsciousness, to cut them open, to do what would otherwise be considered assault, because we do so on their behalf – to save their lives and provide them comfort.

The Susskinds questioned (in 2015) whether this grand bargain could survive a future of “increasingly capable systems” and suggested a future when all 7 of the following models were in use:

  1. The traditional model, ie the grand bargain as it works now. Human professionals providing their services face-to-face on a time-cost basis.
  2. The networked experts model. Specialists work together via online networks. BetterDoctor would be an example of this.
  3. The para-professional model. The para-professional has had less training than the traditional professional but is equipped by their training and support systems to deliver work independently within agreed limits. The medical profession’s battle with this model has recently given rise to the Leng Review.
  4. The knowledge engineering model. A system is made available to users, including a database of specialist knowledge and the modelling of specialist expertise based on experience in a form that makes it accessible to users. Think tax return preparation software or medical self-diagnosis online tools.
  5. The communities of experience model, eg Wikipedia.
  6. The embedded knowledge model. Practical expertise built into systems or physical objects, eg intelligent buildings which have sensors and systems that test and regulate the internal environment of a building.
  7. The machine-generated model. Here practical expertise is originated by machines rather than by people. This book was written in 2015 so the authors did not know about large language models then, but these would be an obvious example.

What all of these alternative models had in common of course was the potential to no longer need the future traditional model professional.

There is another contract which has never been written down: that between the young and the old in society. Companies are jumping the gun on how the grand bargain is likely to be re-framed and adopting systems before all of the evidence is in. As Doctorow said in March (ostensibly about Musk’s DOGE when it was in full firing mode):

AI can’t do your job, but an AI salesman (Elon Musk) can convince your boss (the USA) to fire you and replace you (a federal worker) with a chatbot that can’t do your job

What strikes me is that the boss in question is generally at least 55. As one consultancy has noted:

Notably, the youngest Baby Boomers turned 60 in 2024—the average age of senior leadership in the UK, particularly for non-executive directors. Executive board directors tend to be slightly younger, averaging around 55.

Assume there was some kind of written contract between young and old that gave the older generation the responsibility to be custodian of all of the benefits of living in a civilised society while they were in positions of power so that life was at least as good for the younger generation when they succeeded them.

Every time a Baby Boomer argues that the state pension age must increase because “we” cannot afford it, he or she is arguing both that the worker who will then be paying for his or her pension should continue to do so and that they should accept a delay in when they will get their quid pro quo, with no risk that the changes will be applied to the Boomer, as all changes are flagged many years in advance. That contract would clearly be in breach. Every Boomer graduate from more than 35 years ago who argues for the cost of student loans to increase, when they never paid for theirs, would break such a contract. Every Boomer homeowner who argues against any measure which might moderate the house price inflation from which they benefit in increased equity would break such a contract. And of course any such contract worth its name would require strenuous efforts to limit climate change.

And when a Boomer removes a graduate job to temporarily support their share price (so-called rightsizing) in favour of a necessarily not-yet-fully-tested system (by which I mean not just testing the software but also all of the complicated network of relationships required to make any business operate successfully), the impact of that temporary inflation of the share price on executive bonuses is being valued much more highly than both the future of the business and the future of the generation that will be needed to run it.

This is not embracing the future so much as selling a futures contract before setting fire to the actual future. And that is not a contract so much as an abusive relationship between the generations.

This is the 200th post from this blog, so I want to talk about The Future.

The Planetary Solvency Dashboard https://global-tipping-points.org/risk-dashboard/

No. Not that future. Scary though it is.

I want to talk about The Future by Naomi Alderman. I read it last year, after wandering around the Hay Festival bookshop moaning that they don’t do science fiction and then coming across Naomi’s book and realising I had just missed her being interviewed. Then I watched the interview and bought both The Future and The Power (which I will talk about at some future date, but which is equally terrific).

The book is about Lenk Sketlish, CEO of the Fantail social network, Zimri Nommik, CEO of the logistics and purchasing giant Anvil, Ellen Bywater, CEO of Medlar Technologies, the world's most profitable personal computing company, and the people working for them, and the people linked with those people. Zimri, Ellen and Lenk are at least as monstrous as Jeff, Sundar, Elon, Tim and Mark. And they are all preparing for the end of the world.

(If you need to remind yourself what Elon, Jeff, Mark and Sundar all look like milling around, below is a link to Trump’s inauguration:

https://apnews.com/video/jeff-bezos-district-of-columbia-elon-musk-inaugurations-united-states-government-486ab2a989e94aaa8c9afec15bebeb51)

Anvil is set up with alerts for signs of the end of the world being reported anywhere: giant hailstones, a plague of locusts, Mpox, a rain of blood (which turned out to be a protest for menstrual equity, involving blood-soaked tampons being thrown at Lenk and co as they emerged from a courthouse in Washington). The information Zimri, Ellen and Lenk have on everybody else in the world makes them feel all-seeing, all-hearing, all-knowing. Combined with riches unknown to anyone before in history, it makes them feel invulnerable, even to the end of the world, even to each other. Which turns out, of course, to be their decisive vulnerability.

It takes in survivalism and religious cults, and wraps it all up in a thriller plot which is absolutely the kind of science fiction you want to be reading now, instead of listening out for the latest antics of the horse in the hospital. And it was all written over a year before Elon even started with DOGE. The Future by Naomi Alderman is a fantastic read, particularly if you would like to see someone like Musk get an appropriate end to his story. I obviously won't spoil it by saying what that is, but I don't think I would be giving anything away by saying rockets are involved!