
A few months ago I decided to read Mary Shelley’s Frankenstein for the first time. I also watched Guillermo del Toro’s Frankenstein on a big screen, despite it having been, according to The New Yorker, “Netflixed down to size”.
Shelley’s book is largely monologues of the interior thoughts of Frankenstein and his creation, with wildly careering emotions and death, death, death everywhere – perhaps unsurprising from an author whose mother died 10 days after giving birth to her, who lost a child and whose half-sister died by suicide while she was working on Frankenstein, with much more tragedy to follow after its publication. There is a word Mary Shelley uses more often than in any other book I have read: variants of sympathy/sympathise/sympathies turn up 32 times. Because, of course, one of the many things the book is about is the mutual incomprehension of the creator and the created.
Last week I was in Bournemouth as a last-minute substitute for Lanzarote, something I may come back to at a later date, and I stumbled across the churchyard of St Peter’s Church, in which Mary Shelley, who died at the age of 53, is buried along with the cremated heart of her husband Percy Shelley. There is also a pub in Bournemouth named after her (above), though its sign depicts the monster from her most famous piece of writing.
As we enter another time of mutual incomprehension of the creator and the created, I have been reading the surprisingly difficult-to-access paper by Kyle Kingsbury (the systems engineer, not the MMA guy) called The Future of Everything is Lies, I Guess. I will put a link here to an X account which shared it, as going to the aphyr.com site to read it seems to generate this message:

Once you can read it, though, it starts to sketch out a likeness of our current monster and chip away a little at the human side of the mutual incomprehension. I am talking, of course, about what people are currently calling “AI”, which Kingsbury defines as:
…a family of sophisticated Machine Learning (ML) technologies capable of recognizing, transforming, and generating large vectors of tokens: strings of text, images, audio, video, etc. A model is a giant pile of linear algebra which acts on these vectors. Large Language Models, or LLMs, operate on natural language: they work by predicting statistically likely completions of an input string, much like a phone auto-complete. Other models are devoted to processing audio, video, or still images, or link multiple kinds of models together.
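To make that phone auto-complete analogy concrete, here is a minimal sketch in Python – mine, not Kingsbury’s – of what “predicting statistically likely completions” means at its crudest. The toy corpus, the greedy word choice and the complete function are all illustrative assumptions; real LLMs learn from vastly more data and predict over vectors of tokens, but the prediction loop has the same basic shape.

```python
# A toy illustration of "predicting statistically likely completions":
# count word bigrams in a tiny corpus, then repeatedly emit the most
# likely next word. (Illustrative only; nothing here is from the paper.)
from collections import Counter, defaultdict

corpus = (
    "the creature read the journal and the creature wept "
    "and the journal burned and the creature fled"
).split()

# Count how often each word follows each other word.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def complete(prompt: str, max_words: int = 8) -> str:
    """Greedily extend the prompt with the statistically likeliest next word."""
    words = prompt.split()
    for _ in range(max_words):
        candidates = bigrams.get(words[-1])
        if not candidates:
            break  # no statistics for this word, so stop
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

print(complete("the"))
```

Run as written, the greedy completion falls almost immediately into a repeating cycle (“the creature read the creature read…”) – a crude echo, at toy scale, of the strange loops and attractors mentioned below.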
The article sets out how this is a technology which nobody really understands – why it has been successful, or how to make it better – which falls into strange loops or attractors, has odd gaps in its capabilities and is highly sensitive to slight changes in the formatting of its inputs. It is a technology which is simultaneously highly capable and an idiot. And Kingsbury worries that our culture is not ready for such a technology. As he says:
As LLMs etc are deployed in new situations, and at new scale, there will be all kinds of changes in work, politics, art, sex, communication and economics. Some of these effects will be good. Many will be bad. In general, ML promises to be profoundly weird.
Buckle up.
He continues:
Most people seem concerned with conscious, motivated threats: AIs could realize they are better off without people and kill us. I am concerned that ML systems could ruin our lives without realizing anything at all.
There follow extensive examples of the problems various ML applications are already starting to cause, and some speculation about where things may be going in different areas of our lives, before we get to the chapter on work – and the subject of hiring “AI employees”. This is probably my favourite bit:
Imagine a co-worker who generated reams of code with security hazards, forcing you to review every line with a fine-toothed comb. One who enthusiastically agreed with your suggestions, then did the exact opposite. A colleague who sabotaged your work, deleted your home directory, and then issued a detailed, polite apology for it. One who promised over and over again that they had delivered key objectives when they had, in fact, done nothing useful. An intern who cheerfully agreed to run the tests before committing, then kept committing failing garbage anyway. A senior engineer who quietly deleted the test suite, then happily reported that all tests passed.
You would fire these people, right?
Kingsbury sees the two extremes of the possible range of outcomes as:
- ML systems continue to hallucinate, cannot be made reliable, and ultimately fail to deliver on the promise of transformative, broadly-useful “intelligence”. Or they work, but people get fed up and declare “AI Bad”…a lot of ML people lose their jobs, defaults cascade through the financial system, but the labor market eventually adapts and we muddle through. ML turns out to be a normal technology.
- At the other extreme, OpenAI delivers on Sam Altman’s 2025 claims of PhD-level intelligence, and the companies writing all their code with Claude achieve phenomenal success with a fraction of the software engineers. ML massively amplifies the capabilities of doctors, musicians, civil engineers, fashion designers, managers, accountants, etc, who briefly enjoy nice paychecks before discovering that demand for their services is not as elastic as once thought, especially once their clients lose their jobs or turn to ML to cut costs. Knowledge workers are laid off en masse and MBAs start taking jobs at McDonald’s or driving for Lyft, at least until Waymo puts an end to human drivers. This is inconvenient for everyone: the MBAs, the people who used to work at McDonald’s and are now competing with MBAs, and of course bankers, who were rather counting on the MBAs to keep paying their mortgages. The drop in consumer spending cascades through industries. A lot of people lose their savings, or even their homes. Hopefully the trades squeak through. Maybe the Jevons paradox kicks in eventually and we find new occupations.
In the following chapter Kingsbury speculates on what some of those new occupations might be:
- Incanters. People who can prompt LLMs into producing what is wanted.
- Process Engineers. People who help catch LLM errors. They build quality-control processes – training people, identifying where more intense review is needed, assessing the cost-benefit trade-offs of automating tasks, etc.
- Statistical Engineers. People who try to measure, model and control variability in ML systems.
- Model Trainers. This will become increasingly difficult as the amount of false content or “slop” increases across the internet.
- Meat Shields. People who are accountable for the errors of the ML systems they supervise.
- Haruspices. People responsible for going through the inputs, outputs and internal states of an ML system which has done something terrible, to try to give a plausible reason for its behaviour.
But ultimately Kingsbury concludes that we should just stop using these systems. To return to the original analogy, the monster cannot be understood. There is often nothing actually there to understand. And it is certainly not in the business of understanding you. Although it may be very, very good at convincing you otherwise.
On balance, I think my view currently sits at the muddle-through-with-ML-as-a-normal-technology end of that range, which still looks likely to cause a disruption considerably bigger than 2008. My main reason is the already collapsing trust in many of the Big Tech companies – trust which is going to be required even if their technology really can do some of this stuff. It is the scenario where we all get fed up and declare “AI Bad”. Like when we read about the people running Meta showing nowhere near the social responsibility commensurate with their current level of market power.
Or when, as last week, we have days and days of breathless commentary about Anthropic’s Mythos and Project Glasswing, and how its immense capabilities caused the company not to release it, sparking a meeting of central bankers to discuss the threat such technologies pose to financial systems. Only then to read an account of attempts to verify any of what Anthropic have been saying. It is quite a technical piece, not all of which I understand, but the final paragraph is fairly arresting:
The most important thing in the Mythos release is not the model. It is the precedent. Anthropic has established, without discussion and without pushback, that a private company can unilaterally classify a capability as too dangerous for the public, grant selective access to the largest incumbents in the affected industry, and construct a parallel disclosure regime outside any democratic accountability structure. That precedent is exclusivity for abuse. It will be used by companies with worse judgment than Anthropic and narrower definitions of “partner” than the Glasswing consortium. The time to object to the shape of this thing is while it is still being built, not after it has removed all transparency and accountability.
How might Claude or ChatGPT respond to being designated “AI Bad”? Well, Mary Shelley’s monster put it this way:
Once I falsely hoped to meet with beings who, pardoning my outward form, would love me for the excellent qualities which I was capable of unfolding. I was nourished with high thoughts of honour and devotion. But now crime has degraded me beneath the meanest animal. No guilt, no mischief, no malignity, no misery, can be found comparable to mine. When I run over the frightful catalogue of my sins, I cannot believe that I am the same creature whose thoughts were once filled with sublime and transcendent visions of the beauty and the majesty of goodness. But it is even so; the fallen angel becomes a malignant devil. Yet even that enemy of God and man had friends and associates in his desolation; I am alone.