The Actuary magazine recently had a debate about whether the underlying data or the story you wove around it was more important. I’m not sure there is always a clear distinction between the two, as Dan Davies rather neatly illustrates here, but my view is that, if a binary choice has to be made, it is always going to be the story. And there was a great example of this which popped up recently in the FT.

The FT article was ‘Is university still worth it?’ is the wrong question, by John Burn-Murdoch, with his usual excellent graphs. However, as is sometimes the case, I feel that a very different and more convincing story could be wrapped around the same datasets he is showing us.

The article’s thesis is as follows:

The graduate earnings premium, ie how much more on average graduates earn than non-graduates, has fallen in the UK as the proportion going to university has risen, while it has risen in other countries:

In the UK, we have had much weaker productivity growth than the other comparator countries, and also “the steady ramping up of the minimum wage has squeezed the earnings premium from the lower end too”:

We have also had a much smaller increase in the percentage of managerial and professional jobs than a different group of comparator countries (they haven’t mentioned Germany before), meaning graduates are forced to take lower-salaried jobs elsewhere:

So the answer according to the FT? We should focus on economic growth rather than “tweaking” higher education intake and funding. Then graduate earnings would be higher, student loans could be more generous(!) and students would have more chance of getting a good job.

Well perhaps. But here’s a different framing of the same data that I find more persuasive.

Let’s start by addressing that point about the minimum wage. According to the House of Commons Library report on this, the UK’s minimum wage is broadly comparable to that of France and the Netherlands, although higher than Canada’s and much higher than that of the United States. The employers who are the FT’s constituency would obviously like us lower down this particular chart:

The main economic framing here is the progress myth of the UK’s business community: economic growth. All problems can be solved if we can just get more economic growth. Apparently we need more inequality in pay between graduates and non-graduates, which we can get by generating more economic growth. This is honest of them at least, although I don’t see much evidence that the economic growth they crave will go into skilled job creation rather than stock buybacks (according to Motley Fool, “Companies spent $249 billion on stock buybacks in Q3 2025, and $777 billion over the first three quarters of 2025.”).

There are a lot of problems with framing every economic question with respect to economic growth, memorably illustrated recently by Zack Polanski of the Green Party in this video of less than three minutes (I strongly recommend you watch it before you read on – click on the read in browser link if you can’t see it):

Economic growth is increasingly without purpose, wasteful of energy and poorly distributed. It is chasing outputs, literally any outputs, whatever the cost to the environment, our health system, our education system, our social support systems and our communities. Looking at the framing above, you can see that economic growth as currently pursued will always treat anything which stops the concentration of wealth amongst the already wealthy as a problem – whether that is a higher national minimum wage or a fall in a totally made-up concept like the graduate earnings premium (itself a framing designed to make reducing inequality seem undesirable). Lack of productivity growth, itself a proxy for this kind of economic growth (because if you ask why we need more productivity the answer is always to get more economic growth), is usually directed as a criticism at “lazy” UK workers rather than at under-investing and over-extracting UK business owners.

But what if, instead of economic growth, your progress myth was reducing inequality? Or growing equality within the economy?

Source: World Inequality Database wid.world

If you focused on inequality rather than economic growth, then you would find that inequality correlates with everything we say we don’t want. Unlike economic growth, aiming for equality has the advantage of an evidence base for the claim that it improves society:

Source: https://media.equality-trust.out.re/uploads/2024/07/The-Spirit-Level-at-15-2024-FINAL.pdf

If you focused on inequality, then you would be pleased that we have had an increase in our minimum wage. You would think that the same FT article’s admission that UK graduates’ skill levels are higher than those in the United States was more important than something called a graduate earnings premium.

Burn-Murdoch is right to say asking whether university is worth it is the wrong question.

However economic growth is the wrong answer.

And I thought I would probably be stopping there for this week. But then something odd happened. A “Thought Exercise” set in June 2028 “detailing the progression and fallout of the Global Intelligence Crisis” (ie science fiction), published on 23 February, may have tanked the share price of IBM later that day. The drop certainly happened: IBM’s share price fell 13%, its biggest one-day fall since 2000, alongside smaller falls in other tech stocks.

Source: https://markets.ft.com/data/equities/tearsheet/summary?s=IBM:NYQ

According to the FT:

Investors have recently seized on social media rumours and incremental developments by small AI companies to justify further selling, with a widely circulated blog post by Citrini Research over the weekend describing how AI could hypothetically push the US unemployment rate above 10 per cent by 2028, proving the latest catalyst.

The likelihood of the scenario portrayed is difficult to assess, but the speed of the subsequent total economic collapse it describes feels unlikely, if not impossible. However, the fact that the markets are this jittery tells us something, I think. As Carlo Iacono puts it:

We are living through a period in which the gap between “plausible narrative” and “tradeable signal” has collapsed to nearly nothing. When a scenario feels real enough to model, and the underlying anxiety is already there waiting to be organised, fiction and forecast become functionally indistinguishable.

The data underlying the markets hasn’t changed, but the story has. I rest my case.

Het Scheepvaartmuseum, Amsterdam, in the fog. Another museum which is well worth a visit

To be read to the accompaniment of Lindisfarne singing Fog on the Tyne, or possibly Kate Bush singing The Fog.

Reporting on AI is all over the place, in both meanings of that phrase. Some think it is very dangerous but that the people working on it should be trusted to police it themselves. Some are retreating from prediction but are instead trying to draw a coastline “knowing the interior is mostly fog”. Some are playing war games in the Arctic with different LLMs. But everyone seems fairly confident they have a hot take. I wonder.

The book I finished this weekend had a passage about a first experiment with a new substance which could shield against gravity. Mr Cavor, the rather unworldly scientist, is explaining to Mr Bedford, a man with no obvious talents other than to look for a quick buck where he can find one, what would have happened if his substance, Cavorite, had not got dislodged fairly quickly from where they had positioned it:

“You perceive,” he said, “it formed a sort of atmospheric fountain, a kind of chimney in the atmosphere. And if the Cavorite itself hadn’t been loose and so got sucked up the chimney, does it occur to you what would have happened?”

I thought. “I suppose,” I said, “the air would be rushing up and up over that infernal piece of stuff now.”

“Precisely,” he said. “A huge fountain—”

“Spouting into space! Good heavens! Why, it would have squirted all the atmosphere of the earth away! It would have robbed the world of air! It would have been the death of all mankind! That little lump of stuff!”

“Not exactly into space,” said Cavor, “but as bad—practically. It would have whipped the air off the world as one peels a banana, and flung it thousands of miles. It would have dropped back again, of course—but on an asphyxiated world! From our point of view very little better than if it never came back!”

I stared. As yet I was too amazed to realise how all my expectations had been upset. “What do you mean to do now?” I asked.

“In the first place if I may borrow a garden trowel I will remove some of this earth with which I am encased, and then if I may avail myself of your domestic conveniences I will have a bath. This done, we will converse more at leisure. It will be wise, I think”—he laid a muddy hand on my arm—“if nothing were said of this affair beyond ourselves. I know I have caused great damage—probably even dwelling-houses may be ruined here and there upon the country-side. But on the other hand, I cannot possibly pay for the damage I have done, and if the real cause of this is published, it will lead only to heartburning and the obstruction of my work. One cannot foresee everything, you know, and I cannot consent for one moment to add the burden of practical considerations to my theorising…”

The extract is, of course, from HG Wells’ classic The First Men in the Moon, published in 1901.

In case you are in any doubt, Dario Amodei is our Mr Cavor here. I can just imagine his response to the first disaster attributed to AI research being prefaced by “one cannot foresee everything, you know…”. And there are more Mr Bedfords out there than you can shake a stick at, trying to sell you anything they can possibly attribute to AI just to keep the whole thing rolling along.

I am with the fog people. The FT seem to be too, with this pair of diagrams attached to this article.

First the US, where there are tentative signs of something that might serve as a proxy for productivity growth as a result of using AI:

Source: https://www.ft.com/content/d6fdc04f-85cf-4358-a686-298c3de0e25b

And this one for the UK, where there aren’t:

And so it was this foggy sensibility about AI which I took with me to the Bletchley Park Museum last weekend, site of the AI Safety Summit in November 2023 which drew in the US Vice President, Kamala Harris, European Commission President Ursula von der Leyen, Elon Musk, then UK Prime Minister Rishi Sunak, OpenAI’s Sam Altman, Meta’s Nick Clegg and Prof Yann LeCun, Meta’s chief AI scientist, amongst around 100 guests invited to suck their teeth about AI.

The thing that particularly struck me at Bletchley Park is that it demystified the emergence of the computer for me. Its forerunner, the mechanisation, using punch cards, of the process of sorting the massive amounts of data the centre was receiving in wartime, smacks of a group of people who had just run out of wall to spread their webs of cards and strings across. It was a crime investigation which had got out of hand.

A highlight for me was Alan Turing’s very prescient little note about AI, written in 1940 but anticipating the arguments which would be raging by 2026 (and how poignant that the man who probably did more than anyone to transform what we are able to do by punching a keyboard was chained to one that could only press hunks of metal against a strip of carbon onto a piece of paper):

There is also a hilarious secrecy pledge from the ancestors of the safety summit people, telling you all the ways in which you just need to shut up:

“There is an English proverb none the worse for being seven centuries old:” it thunders.

Wicked tongue breaketh bone,

Though the tongue itself hath none.

Words to live by, I’m sure we’d all agree.

What Bletchley Park was less good at was explaining how the Enigma code was cracked, despite an excellent collection of the hardware involved. For that, I recommend Simon Singh’s The Code Book.

Here was the world’s first “intelligence factory”, scaling up intelligence gathering and analysis as never before and by so doing also changing the way governments would interact with their populations, with just as many implications for our current times as the development of AI. This cluster of huts around a country house rebranded as GCHQ and moved to Cheltenham a few years after World War 2.

Path dependence is a term which describes a situation where past events or decisions constrain later events or decisions. Bletchley Park feels like the Museum of Path Dependence to me.

And the legacy of the safety summit? Well my “hot take” would be: when you are a little lost in the fog, it is generally advisable to slow down a bit and take steps to reduce your risk of breaking things. I wonder if I can get that on a bumper sticker.

The disappointing brandy scene from Goldfinger (1964) https://youtu.be/I6COBucJQfE?si=saiV5f80ISSB3FGY

Politics is a bit depressing this week, so I thought instead I would focus on the asymmetry of our attitudes towards different high octane liquids.

I remember when I first got interested in wine. It was the early noughties and I was out at a restaurant in Cardiff called Le Cassoulet (no longer trading under that name I understand) with my then boss who liked to hit his expense account pretty hard from time to time. The sommelier seemed to know him quite well and scurried off to get him some particularly old claret to accompany the meal. I think it was from 1972 or thereabouts. I remember noting that it had a different colour (brown) from the red wine I was used to drinking and, when sipped, there were a lot of different flavours and smells competing for my attention. Something which I later heard described as “complexity”. From then on I realised that wine drinking could involve rather more than just something nice in a glass to accompany a meal.

The journey of alcoholic drinks from drinks to luxury consumer items and assets is nicely illustrated by the Bond franchise. There are a number of movies we could choose but let’s go for Goldfinger, shall we?

In the disappointing brandy scene from Goldfinger, we have this exchange between M, Bond and the Governor of the Bank of England, Colonel Smithers:

Smithers: “Have a little more of this rather disappointing brandy.”

M: “What’s the matter with it?”

Bond: “I’d say it was a 30-year-old Fine indifferently blended, sir…with an overdose of Bons Bois.”

M: “Colonel Smithers is giving the lecture, 007.”

Now first of all, that is clearly not what the Governor of the Bank of England looks like. As readers of this blog already know, he looks like this:

That scene is also notable for including a brief discussion of how the relative value of gold held at the US and British central banks at the time was used “to establish respectively the true value of the dollar and the pound”. In 1964 this would have been via the London Gold Pool, running between 1961 and 1968, by which a group of eight central banks including the United States Fed and the Bank of England agreed to cooperate in maintaining the Bretton Woods System of fixed-rate convertible currencies and defending the gold price. Ian Fleming’s book, written in 1959, predated this arrangement, but the anxieties about the gold market which led to its creation would have been very much around. So we still have the Governor (meeting Bond alone rather than with M) saying (during a lecture which went on for 10 pages):

We can only tell what the true strength of the pound is, and other countries can only tell it, by knowing the amount of valuta we have behind our currency.

Valuta is a rare word, from American English, for the value of one currency in terms of its exchange rate with another, and perhaps an odd one for the Governor of the Bank of England to use. But it is clear that Bond is sent after Goldfinger primarily for economic reasons (finding a way to smuggle large amounts of gold across borders threatens the Bank of England’s cosy little gold club) rather than because (spoiler alert) Goldfinger thinks nothing of murdering people (quite a lot of people in the case of Operation Grand Slam) who get in his way, cheating at golf, employing butlers with lethal bowlers, slicing through things with gold lasers and planting nuclear devices in Fort Knox. Released shortly after Ian Fleming’s death, it was the last Bond movie he saw in production.

It is the same film in which Bond obsesses about getting his favourite champagne (Dom Perignon 1953 – Bond was also someone not afraid to hit his expense account pretty hard from time to time) chilled to 38°F (3.3°C) before he gets bashed on the back of the head and the girl he is with (Goldfinger’s assistant, Jill Masterson, played by Shirley Eaton) gets sprayed from head to toe with gold paint. Perhaps more than any other brand, Bond linked luxury and high octane liquids of various kinds.

Skip forward a few decades and some of it has clearly stopped being something to drink at all, becoming instead a very fragile status asset through which the very rich demonstrate their standing to each other. Here are the top prices achieved by wine at auction from one website, 8 of the 10 of them pre-dating both me and Goldfinger:

Source: vinovest https://www.vinovest.co/blog/25-most-expensive-wines-in-the-world-2026

Contrast this with the way we have treated fossil fuels. As Luke Kemp points out in Goliath’s Curse:

We tend to forget that fossil fuels come primarily from long-dead plants and animals. These organisms died between 360 and 286 million years ago during the Carboniferous period, after capturing sunlight through photosynthesis or other means. It is that fossilised energy that we are consuming. According to one estimate, it would take 400 years of global photosynthesis to power the modern world for one year. It takes ninety-eight tons of organic matter buried during the Carboniferous to become just five litres of petrol. We are now a high-energy Goliath, powered by dead matter.

According to a petrol price checker from earlier this week, the garage closest to me currently sells unleaded petrol for £1.29 a litre. So 98 tons of organic matter curated for 300 million years retails for £6.45. That’s less than half the price of a sausage bap and a coffee from Costa via UberEats:
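For anyone who wants to check that back-of-the-envelope sum, here is a minimal sketch using only the figures quoted above (Kemp’s 98 tons per 5 litres, and my local pump price):

```python
# Rough check of the petrol arithmetic quoted above.
# Kemp: ~98 tons of Carboniferous organic matter becomes ~5 litres of petrol.
price_per_litre = 1.29     # GBP, local pump price quoted in the text
litres_from_98_tons = 5    # litres of petrol per 98 tons of organic matter

cost = litres_from_98_tons * price_per_litre
print(f"98 tons of ancient organic matter retails for £{cost:.2f}")
# → 98 tons of ancient organic matter retails for £6.45
```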

But apparently it’s still not cheap enough.

Most of the content from this article recommending eternal vigilance despite the cheapest prices for 5 years, including the claim that “petrol is still 6p too high at the pumps”, comes from Howard Cox, founder of FairFuelUK, whose website includes this picture with a not-too-presumptuous-claim-at-all below it:

Even if you weren’t concerned with climate change or the health effects of petrol fumes in the air, this seems like a strange hill for anyone to be dying on. And dying we are. According to the 2025 Global Report of the Lancet Countdown average global heat-related mortality has now reached 546,000 pa, up 63% in just over 20 years:

And that’s just heat. A recent report from the Royal College of Physicians, A Breath of Fresh Air, estimated 30,000 deaths from air pollution each year, of which car emissions form an important component.

By the time even the Bond franchise had started worrying about environmental concerns in 2008 with Quantum of Solace, a Somerset Maughamish short story converted into an attempt by a sinister organisation to become the water monopoly in Bolivia through underhand means, the iconic shot of the woman covered in gold had become a female consular employee (Strawberry Fields, played by Gemma Arterton) drowned in oil:

Source: http://007magazine.co.uk/factfiles/factfiles_trivia5.htm

We currently pay between £3.12 and £7.09 per litre in duty on wine, depending on strength, and £0.53 per litre in duty on petrol.
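Put another way (a rough ratio computed from the duty figures just quoted, nothing more official than that):

```python
# Duty per litre in GBP, as quoted in the text above
wine_duty_low, wine_duty_high = 3.12, 7.09
petrol_duty = 0.53

print(f"Wine is taxed at between {wine_duty_low / petrol_duty:.1f}x and "
      f"{wine_duty_high / petrol_duty:.1f}x the rate of petrol, per litre")
# → Wine is taxed at between 5.9x and 13.4x the rate of petrol, per litre
```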

Our attitude to different types of high octane liquids has clearly been nuts in all kinds of ways for a long time. But it is just part of our political frostbite at the moment: we allow our living organisations and institutions to remain frozen in time because we have always done things that way, regardless of the living tissue we are killing in the process. From the endless cycle of public inquiries and ignored recommendations, to our use of economics to rationalise things we have already decided to do, to batting on with traditional exams: it seems we are just going to do what we are going to do. And freezing fuel duty now looks like it needs to be added to that list.

We can laugh at Trump for accepting an award of “undisputed champion of beautiful clean coal” by the Washington Coal Club and legislating that black is now white by revoking the Environmental Protection Agency’s scientific ruling from 2009 about the harms of climate change. But Trump does at least think he needs a reason to support the fossil fuel industry, even if he needs to make one up. We are just doing it because our politics has gangrene.

In ordering #5, self-driving cars will happily drive you around, but if you tell them to drive to a car dealership, they just lock the doors and politely ask how long humans take to starve to death. Source: https://m.xkcd.com/1613/

To be read to the soundtrack of Bruce Springsteen singing Streets of Minneapolis.

My attention was drawn this week to an article by Dario Amodei, co-founder of Anthropic (a spin-off from OpenAI, which was co-founded by Elon Musk and heavily invested in by Microsoft, so very much part of the Magnificent 7 architecture), the creator of the large language model Claude, called The Adolescence of Technology. It is hard to overemphasise how much I disagree with everything Dario has written here, but the article is also useful in that it is long, covers a lot of ground, and allows me to define my views in opposition to it.

The irritations start pretty much straight away. So Dario quotes from a science fiction classic (Carl Sagan’s Contact), but then follows this up under the heading of “Avoid doomerism” with this:

…but it’s my impression that during the peak of worries about AI risk in 2023–2024, some of the least sensible voices rose to the top, often through sensationalistic social media accounts. These voices used off-putting language reminiscent of religion or science fiction, and called for extreme actions without having the evidence that would justify them.

Notice the word “sensible” doing the heavy lifting there. Only science fiction endorsed by Dario will be considered. Dario wants us to consider the risks of AI in “a careful and well-considered manner”, which sounds reasonable, but then his third and final bullet under this (after “avoid doomerism” and “acknowledge uncertainty”) goes as follows:

Intervene as surgically as possible. Addressing the risks of AI will require a mix of voluntary actions taken by companies (and private third-party actors) and actions taken by governments that bind everyone. The voluntary actions—both taking them and encouraging other companies to follow suit—are a no-brainer for me. I firmly believe that government actions will also be required to some extent, but these interventions are different in character because they can potentially destroy economic value or coerce unwilling actors who are skeptical of these risks (and there is some chance they are right!).

So he is reflexively anti-regulation of his own industry, of course. And voluntary actions by corporations, an approach to solving problems which has repeatedly been demonstrated not to work, are apparently “a no-brainer”. It is also automatically assumed that government actions will destroy value. Only market solutions will be endorsed by Dario, pretty much until they have messed up so badly that you are forced to bring governments in:

To be clear, I think there’s a decent chance we eventually reach a point where much more significant action is warranted, but that will depend on stronger evidence of imminent, concrete danger than we have today, as well as enough specificity about the danger to formulate rules that have a chance of addressing it. The most constructive thing we can do today is advocate for limited rules while we learn whether or not there is evidence to support stronger ones.

There is then the expected sales pitch about what he has seen within Anthropic of the relentless “increase in AI’s cognitive capabilities”. And then the man who warned about sensationalist science fiction is off:

I think the best way to get a handle on the risks of AI is to ask the following question: suppose a literal “country of geniuses” were to materialize somewhere in the world in ~2027. Imagine, say, 50 million people, all of whom are much more capable than any Nobel Prize winner, statesman, or technologist.

And the rest of the article is then off solving this imaginary problem in all its facets, rather than the wealth and power concentration problem that we actually have. The only legislation he appears to favour is something called “transparency legislation”, which of course Anthropic would help to write.

However, after suggesting everything from isolating China and using “AI to empower democracies to resist autocracies” to private philanthropy as the solutions to his imagined problems, Dario finally and reluctantly concludes government intervention might after all be necessary as follows:

…ultimately a macroeconomic problem this large will require government intervention. The natural policy response to an enormous economic pie coupled with high inequality (due to a lack of jobs, or poorly paid jobs, for many) is progressive taxation. The tax could be general or could be targeted against AI companies in particular. Obviously tax design is complicated, and there are many ways for it to go wrong. I don’t support poorly designed tax policies. I think the extreme levels of inequality predicted in this essay justify a more robust tax policy on basic moral grounds, but I can also make a pragmatic argument to the world’s billionaires that it’s in their interest to support a good version of it: if they don’t support a good version, they’ll inevitably get a bad version designed by a mob.

That, by the way, is what Dario thinks of democracy: “a bad version designed by a mob” rather than the “good version” that he and his fellow billionaires could come up with in their own self interest. The mask has really slipped by this point. And the following section, on “Economic concentration of power”, just demonstrates that he has no effective answers at all that he deems acceptable on this. It’s just an inevitability for him.

This is what Luke Kemp’s excellent Goliath’s Curse refers to as a “Silicon Goliath”. Goliaths are dominance hierarchies which spread by dominating the areas around them. They need three conditions (which Luke calls “Goliath fuel”): lootable resources (ie resources which can be easily stolen from someone else), caged land (ie land difficult to escape from) and monopolizable weapons (ie ones which require processes which can be developed to give one society an edge over another). We are all Goliath-dwellers in “The West” now, looting resources from other countries in unequal exchanges which impoverish the Global South, with weapons (eg nuclear weapons) available only to the elite few countries and operating within the cages of heavily-policed national boundaries. The Silicon Goliath which is developing will have data as its lootable resource, mass surveillance systems providing its cages and monopolizable weapons such as killer drones. The resultant killbot hellscapes which people like Dario Amodei laughably imagine they have defences against, through things like Claude’s Constitution, are almost pitiful in their inadequacy.

Nate Hagens takes Dario’s claims for AI’s cognitive capabilities much more seriously than I do, and then considers the risks in a less adolescent way here. As he says:

And here’s what his essay has almost nothing about. Energy, water, materials, or ecological limits.

And also nowhere does Dario talk about the 99% of people who are just spectators in his world, other than to describe them as “the mob”. This is quite a blind spot, as Luke Kemp points out in his exhaustive study of the collapses of “Goliaths” over the last 5,000 years. “The extreme levels of inequality” predicted by Amodei in his essay are not just things we have to put up with, but the reasons the world he predicts is likely to be hugely unstable. Not created by AI, but accelerated by it. Kemp describes it as “diminishing returns on extraction”:

We see a pattern re-emerging across case studies. Societies grow more fragile over time and more prone to collapse. Threats that they had always faced such as invaders, disease and drought seem to take a heavier toll.

As societies grew bigger:

They still faced the underlying (and ongoing) problem of rising inequality creating societies where institutions were more extractive and power was more concentrated.

And eventually:

The result is more extractive institutions creating growing instability, internal conflict, a drain of resources away from government, state capture by private elites, and worse decision-making. Society – especially the state – becomes more fragile. Private elites tend to take a larger share of extractive benefits. The state, and many of the power structures it helps prop up, then usually falls apart once a shock hits: for Rome it was climate change, disease, and rebelling Germanic mercenaries; for China it was often floods, droughts, disease and horseback raiders; for the west African kingdoms it was invaders and a loss of trade; for the Maya it was drought and a loss of trade; and for the Bronze Age it was drought, a disruption of trade and an earthquake storm.

The only real answer to combating existential risks in the hands of adolescents like the Tech Bros is more democracy: control over decision-making, over resources, over the threat of violence and over information. We are a long way from achieving these within our own particular Goliath at the moment, and indeed there is no sign at all that our elites are interested in achieving them. The Magnificent 7 are propping up the US stock exchange. The promise of perpetual economic growth is the progress myth of our time, and leaders who do not provide it will lose the “Mandate of Heaven” in just the same way as Chinese rulers did when they were unable to prevent floods and droughts. Adam Tooze sees the signs of the inner demons of our elites starting to detach them from reality in the latest disclosures from the Epstein files:

Are we, like [Larry] Summers, fantasizing about stabilizing our desires and needs in an inherently dangerous and uncertain world? Are we kidding ourselves?

But, without those controls in place, we would need a lot more than Dario’s Anthropic playing nicely to allow this particular adolescent to grow up. And this is where I am forced to take Nate Hagens’ assessment more seriously. Because if our rulers’ Mandates of Heaven are dependent on eternal economic growth on their watch and they, rightly, think that this is not possible in our current non-AI-enhanced world but, wrongly, think it is possible in a future AI-enhanced world, then that is the way they are going to demand we go. And, if the Larry Summers fantasists really are kidding themselves, it may be very hard to talk them out of it.