I watched The War Game this week, as it had suddenly turned up on iPlayer and I had not seen it before. It was the infamous film from 1966 on the horrors of a nuclear war in the UK that was not televised until 1985. It has been much lauded as both necessarily horrifying and important over the years, but what struck me watching it was how much it looked back to the period of rationing (which had only ended in the UK 12 years earlier) and general war-time organisation from the Second World War. It would be a very different film if made now, probably drawing on our recent experiences of the pandemic (when of course we did dig huge pits for mass burials of the dead and set up vast Nightingale hospitals as potential field hospitals, before the vaccines emerged earlier than expected).

But what about the threat of nuclear war, which still preoccupied us so much in the 1980s but which seems to have become much less of a focus more recently? With the New START treaty, which limits the number of strategic nuclear warheads that the United States and Russia can deploy, along with the land-based and submarine-launched missiles and bombers that deliver them, due to expire on 5 February 2026, negotiations between Russia and the United States finally appear to be in progress. However, China has today confirmed that it does not want to participate in these.

In Mark Lynas’ recent book Six Minutes to Winter, he points to the Barrett, Baum and Hostetler paper from 2013, which estimated the probability of inadvertent nuclear war in any year to be around 1%. That is twice the annual probability of insolvency (0.5%) we think acceptable for our insurance companies under Solvency II, and would mean, if accurate, that the probability of avoiding nuclear war by 2100 was 0.99 raised to the power of 75 (the number of years until 2100), or about 47%, ie less than a fifty-fifty chance.
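For what it is worth, the arithmetic is easy to check. Here is a minimal sketch in Python, assuming the 1% annual figure is right and treating each of the 75 years to 2100 as independent:

```python
# Minimal sketch of the compounding above, assuming a constant, independent
# 1% chance of inadvertent nuclear war in each of the 75 years to 2100.
annual_risk = 0.01
years = 75  # 2025 to 2100

p_avoid = (1 - annual_risk) ** years
print(f"Chance of avoiding nuclear war by 2100: {p_avoid:.0%}")      # roughly 47%
print(f"Chance of at least one nuclear war:     {1 - p_avoid:.0%}")  # roughly 53%
```

Even at 0.5% a year, the Solvency II standard, the same compounding only gives about a 69% chance of reaching 2100 unscathed.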

That doesn’t seem like good enough odds to me. As Lynas says:

We cannot continue to run the daily risk of nuclear war, because sooner or later one will happen. We expend enormous quantities of effort on climate change, a threat that can endanger human civilisation in decades, but ignore one that can already destroy the world in minutes. Either by accident or by intent, the day of Armageddon will surely dawn. It’s either us or them: our civilisation or the nukes. We cannot both survive indefinitely.

The Treaty on the Prohibition of Nuclear Weapons (TPNW) was adopted at the UN in 2017 and came into force in 2021. In Article 1 of the Treaty, each state party to it undertakes never to develop, test, produce, possess, transfer, use or threaten to use nuclear weapons under any circumstances. 94 countries have signed the TPNW to date, with 73 full parties to it.

The House of Commons library entry on TPNW poses a challenge:

It is the first multilateral, legally binding, instrument for nuclear disarmament to have been negotiated in 20 years. However, the nuclear weapon states have not signed and ratified the new treaty, and as such, are not legally bound by its provisions. The lack of engagement by the nuclear weapon states subsequently raises the question of what this treaty can realistically achieve.

It then goes on to state the position of the UK Government:

The British Government did not participate in the UN talks and will not sign and ratify the new treaty. It believes that the best way to achieve the goal of global nuclear disarmament is through gradual multilateral disarmament, negotiated using a step-by-step approach and within existing international frameworks, specifically the Nuclear Non-Proliferation Treaty. The Government has also made clear that it will not accept any argument that this treaty constitutes a development of customary international law binding on the UK or other non-parties.

There are 9 nuclear states in the world: China, France, India, North Korea, Pakistan, Russia, Israel, the UK and the United States. Israel recently conducted a 12-day war with Iran to stop it becoming the 10th. Many argue that Russia would never have invaded Ukraine had Ukraine kept its nuclear weapons (although it seems unlikely that Ukraine would ever have been able to use them as a deterrent, for a number of reasons). So the claims of these nuclear states that their arsenals are essential to their security have real force.

But is the risk that continued maintenance of a nuclear arsenal poses worth it for this additional security? For the security only operates at the deterrence level. Once the first bomb lands we are no more secure than anyone else.

Which makes it all the more concerning when Donald Trump starts saying things like this (in response to veiled threats about Russia’s nuclear arsenal from Dmitry Medvedev, the former Russian President):

“I have ordered two Nuclear Submarines to be positioned in the appropriate regions, just in case these foolish and inflammatory statements are more than just that. Words are very important, and can often lead to unintended consequences, I hope this will not be one of those instances.”

But with a probability of avoiding “unintended consequences” less than fifty-fifty by 2100? That really doesn’t feel like good enough odds to me.

The 1960s version of The Magnificent Seven (itself a remake of Kurosawa’s Seven Samurai) before most of them were shot dead

In my last post, I suggested that there appeared to be a campaign to impugn the character of the younger generation as cover for reducing graduate recruitment, partly because of the desire to make AI systems of various sorts handle a wider and wider range of tasks. However, there are other reasons why the value of AI needs to be promoted to the point where, if your toaster or fridge is not using a chip, it absolutely should be. It is all about the dependence of the US stock market on the so-called Magnificent 7 companies: Alphabet (Google), Apple, Meta (Facebook), Tesla, Amazon, Microsoft and Nvidia, whose combined market capitalisation as at 22 July was 31% of the S&P500.

Nvidia? Who are they? They produce silicon chips. As Laura Bratton wrote in May:

As of Nvidia’s 2025 fiscal fourth quarter (the three months ending on Jan. 26 of this year), Bloomberg estimates that Microsoft spends roughly 47% of its capital expenditures directly on Nvidia’s chips and accounts for nearly 19% of Nvidia’s revenue on an annualized basis.

Meanwhile, 25% of Meta’s capital expenditures go to Nvidia and the company accounts for just over 9% of Nvidia’s annual revenue.

Amazon, Alphabet and Tesla are also big customers.

Nvidia is a growth stock, which means that it needs continued growth to support its share price. Once it ceases to be a growth stock, the kind of price-earnings ratio it currently enjoys (nudging up to 60; by comparison, the price-earnings ratio of, say, HSBC is around 17.5) will no longer be acceptable to investors, and a large correction in the share price will happen. So a growth slowdown in the Magnificent 7 is big news.
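To put a rough number on what a de-rating could mean, here is an illustrative back-of-the-envelope calculation in Python. The 60 and 17.5 multiples are the approximate figures quoted above; the assumption that earnings stay flat while the market re-rates the stock is mine, purely for illustration:

```python
# Illustrative only: if the market re-rates a stock from a growth multiple to a
# value multiple while earnings stay flat, the price falls in proportion to the multiple.
growth_pe = 60.0   # approximate Nvidia price-earnings ratio quoted above
value_pe = 17.5    # approximate HSBC price-earnings ratio quoted above

implied_fall = 1 - value_pe / growth_pe
print(f"Implied share price fall on re-rating: {implied_fall:.0%}")  # roughly 71%
```

That is the mechanics behind "a large correction in the share price": earnings do not have to fall at all; the multiple just has to normalise.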

What would prevent a growth slowdown? Well, primarily a lot of processing-heavy sales for Facebook, Amazon, Apple and Google. That is why there is now an AI overview of your Google search, why Rufus sits at the bottom of your Amazon search, and why everything appears to have a voice-activated capability which can be accessed via Alexa or Siri these days.

Of course I am not arguing that there are not uses for large language models (LLMs) and other technologies currently wrapped up in the term AI. Seth Godin, usually a first mover in this space, has produced a set of cards with prompts for your LLM that you can tailor for various uses. Many people are seeing how AI applications can cut down the time they spend on everything from diary management to constructing PowerPoint presentations. There is no doubt that use of AI will have changed the way we do some things in a few years’ time. It will not, however, have replaced all of the jobs in Microsoft’s list, from mathematician to geographer to historian to writer. If you want a (much) fuller critique of what is misguided about the AI bubble, I refer you to The Hater’s Guide To The AI Bubble.

There is a lot of rough surrounding a few diamonds and the conditions for a bubble are all there. We know this because we have been here before. On 10 March 2000, the dotcom bubble burst. As Goldman Sachs puts it:

The Nasdaq index rose 86% in 1999 alone, and peaked on March 10, 2000, at 5,048 units. The mega-merger of AOL with TimeWarner seemed to validate investors’ expectations about the “new economy”. Then the bubble imploded. As the value of tech stocks plummeted, cash-strapped internet startups became worthless in months and collapsed. The market for new IPOs froze. On October 4, 2002, the Nasdaq index fell to 1,139.90 units, a fall of 77% from its peak.

Fortune are now claiming that the current AI boom is bigger than the dotcom bubble. And even leading figures in the AI industry admit that it is already a bubble.

This is where it gets interesting. The FT, in its reflection on these parallels, appears to be comforted by the big names involved this time:

To be sure, the parallels are not exact. They never are. While most of the dotcom companies were ephemeral newcomers, the Mag 7 include some of the world’s most profitable and impressive groups including Apple, Amazon and Microsoft, as well as the main supplier to the AI economy, Nvidia.

But of course this is the reason why it’s worse this time. We were able to manage without the "ephemeral newcomers", although Amazon’s share price fell by 90% over two years and Microsoft lost 60%, so the distinction is not quite as clean as it sounds. However, these companies were not the foundations of the economy then in the way that they are now.

If Nvidia is the essential link in the supply chain for the other six of the Magnificent 7, then its own supply chain is equally precarious. As Ed Conway’s excellent Material World points out, Nvidia is "fabless" (ie without its own fabrication plant) and relies on Taiwan Semiconductor Manufacturing Company (TSMC) for the manufacture of its processors. TSMC in turn is completely dependent on the company which makes the machines essential to its manufacturing units, ASML. As Conway says:

As of this moment, ASML is the only company in the world capable of making these machines, and TSMC is, alongside Samsung, the only company capable of putting such technology into mass production.

And then there are the raw materials required in these industries. Much has been made, by Diane Coyle and others, of the “weightless” nature of our global economy. Conway demolishes this fairly comprehensively:

In 2019, the latest year of data at the time of writing, we mined, dug and blasted more materials from the earth’s surface than the sum total of everything we extracted from the dawn of humanity all the way through to 1950.

There is a place in North Carolina called Spruce Pine where they mine the purest quartz in the world. As one person Conway interviewed said:

“If you flew over the two mines in Spruce Pine with a crop duster loaded with a very particular powder, you could end the world’s production of semiconductors and solar panels within six months.”

Whereas China controls the solar panel market, it is reliant on imports for its semiconductors. In 2017 these imports cost China more than Saudi Arabia made from exporting oil, and more than the entire global trade in aircraft.

Conway muses on whether China would invade Taiwan because of this and concludes probably not.

“Even if China invaded Taiwan and even if TSMC’s fabs survived the assault…that would not resolve its issue. Fab 18 [TSMC’s plant] might be where the world’s most advanced chips are made, but they are mostly designed elsewhere”.

However, an invasion would certainly be hugely disruptive if disruption were the goal. So even if the share prices of the Magnificent 7 don’t plummet of their own accord, they might be eviscerated by a crop duster or an assault on Taiwan.

There are so many needles poised to prick this particular bubble that it would seem prudent, as a company, to be cautious about how dependent you make yourself on AI technology over the next few years.

Last time I suggested that the changes to graduate recruitment patterns, due at least in part to technological change, appeared to be to the disadvantage of current graduates, both in terms of number of vacancies and in what they were being asked to do.

This immediately reminds me of the old Woody Allen joke from the opening monologue to Annie Hall:

Two elderly women are at a Catskills mountain resort, and one of ’em says: “Boy, the food at this place is really terrible.” The other one says, “Yeah, I know, and such … small portions.”

This would clearly be an uncomfortable position for Corporate Britain if it were accepted, so a push-back is to be expected. The drop in graduate vacancies is hard to challenge, so the next target is obviously the candidates themselves.

So hot on the heels of "Kids today need more discipline", "Nobody wants to work", "Students today aren’t prepared for college", "Kids today are lazy", "We are raising a generation of wimps" and "Kids today have too much freedom" (I refer you to Paul Fairie’s excellent collections of newspaper reports through history detailing these findings at regular intervals), we now have the FT, newspaper of choice for Corporate Britain, weighing in on "The Troubling Decline in Conscientiousness", this time backed up by a whole series of graphs:

John Burn-Murdoch does a lot of great data work on a huge array of subjects, which I have referred to often, but I find the quoted studies problematic for a number of reasons. First of all, there is the suspicion that young people have already been found guilty before the evidence was sought to back this up. For instance, which came first here: the "factors at work" or the "shifts"?

While a full explanation of these shifts requires thorough investigation, and there will be many factors at work, smartphones and streaming services seem likely culprits.

At one point John feels compelled to say:

While the terminology of personality can feel vague, the science is solid.

At which point he links to this study, defending the five-factor model of personality as a "biologically based human universal", which terrifies me a little. Now of course there are always studies pointing in lots of different directions for any piece of social science research, and this is no exception. In this critique of the five-factor model (FFM), for instance, we find that:

While the two largest factors (Anxiety/Neuroticism and Extraversion) appear to have been universally accepted (e.g., in the pioneering factor-analytic work of R. B. Cattell, H. J. Eysenck, J. P. Guilford, and A. L. Comrey), the present critique suggests, nevertheless, that the FFM provides a less than optimal account of human personality structure.

I first saw the FT article via a post on LinkedIn, where there was one mild push-back sitting alone amongst crowds of pile-ons from people of my generation. After all, it feels right, doesn’t it? But Chris Wagstaff, Senior Visiting Fellow at Bayes Business School, was spot on, I feel, when he pointed out four potential behavioural biases at play within the organisations where these young people are working:

  1. The decline in conscientiousness and some of the other traits identified could be a consequence of more senior colleagues not inviting or taking on board constructive challenge from younger colleagues, the calamity of conformity, i.e. groupthink, so demotivating the latter.
  2. Related to this is the tendency for many organisations to get their employees to live and breathe an often meaningless set of values and adhere to a blinkered way of doing things. Again, hugely frustrating and demotivating.
  3. Or perhaps we’re seeing way too many meetings being populated by way too many participants, meaning social loafing (ie when individual performance isn’t visible they simply hide behind others) is on the increase.
  4. Finally, remuneration structures might discourage entrepreneurial thinking and an element of risk taking (younger folk are less risk averse than older folk). Again, very demotivating.

These sound like much more convincing "factors at work" to me than smartphones or streaming services, neither of which, of course, is the preserve of the young. But demonising the young is an essential prelude to feeling better about denying them work or forcing them into some kind of reverse centaur position.

Corporate Britain needs to do better than pseudo-scientific victim blaming. There are real issues here around the next generation’s relationship with work and much else which need to be met head on. Your future pension income may depend upon it.

In a previous post, I mentioned the “diamond model” that accountancy firms are reportedly starting to talk about. The impact so far looks pretty devastating for graduates seeking work:

And then by industry:

Meanwhile, Microsoft have recently produced a report into the occupational implications of generative AI, and their list of the top 40 most vulnerable roles looks like this (look at where data scientist, mathematician and management analyst sit – all noticeably more replaceable by AI than "model", the occupation which caused all the headlines when Vogue ran an AI-generated one last week):

So this looks like a process well underway rather than a theoretical one for the future. But I want to imagine a few years ahead. Imagine that this process has continued to gut what we now regard as entry level jobs and that the warning of Dario Amodei, CEO of AI company Anthropic, that half of “administrative, managerial and tech jobs for people under 30” could be gone in 5 years, has come to pass. What then?

Well, this is where it gets interesting (for some excellent speculative fiction about this, the short story Human Resources and the novel Service Model by Adrian Tchaikovsky will certainly give you something to think about), because there will still be a much smaller number of jobs in these roles. They will be very competitive – perhaps we will see FBI-style recruitment processes becoming more common for the rarefied few, probably administered by the increasingly capable systems I discuss below – and they will be paid a lot more. However, as Cory Doctorow describes here, the misery of being the human in the loop for an AI system designed to produce output whose errors are hard to spot, and therefore to stop, includes being the ready-made scapegoat (or "moral crumple zone" or "accountability sink") for when the system is inevitably used to overreach what it was programmed for and produces something terrible. Doctorow calls these humans "reverse centaurs": the human has become the horse part.

The AI system is no longer working for you as some "second brain". You are working for it, but no company is going to blame the very expensive AI system it has invested in when there is a convenient and easily-replaceable (remember how hard these jobs will be to get) human candidate to take the fall. And it will be assumed that people will still take these jobs, reasoning either that they are the only route to highly paid and more secure jobs later, or that they will be able to retire at 40, as the aspiring Masters of the Universe (the phrase coined by Tom Wolfe in The Bonfire of the Vanities) in the City of London have been telling themselves since the 1980s – only this time surrounded by robot valets, no doubt.

But a model where all the gains go to one, older, generation at the expense of another, younger, one depends on there being reasonable future prospects for that younger generation, or on some other means of coercing it.

In their book, The Future of the Professions, Daniel and Richard Susskind talk about the grand bargain. It is a form of contract, but, as they admit:

The grand bargain has never formally been reduced to writing and signed, its terms have never been unambiguously and exhaustively articulated, and no one has actually consented expressly to the full set of rights and obligations that it seems to lay down.

Atul Gawande memorably expressed the grand bargain for the medical profession (in Better) as follows:

The public has granted us extraordinary and exclusive dispensation to administer drugs to people, even to the point of unconsciousness, to cut them open, to do what would otherwise be considered assault, because we do so on their behalf – to save their lives and provide them comfort.

The Susskinds questioned (in 2015) whether this grand bargain could survive a future of "increasingly capable systems" and suggested a future in which all 7 of the following models were in use:

  1. The traditional model, ie the grand bargain as it works now. Human professionals providing their services face-to-face on a time-cost basis.
  2. The networked experts model. Specialists work together via online networks. BetterDoctor would be an example of this.
  3. The para-professional model. The para-professional has had less training than the traditional professional but is equipped by their training and support systems to deliver work independently within agreed limits. The medical profession’s battle with this model has recently given rise to the Leng Review.
  4. The knowledge engineering model. A system is made available to users, including a database of specialist knowledge and the modelling of specialist expertise based on experience in a form that makes it accessible to users. Think tax return preparation software or medical self-diagnosis online tools.
  5. The communities of experience model, eg Wikipedia.
  6. The embedded knowledge model. Practical expertise built into systems or physical objects, eg intelligent buildings which have sensors and systems that test and regulate the internal environment of a building.
  7. The machine-generated model. Here practical expertise is originated by machines rather than by people. This book was written in 2015 so the authors did not know about large language models then, but these would be an obvious example.

What all of these alternative models had in common, of course, was the potential to no longer need the traditional-model professional in future.

There is another contract which has never been written down: that between the young and the old in society. Companies are jumping the gun on how the grand bargain is likely to be re-framed and adopting systems before all of the evidence is in. As Doctorow said in March (ostensibly about Musk’s DOGE when it was in full firing mode):

AI can’t do your job, but an AI salesman (Elon Musk) can convince your boss (the USA) to fire you and replace you (a federal worker) with a chatbot that can’t do your job

What strikes me is that the boss in question is generally at least 55. As one consultancy has noted:

Notably, the youngest Baby Boomers turned 60 in 2024—the average age of senior leadership in the UK, particularly for non-executive directors. Executive board directors tend to be slightly younger, averaging around 55.

Assume there were some kind of written contract between young and old that gave the older generation, while in positions of power, the responsibility to act as custodians of all of the benefits of living in a civilised society, so that life would be at least as good for the younger generation when it succeeded them.

Every time a Baby Boomer argues that the state pension age must increase because "we" cannot afford it, he or she is arguing both that the workers currently paying for his or her pension should continue to do so and that they should accept a delay in receiving their quid pro quo, with no risk that the changes will be applied to the Boomer, as all changes are flagged many years in advance. That contract would clearly be in breach. Every Boomer graduate from more than 35 years ago who argues for the cost of student loans to increase when they never paid for theirs would break such a contract. Every Boomer homeowner who argues against any measure which might moderate the house price inflation from which they benefit in increased equity would break such a contract. And of course any such contract worth its name would require strenuous efforts to limit climate change.

And when a Boomer removes a graduate job to temporarily support their share price (so-called rightsizing) in favour of a system that is necessarily not yet fully tested (by which I mean more than testing the software: also all of the complicated network of relationships required to make any business operate successfully), the impact of that temporary inflation of the share price on executive bonuses is being valued much more highly than both the future of the business and the generation that will be needed to run it.

This is not embracing the future so much as selling a futures contract before setting fire to the actual future. And that is not a contract so much as an abusive relationship between the generations.