In my last post I referred to Dan Wang’s excellent new book, Breakneck, which I have now read at (for me) breakneck speed, finishing it in a week. It has made me realise how very little I knew about China.

Wang makes the point that China today is reminiscent of the US of a century ago. However, he also makes the point that parts of the US were terrible to live in then: from racist segregation and lack of representation to massive industrial pollution and insensitive planning decisions. As he says of the US:

The public soured on the idea of broad deference to US technocrats and engineers: urban planners (who were uprooting whole neighborhoods), defense officials (who were prosecuting the war in Vietnam), and industry regulators (who were cozying up to companies).

China meanwhile has a Politburo stuffed with engineers and is capable of making snap decisions without much regard to what people want. There is a sense of precarity about life there, with people treated as aggregates rather than as individuals. The country can take off in different directions very quickly, and often does – there is a telling passage about the totally different life experiences of someone born in 1959 compared to someone born in 1949 (the worst year to be born in China, according to Wang) – and even the elites can be dealt with brutally if they fall out of line with the current direction of travel. But China has created some impressive infrastructure, of a kind the US now finds it very hard to build.

Only around 10% of China's GDP goes towards social spending, compared to 20% in the US and 30% amongst some European states, so there is no effective safety net. Think of the US portrayed in (as Christmas is fast approaching) “It’s a Wonderful Life” – a life that is hard to the point of brutality, with destitution only one mistake away. And there is a level of social control alien to the west, dictating where people can live and work, and very repressive of ethnoreligious minorities. And yet there is a feeling of progress and forward momentum which appears to be popular with most people in China.

As Wang notes at the end of his introduction:

“Breakneck” is the story of the Chinese state that yanked its people into modernity – an action rightfully envied by much of the world – using means that ran roughshod over many – an approach rightfully disdained by much of the world. It is also a reminder that the United States once knew the virtues of speed and ambitious construction.

The chapter on the one-child policy, which ran for 35 years, is particularly chilling (China announced its first population fall in 2023, and its population is projected to halve to 700 million by 2100); now the pressure is on women to have more children again. There is also a chapter on how China dealt with Covid – Wang experienced this first hand from Shanghai for three years – which perhaps made me understand why we wasted so much money in the UK on Test and Trace. You would need to be an engineering state to see it through successfully, and China ended up taking it too far in the end.

The economics of China is really interesting. As Wang notes:

China’s overbuilding has produced deep social, financial and environmental costs. The United States has no need to emulate it uncritically. But the Chinese experience does offer political lessons for America. China has shown that financial constraints are less binding than they are cracked up to be. As John Maynard Keynes said, “Anything we can actually do we can afford.” For an infrastructure-starved place like the United States, construction can generate long-run gains from higher economic activity that eventually surpass the immediate construction costs. And the experience of building big in underserved places is a means of redistribution that makes locals happy while satisfying fiscal conservatives who are normally skeptical of welfare payments.

This goes just as much for the UK, where pretty much everywhere outside London is infrastructure-starved. (And, as Nicholas Shaxson and John Christensen show here in their written evidence to a UK Parliamentary Committee, even where infrastructure is built outside London, the financing of it sucks money away from the area where it is being built and towards finance centres, predominantly in London.) Yet there is also strong resistance from all the main parties to significant redistribution via the benefit system. The result is inequalities which even the FT feels moved to comment on, and a map of multiple deprivation in England which looks like this:

The good news is that it doesn’t have to be this way in the UK: there are prominent examples of countries operating in a different way, eg China. The bad news is that China is not doing it because of the economics. It is doing it because the state was set up to build big from the beginning; it is in its nature. The lesson of China is that it will keep doing the same things whatever the situation (eg trying to fix the population fall caused by one engineering solution with another engineering solution). Sometimes the world economy will reward that approach and sometimes it will punish it, but that will not be the primary driver of how China behaves. I think this may be true of the US, the EU states and the UK too.

Daniel Kahneman showed us in Thinking, Fast and Slow how most of our mental space is used to rationalise decisions we have already taken. One of the places where I part company with Wang is in his reverence for economists: he believes that the US should listen more to both engineers and economists to challenge the lawyerly society.

In the foreword to The Principles of Economics Course (1990) by Phillip Saunders and William Walstad, Paul Samuelson, who in 1970 became the first person from the US to win the Nobel Memorial Prize in Economic Sciences, wrote:

“Poets are the unacknowledged legislators of the World.” It was a poet who said that, exercising occupational license. Some sage, it may have been I, declared in similar vein: “I don’t care who writes a nation’s laws—or crafts its advanced treaties—if I can write its economic textbooks.” The first lick is the privileged one, impinging on the beginner’s tabula rasa at its most impressionable state.

My view would be that the economists are already in charge.

As a result, my fear is that economics is now used to rationalise decisions we have already made, in many countries including our own. We are going to do what we are going to do; the economics is just the fig leaf we use to justify what might otherwise appear unfair, cruel, divisive and hope-denying policies. The financial constraints are less binding than they are cracked up to be, but they are a convenient fiction for a government which otherwise lacks any guiding principles for spending and investment, and which therefore fears that, in their absence, everyone would just ask for more resources and there would be no way of deciding between them.

New (left) and old (right) Naiku shrines during the 60th sengu at Ise Jingu, 1973, via Bock 1974

In his excellent new book, Breakneck, Dan Wang tells the story of the high-speed rail links between San Francisco and Los Angeles and between Beijing and Shanghai, both of which got underway in 2008. Both routes would be around 800 miles long when finished. The Beijing–Shanghai line opened in 2011 at a cost of $36 billion. To date, California has built only a small stretch of its line, as yet nowhere near either Los Angeles or San Francisco, and the latest estimate of the completed bill is $128 billion. Wang uses this, amongst other examples, to draw a distinction between the engineering state of China “building big at breakneck speed” and the lawyerly society of the United States “blocking everything it can, good and bad”.

Europe doesn’t get much of a mention, other than to be described as a “mausoleum”, which sounds rather JD Vance, and there is quite a lot in this book that I disagree with strongly, to which I will return. However, there is also much to agree with, and never more so than when Wang talks about process knowledge.

Wang tells another story, of Ise Jingu in Japan. Every 20 years, exact copies of Naiku, Geku and 14 other shrines here are built on vacant adjacent sites, after which the old shrines are demolished. Altogether 65 buildings, bridges, fences and other structures are rebuilt this way. They were first built in 690; in 2033, they will be rebuilt for the 63rd time. The structures are built each time with the original 7th-century techniques, which involve no nails, just dowels and wood joints. Staff have a 200-year tree-planting plan to ensure enough cypress trees are planted to make the surrounding forest self-sufficient. The 20-year intervals between rebuildings are the length of a generation, the older passing on the techniques to the younger.

This is an extreme example of process knowledge, rather like the oral tradition of folk stories and songs, which were passed on by each generation as contemporary narratives until they were written down and fixed in time, after which they quickly appeared old-fashioned. What is being preserved is not the Trigger’s Broom of temples at Ise Jingu, but the practical knowledge of how to rebuild them as they were originally built.

Trigger’s Broom. Source: https://www.youtube.com/watch?v=BUl6PooveJE

Process knowledge is the know-how of your experienced workforce that cannot easily be written down. It can develop where such a workforce works closely with researchers and engineers, creating feedback loops which can also accelerate innovation. Wang contrasts Shenzhen in China, where such a community exists, with Silicon Valley, where it doesn’t, forcing the United States to have such technological wonders as the iPhone manufactured in China.

What happens when you don’t have process knowledge? Well, one example would be our nuclear industry, where lack of experience with pressurised water reactors has slowed down the development of new power stations and required us to rely considerably on French expertise. There are many other technical skill shortages.

China has recognised the supreme importance of process knowledge as compared to the American concern with intellectual property (IP). IP can of course be bought and sold as a commodity and owned as capital, whereas process knowledge tends to rest within a skilled workforce.

This may then be the path to resilience for the skilled workers of the future in the face of the AI-ification of their professions. Companies are currently being sold AI systems for many things, some of which will clearly not work with an acceptably low error rate, or without so much “human validation” (a lovely phrase used recently by a good friend of mine who is actively involved in integrating AI systems into his manufacturing processes) that they are not deemed practical. For early career workers entering these fields, the demonstration of appropriate process knowledge, or the ability to develop it very quickly, may be the key to surviving the AI roller coaster they face over the next few years: actionable skills and knowledge which allow them to manage such systems rather than being managed by them. To be a centaur rather than a reverse-centaur.

Not only will such skills make you less likely to lose your job to an AI system, they will also increase your value on the employment market: the harder these skills and knowledge are to acquire, the more valuable they are likely to be. But whereas in the past, in a more static market, merely passing your exams and learning to code might have been enough for, say, an actuarial student, the dynamic situation which sees everything that can be written down disappearing into prompts in some AI system will leave such roles unprotected.

Instead it will be the knowledge about how people are likely to respond to what you say in a meeting or write in an email or report, and the skill to strategise around those things, knowing what to do when the rules run out, when situations are genuinely novel, ie putting yourself in someone else’s shoes and being prepared to make judgements. It will be the knowledge about what matters in a body of data, putting the pieces together in meaningful ways, and the skills to make that obvious to your audience. It will be the knowledge about what makes everyone in your team tick and the skills to use that knowledge to motivate them to do their best work. It will ultimately be about maintaining independent thought: the knowledge of why you are where you are and the skill to recognise what you can do for the people around you.

These have not always been seen as entry-level skills and knowledge for graduates, but they are increasingly going to need to be, as the requirement grows to plug you in further up an organisation, if at all, while that organisation pursues its diamond strategy or something similar. And alongside all this you will need a continuing professional self-development programme on steroids: fully understanding the systems you are working with as quickly as possible, and then understanding them all over again when they get updated; demanding evidence and transparency; and maintaining appropriate uncertainty when certainty would be more comfortable for the people around you, so that you can manage these systems into the areas where they can actually add value and out of the areas where they can cause devastation. It will be more challenging than transmitting the knowledge to build a temple out of timber and thatch 20 years into the future, and it will be continuous. Think of it as the Trigger’s Broom Process of Career Management if you like.

These will be essential roles for our economic future: to save these organisations from both themselves and their very expensive systems. It will be both enthralling and rewarding for those up to the challenge.

Wallace & Gromit: Vengeance Most Fowl models on display in Bristol. This file is licensed under the Creative Commons Attribution-Share Alike 4.0 International license.

I have been watching Daniel Susskind’s lectures on AI and the future of work this week: Automation Anxiety was delivered in September and The Economics of Work and Technology earlier this week. The next in the series, entitled Economics and Artificial Intelligence, is scheduled for 13 January. They are all free and I highly recommend them for the great range of source material presented.

In my view the most telling graph, which featured in both lectures, was this one:

Original Source: Daniel Susskind A World Without Work

Susskind extended the usual measure of the ratio of average college and university graduate salaries to those of school leavers by splicing it onto the equivalent ratio of craftsmen’s to labourers’ wages, which gives us data back to 1220. There are two big collapses in this ratio in the data: one following the Black Death (1346–1353), which may have killed 50% of Europe’s 14th-century population, and one following the Industrial Revolution (a slow singularity which started around 1760 and then took us through the horrors of the First World War and the Great Depression before the graph finally picks up post Bretton Woods).

As Susskind shows, the profits from the Industrial Revolution were not going to workers:

Source: The Technology Trap, Carl Benedikt Frey

So how does the AI Rush compare? Well, Susskind shared another graph:

Source: David Autor Work of the Past, Work of the future

This, from 2019, introduced the idea that the picture is now more complex than high-skilled versus low-skilled workers: there is now a middle. And, as Autor has set out more recently, the middle is getting squeezed:

Key dynamics at play include:

  • Labor Share Decline: OECD data reveal a 3–5 percentage point drop in labor’s share of income in sectors most exposed to AI, a trend likely to accelerate as automation deepens.
  • Wage Polarization: The labor market is bifurcating. On one end, high-complexity “sense-making” roles; on the other, low-skill service jobs. The middle is squeezed, amplifying both political risk and regulatory scrutiny.
  • Productivity Paradox 2.0: Despite the promise of AI-driven efficiency, productivity gains remain elusive. The real challenge is not layering chatbots atop legacy processes, but re-architecting workflows from the ground up—a costly and complex endeavor.

For enterprise leaders, the implications are profound. AI is best understood not as a job destroyer, but as a “skill-lowering” platform. It enables internal labor arbitrage, shifting work toward judgment-intensive, context-rich tasks while automating the rest. The risk is not just technological—it is deeply human. Skill depreciation now sits alongside cyber and climate risk on the board agenda, demanding rigorous workforce-reskilling strategies and a keen eye on brand equity as a form of social license.

So, even if the overall number of jobs may not be reduced, the case being made is that the average skill level required to carry them out will be. As Susskind said, the Luddites may have been wrong about the spinning jenny replacing jobs, but it did replace and transform tasks, and its impact on workers was to reduce their pay, quality of work, status as craftsmen and economic power. This looks like the threat being made by employers once again, with real UK wages still only at the level they were at in 2008:

However, this is where I part company with Susskind’s presentation, which has an implicit inevitability to it. The message is that these are economic forces we can’t fight against. When he discusses whether the substituting force (where AI replaces you) or the complementing force (where AI helps you to be more productive and increases the demand for your work) will be greater, it is almost as if we have no part to play in this. There is some cognitive dissonance when he quotes Blake, Engels, Marx and Ruskin on the horrors of living through such times, but on the whole it is presented as just a natural historical process that the whole of the profits from the massive increases in productivity of the Industrial Revolution should have ended up in the pockets of the fat guys in waistcoats:

Richard Arkwright, Sir Robert Peel, John Wilkinson and Josiah Wedgwood

I was recently at Cragside in Northumberland, where the arms inventor and dealer William Armstrong used the immense amount of money he made from selling big guns (as well as big cranes and the hydraulic mechanism which powers Tower Bridge) to deck out his house and grounds with the five artificial lakes required to power the world’s first hydro-electric lighting system. His 300 staff ran around, like good reverse-centaurs, trying to keep his various inventions, from passenger lifts to an automated spit roast, from breaking down, so that he could impress his long list of guests and potential clients at Cragside, from the Shah of Persia to the King of Siam and two future Prime Ministers of Japan. He made sure they were kept running around with a series of clock chimes throughout the day:

However, with some poetic irony, the “estate regulator” is what has since brought the entire mechanism crashing to a halt:

Which brings me to Wallace and Gromit. Wallace is the inventor, heedless of the impact of his inventions on those around him, and especially on his closest friend Gromit, whom he regularly dumps when he becomes inconvenient to his plans. Gromit just tries to keep everything working.

Wallace is a cheese-eating monster who cannot be assessed purely on the basis of his inventions. And neither can Armstrong, Arkwright, Peel, Wilkinson or Wedgwood. We are in the process of allowing a similar domination of our affairs by our new monsters:

Meta CEO Mark Zuckerberg beside Amazon CEO Jeff Bezos and his fiancée (now wife) Lauren, Google CEO Sundar Pichai and Elon Musk at President Trump’s 2nd Inauguration.

Around half an hour into his second lecture, Daniel Susskind started talking about pies. This is the GDP pie (Susskind has also written a recent book on growth, Growth: A Reckoning, which argues that GDP growth can go on forever – my view would be closer to the critique here from Steve Keen) which, as Susskind says, increased by a factor of 113 in the UK between 1700 and 2000. But, as Steve Keen says:

The statistics strongly support Jevons’ perspective that energy—and specifically, energy from coal—caused rising living standards in the UK (see Figure 2). Coal, and not a hypothesised change in culture, propelled the rise in living standards that Susskind attributes to intangible ideas.

Source: https://www.themintmagazine.com/growth-some-inconvenient-truths/
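Whichever driver you prefer, coal or ideas, it is worth pausing on what a factor of 113 over three centuries actually implies. A quick back-of-the-envelope sketch (the 113× figure between 1700 and 2000 is Susskind's; the arithmetic below is mine):

```python
# Implied compound annual growth rate behind Susskind's figure:
# UK GDP grew by a factor of 113 between 1700 and 2000.
factor = 113
years = 2000 - 1700

# The annual growth rate g satisfies (1 + g) ** years == factor.
g = factor ** (1 / years) - 1
print(f"Implied annual growth: {g:.2%}")  # roughly 1.6% per year
```

In other words, the 113-fold pie is simply what compounding at under 2% a year does over 300 years: the drama is in the duration, not the rate.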

Susskind talks about the productivity effect, the bigger-pie effect and the changing-pie effect (ie changes to the types of work we do – think of the changes in the CPI basket of goods and services) as ways in which jobs are created by technological change. However, he has nothing to say about just giving less of the pie to the monsters. Instead, for Susskind, the AI Rush is all about clever people throwing 10 times as much money at AI as was directed at the Manhattan Project, and the heads of OpenAI, Anthropic and Google DeepMind stating that AI will replace humans in all economically useful tasks within 10 years, a claim which he says we should take seriously. Cory Doctorow, amongst others, disagrees. In his latest piece, When AI prophecy fails, he has this to say about why companies have reduced recruitment despite the underperformance of AI systems to date:

All this can feel improbable. Would bosses really fire workers on the promise of eventual AI replacements, leaving themselves with big bills for AI and falling revenues as the absence of those workers is felt?

The answer is a resounding yes. The AI industry has done such a good job of convincing bosses that AI can do their workers’ jobs that each boss for whom AI fails assumes that they’ve done something wrong. This is a familiar dynamic in con-jobs.

The Industrial Revolution had a distribution problem which gave birth to Chartism, Marxism, the Trades Union movement and the Labour Party in the UK alone. And all of that activity only very slowly chipped away at the wealth share of the top 10%:

Source: https://equalitytrust.org.uk/scale-economic-inequality-uk/

However the monsters of the Industrial Revolution did at least have solid proof that they could deliver what they promised. You don’t get more concrete a proof of concept than this, after all:

View on the Thames and the opening Tower Bridge, London, from the terraces at Wapping High Street, at sunset in July 2013, Bert Seghers. This file is made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication.

The AI Rush has a similar distribution problem, but it is also the first industrial revolution since the global finance industry decoupled from the global real economy. So the wealth share of the top 10% isn’t going back up fast enough? No problem. Just redistribute the money at the top even further up:

What the monsters of the AI Rush lack is anything tangible to support their increasingly ambitious assertions. Wallace may be full of shit. And the rest of us can all just play a Gromit-like support role until we find out one way or the other or concentrate on what builds resilient communities instead.

Whether you think the claims for the potential of AI are exaggerated; or that the giant bet on it that the US stock market has made will end in an enormous depression; or that the energy demands of this developing technology will be its constraining force ultimately; or that we are all just making the world a colder place by prioritising systems, however capable, over people: take your pick as a reason to push back against the AI Rush. But my bet would be on the next 10 years not being dominated by breathless commentary on the exploits of Tech Bros.

To Generation Z: a message of support from a Boomer

So you’ve worked your way through school and now university, developing the skills you were told would always be in high demand, credentialising yourself as a protection against the vagaries of the global economy. You may have serious doubts about ever being able to afford a house of your own, particularly if your area of work is very concentrated in London…

…and you resent the additional tax that your generation pays to support higher education:

Source: https://taxpolicy.org.uk/2023/09/24/70percent/

But you still had belief in being able to operate successfully within the graduate market.

A rational, functional graduate job market would assess your skills and competencies against the desired attributes of those currently performing the role, and make selections accordingly. That is a system which both companies and graduates can plan for.

It is very different from a Rush. The first phenomenon known as a Rush was the Californian Gold Rush of 1848–55, although the capitalist phenomenon of transforming an area to facilitate intensive production probably dates from sugar production in Madeira in the 15th century. There have been many Rushes since, all neatly described by this Punch cartoon from 1849:

A Rush is a big deal. The Californian Gold Rush resulted in the creation of California, now the 5th largest economy in the world. But when it comes to employment, a Rush is not like an orderly jobs market. As Carlo Iacono describes in an excellent article on the characteristics of the current AI Rush:

The railway mania of the 1840s bankrupted thousands of investors and destroyed hundreds of companies. It also left Britain with a national rail network that powered a century of industrial dominance. The fibre-optic boom of the late 1990s wiped out about $5 trillion in market value across the broader dot-com crash. It also wired the world for the internet age.

A Rush is a difficult and unpredictable place to build a career, with a lot riding on dumb luck as much as on any personal characteristics you might have. There is very little you can count on in a Rush. This one is even less predictable because, as Carlo also points out:

When the railway bubble burst in the 1840s, the steel tracks remained. When the fibre-optic bubble burst in 2001, the “dark fibre” buried in the ground was still there, ready to carry traffic for decades. These crashes were painful, but they left behind durable infrastructure that society could repurpose.

Whereas the 40–60% of US real GDP growth in the first half of 2025 explained by investment in AI infrastructure isn’t like that:

The core assets are GPUs with short economic half-lives: in practice, they’re depreciated over ~3–5 years, and architectures are turning over faster (Hopper to Blackwell in roughly two years). Data centres filled with current-generation chips aren’t valuable, salvageable infrastructure when the bubble bursts. They’re warehouses full of rapidly depreciating silicon.
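Iacono's point about short economic half-lives can be made concrete with a toy straight-line depreciation schedule. The ~3–5 year write-off period comes from the quote above; the $1bn cluster cost and the 4-year life below are made-up illustrative numbers, not anyone's actual accounts:

```python
# Toy straight-line depreciation of a hypothetical GPU cluster.
# Assumptions (illustration only): $1bn purchase cost, written off
# evenly over 4 years, within the ~3-5 year range quoted above.
cost = 1_000_000_000
useful_life_years = 4
annual_charge = cost / useful_life_years  # $250m hits the P&L each year

for year in range(1, useful_life_years + 1):
    book_value = cost - annual_charge * year
    print(f"End of year {year}: book value ${book_value / 1e9:.2f}bn")
```

Contrast that with railway track or dark fibre, which stayed useful for decades after the companies that paid for them had gone.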

So today’s graduates are certainly going to need resilience, but that is just what their future employers are demanding of them. They also need to build their own support structures, which are going to see them through the massive disruption that is coming whether or not the enormous bet on AI is successful. The battle to be centaurs rather than reverse-centaurs, as I set out in my last post (or as Carlo Iacono describes beautifully in his discussion of the legacy of the Luddites here), requires these alliances: to stop thinking of yourselves as being in competition with each other, and to start thinking of yourselves as being in competition for resources with my generation.

I remember when I first realised my generation (late Boomer, just before Generation X) was now making the weather. I had just sat the 304 Pensions and Other Benefits actuarial exam in London (now SP4 – unsuccessfully, as it turned out), and nipped in to a matinee of Sam Mendes’ American Beauty and watched the plastic bag scene. I was 37 at the time.

My feeling is that our generation, despite its increasingly strident efforts to hang on, is now deservedly losing power, and that making reverse-centaurs of your generation is its last-ditch attempt to remain in control. It is like the scene in another movie, Triangle of Sadness, where the elite are swept onto a desert island and expect the servant, who is the only one with survival skills in such an environment, to carry on being their servant.

Don’t fall for it. My advice to young professionals is pretty much the same as it was to actuarial students last year on the launch of chartered actuary status:

If you are planning to join a profession to make a positive difference in the world, and that is in my view the best reason to do so, then you are going to have to shake a few things up along the way.

Perhaps there is a type of business you think the world is crying out for but it doesn’t know it yet because it doesn’t exist. Start one.

Perhaps there is an obvious skill set to run alongside your professional one which most of your fellow professionals haven’t realised would turbo-charge the effectiveness of both. Acquire it.

Perhaps your company has a client whose shoes no one has taken the time to put themselves in, so as to communicate in a way the client will properly understand and value. Be that person.

Or perhaps there are existing businesses who are struggling to manage their way in changing markets and need someone who can make sense of the data which is telling them this. Be that person.

All while remaining grounded in whichever community you have chosen for yourself. Be the member of your organisation or community who makes it better by being there.

None of these are reverse centaur positions. Don’t settle for anything less. This is your time.

In 2017, I was rather excitedly reporting about ideas which were new to me at the time regarding how technology or, as Richard and Daniel Susskind referred to it in The Future of the Professions, “increasingly capable machines” were going to affect professional work. I concluded that piece as follows:

The actuarial profession and the higher education sector therefore need each other. We need to develop actuaries of the future coming into your firms to have:

  • great team working skills
  • highly developed presentation skills, both in writing and in speech
  • strong IT skills
  • clarity about why they are there and the desire to use their skills to solve problems

All within a system which is possible to regulate in a meaningful way. Developing such people for the actuarial profession will need to be a priority in the next few years.

While all of those things are clearly still needed, it is becoming increasingly clear to me now that they will not be enough to secure a job as industry leaders double down.

Source: https://www.ft.com/content/99b6acb7-a079-4f57-a7bd-8317c1fbb728

And perhaps even worse than the threat of not getting a job immediately following graduation is the threat of becoming a reverse-centaur. As Cory Doctorow explains the term:

A centaur is a human being who is assisted by a machine that does some onerous task (like transcribing 40 hours of podcasts). A reverse-centaur is a machine that is assisted by a human being, who is expected to work at the machine’s pace.

We have known about reverse-centaurs since at least Charlie Chaplin’s Modern Times in 1936.

By Charlie Chaplin – YouTube, Public Domain, https://commons.wikimedia.org/w/index.php?curid=68516472

Think Amazon driver or worker in a fulfilment centre, sure, but now also think of highly competitive, well-paid but still ultimately human-in-the-loop roles which carry responsibility for AI systems designed to produce output whose errors are hard to spot, and therefore hard to stop. In the latter kind of role you are the human scapegoat: in the phrasing of Dan Davies an “accountability sink”, or in that of Madeleine Clare Elish a “moral crumple zone”, all rolled into one. This is not where you want to be as an early career professional.

So how to avoid this outcome? Well, obviously, if you have alternatives to roles where a reverse-centaur situation is unavoidable, you should take them. Questions to ask at interview to identify whether a role is irretrievably reverse-centauresque would be of the following sort:

  1. How big a team would I be working in? (This might not identify a reverse-centaur role on its own: you might be one of a bank of reverse-centaurs all working in parallel and identified “as a team” while in reality having little interaction with each other).
  2. What would a typical day be in the role? This should smoke it out unless the smokescreen they put up obscures it. If you don’t understand the first answer, follow up to get specifics.
  3. Who would I report to? Get to meet them if possible. Establish whether they are a technical expert in the field you will be working in. If they aren’t, that means you are!
  4. Speak to someone who has previously held the role if possible. Although bear in mind that, if it is a true reverse-centaur role and their progress to an actual centaur role is contingent on you taking this one, they may not be completely forthcoming about all of the details.

If you have been successful in a highly competitive recruitment process, you may have a little bit of leverage before you sign the contract, so if there are aspects which you think still need clarifying, then that is the time to do so. If you recognise some reverse-centauresque elements from your questioning above, but you think the company may be amenable, then negotiate. Once you are in, you will understand a lot more about the nature of the role of course, but without threatening to leave (which is as damaging to you as an early career professional as it is to them) you may have limited negotiation options at that stage.

In order to do this successfully, self-knowledge will be key. It is that point from 2017:

  • clarity about why they are there and the desire to use their skills to solve problems

To that word skills I would now add “capabilities” in the sense used in a wonderful essay on this subject by Carlo Iacono called Teach Judgement, Not Prompts.

You still need the skills. So, for example, if you are going into roles where AI systems are producing code, you need sufficiently good coding skills of your own to write a program that checks the code written by the AI system. If the AI system is producing communications, your own communication skills need to go beyond producing work that communicates to an audience effectively, to the next level where you understand what it is about your own communication that achieves that: what is necessary, what is unnecessary and what gets in the way of effective communication, ie all of the things that the AI system is likely to get wrong. Then you have a template against which to assess the output from an AI system, and for designing better prompts.
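To make the code-checking point concrete, here is a minimal sketch (my own illustration, not any particular firm’s toolchain) of one way to check AI-generated code: compare it against a trusted reference implementation on a batch of random inputs. The `ai_generated_sort` function is a stand-in for whatever the AI system has produced; `sorted` plays the role of the reference you trust.

```python
import random

# Hypothetical stand-in for code produced by an AI system
# (here, a simple bubble sort).
def ai_generated_sort(items):
    result = list(items)
    for i in range(len(result)):
        for j in range(len(result) - 1 - i):
            if result[j] > result[j + 1]:
                result[j], result[j + 1] = result[j + 1], result[j]
    return result

def check_against_reference(candidate, reference, trials=200):
    """Compare a candidate function against a trusted reference
    on randomly generated inputs. Returns (True, None) if all
    trials agree, or (False, counterexample) on first mismatch."""
    for _ in range(trials):
        data = [random.randint(-1000, 1000)
                for _ in range(random.randint(0, 50))]
        if candidate(data) != reference(data):
            return False, data  # the AI code disagrees here
    return True, None

ok, counterexample = check_against_reference(ai_generated_sort, sorted)
print("AI code passed all trials" if ok else f"Failed on {counterexample}")
```

The point is not this particular harness but the habit: you only get to write the reference check, and to know what a counterexample means, if your own skills are at least good enough to specify the task independently of the AI system’s output.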

However specific skills and tools come and go, so you need to develop something more durable alongside them. Carlo has set out four “capabilities” as follows:

  1. Epistemic rigour, which is being very disciplined about challenging what we actually know in any given situation. You need to be able to spot when AI output is over-confident given the evidence, or when a correlation is presented as causation. What my tutors used to refer to as “hand waving”.
  2. Synthesis is about integrating different perspectives into an overall understanding. Making connections between seemingly unrelated areas is something AI systems are generally less good at than analysis.
  3. Judgement is knowing what to do in a new situation, beyond obvious precedent. You get to develop judgement by making decisions under uncertainty, receiving feedback, and refining your internal models.
  4. Cognitive sovereignty is all about maintaining your independence of thought when considering AI-generated content. Knowing when to accept AI outputs and when not to.

All of these capabilities can be developed with reflective practice, getting feedback and refining your approach. As Carlo says:

These capabilities don’t just help someone work with AI. They make someone worth augmenting in the first place.

In other words, if you can demonstrate these capabilities, companies which are themselves dealing with huge uncertainty about how much value they are getting from their AI systems, and about what those systems can safely be used for, will find you an attractive and reassuring hire. Then you will be the centaur, using the increasingly capable systems to improve your own and their productivity while remaining in overall control of the process, rather than a reverse-centaur, for whom none of that is true.

One sure sign that you are straying into reverse-centaur territory is when a disproportionate amount of your time is spent on pattern recognition (eg basing an email, piece of coding or valuation report on an earlier one dealing with a similar problem). That approach was always predicated on being able to interact with a more experienced human who understood what was involved in the task at some peer review stage. But it falls apart when there is no human to discuss the earlier piece of work with, because the human no longer works there or because a human didn’t produce the earlier piece of work. The fake-it-until-you-make-it approach is not going to work in environments like these, where you are more likely to fake it until you break it. And pattern recognition is something an AI system will always be able to do much better and faster than you.

Instead, question everything using the capabilities you have developed. If you are going to be put into potentially compromising situations in terms of the responsibilities you are implicitly taking on, the decisions needing to be made and the limitations of the available knowledge and assumptions on which those decisions will need to be based, then this needs to be made explicit, to yourself and the people you are working with. Clarity will help the company which is trying to use these new tools in a responsible way as much as it helps you. Learning is going to be happening for them as much as it is for you here in this new landscape.

And if the company doesn’t want to have these discussions or allow you to hamper the “efficiency” of their processes by trying to regulate them effectively? Then you should leave as soon as you possibly can professionally and certainly before you become their moral crumple zone. No job is worth the loss of your professional reputation at the start of your career – these are the risks companies used to protect their senior people of the future from, and companies that are not doing this are clearly not thinking about the future at all. Which is likely to mean that they won’t have one.

To return to Cory Doctorow:

Science fiction’s superpower isn’t thinking up new technologies – it’s thinking up new social arrangements for technology. What the gadget does is nowhere near as important as who the gadget does it for and who it does it to.

You are going to have to be the generation who works these things out first for these new AI tools. And you will be reshaping the industrial landscape for future generations by doing so.

And the job of the university and further education sectors will increasingly be to equip you with both the skills and the capabilities to manage this process, whatever your course title.

Source: https://pluspng.com/img-png/mixed-economy-png–901.png

Just type “mixed economy graphic” into Google and you will get a lot of diagrams like this one – note that they normally have to pick out the United States for special mention. Notice the big gap between those countries – North Korea, Cuba, China and Russia – and us. It is a political statement masquerading as an economic one.

This same line is used to describe our political options. The Political Compass added an authoritarian/libertarian axis in their 2024 election manifesto analysis but the line from left to right (described as the economic scale) is still there:

Source: https://www.politicalcompass.org/uk2024

So here we are on our political and economic spectrum, where tiny movements between the very clustered Reform, Conservative, Labour and Liberal Democrat positions fill our newspapers and social media comment. The Greens and, presumably if it ever gets off the ground, Your Party are seen as so far away from the cluster that they often get left out of our political discourse. It is an incredibly narrow perspective and we wonder why we are stuck on so many major societal problems.

This is where we have ended up following the “slow singularity” of the Industrial Revolution I talked about in my last post. Our politics coalesced into one gymnasts’ beam, supported by the hastily constructed Late Modern English fashioned for this purpose in the 1800s, along which we have all been dancing ever since, between the market information processors at the “right” end and the bureaucratic information processors at the “left” end.

So what does it mean for this arrangement if we suddenly introduce another axis of information processing, ie the large language AI models? I am imagining something like this:

What will this mean for how countries see their economic organisation? What will it mean for our politics?

In 1884, the English theologian, Anglican priest and schoolmaster Edwin Abbott Abbott published a satirical science fiction novella called Flatland: A Romance of Many Dimensions. Abbott’s satire was about the rigidity of Victorian society, depicted as a two-dimensional world inhabited by geometric figures: women are line segments, while men are polygons with various numbers of sides. We are told the story from the viewpoint of a square, which denotes a gentleman or professional. In this world three-dimensional shapes are clearly incomprehensible, with every attempt to introduce new ideas from this extra dimension considered dangerous. Flatland is not prepared to receive “revelations from another world”, as it describes anything existing in the third dimension, which is invisible to them.

The book was not particularly well received and fell into obscurity until it was embraced by mathematicians and physicists in the early 20th century as the concept of spacetime was being developed by Poincaré, Einstein and Minkowski amongst others. And what now looks like a prophetic analysis of the limitations of the gymnasts’ beam economic and political model of the slow singularity has continued not to catch on at all.

However, much as with Brewster’s Millions, the incidence of film adaptations of Flatland gives some indication of when it has come back as an idea to some extent. It wasn’t until 1965 that someone thought it was a good idea to make a movie of Flatland, and no one else attempted it until an Italian stop-motion film in 1982. There were then two attempts in 2007, which I can’t help but think of as a comment on the developing financial crisis at the time, and, in 2012, an adaptation of Bolland: een roman van gekromde ruimten en uitdijend heelal (which translates as Sphereland: A Fantasy About Curved Spaces and an Expanding Universe), Dionys Burger’s 1957 Dutch sequel to Flatland, which didn’t get translated into English until 1965, the year the first animated film came out.

So here we are, with a new approach to processing information and language to sit alongside the established processors of the last 200 years or more. Will it perhaps finally be time to abandon Flatland? And if we do, will it solve any of our problems or just create new ones?

In 2017 I posted an article about how the future for actuaries was starting to look, with particular reference to a Society of Actuaries paper by Dodzi Attimu and Bryon Robidoux, which has since been moved to here.

I summarised their paper as follows at the time:

Focusing on…a paper produced by Dodzi Attimu and Bryon Robidoux for the Society of Actuaries in July 2016 explored the theme of robo actuaries, by which they meant software that can perform the role of an actuary. They went on to elaborate as follows:

Though many actuaries would agree certain tasks can and should be automated, we are talking about more than that here. We mean a software system that can more or less autonomously perform the following activities: develop products, set assumptions, build models based on product and general risk specifications, develop and recommend investment and hedging strategies, generate memos to senior management, etc.

They then went on to define a robo actuarial analyst as:

A system that has limited cognitive abilities but can undertake specialized activities, e.g. perform the heavy lifting in model building (once the specification/configuration is created), perform portfolio optimization, generate reports including narratives (e.g. memos) based on data analysis, etc. When it comes to introducing AI to the actuarial profession, we believe the robo actuarial analyst would constitute the first wave and the robo actuary the second wave.

They estimate that the first wave is 5 to 10 years away and the second 15 to 20 years away. We have been warned.

So 9 years on from their paper, how are things looking? Well the robo actuarial analyst wave certainly seems to be pretty much here, particularly now that large language models like ChatGPT are being increasingly used to generate reports. It suddenly looks a lot less fanciful to assume that the full robo actuary is less than 11 years away.

But now the debate on AI appears to be shifting to an argument between whether we are heading for Vernor Vinge’s “Singularity” where the increasingly capable systems

would not be humankind’s “tool” — any more than humans are the tools of rabbits or robins or chimpanzees

on the one hand, and, on the other, the idea that “it is going to take a long time for us to really use AI properly…, because of how hard it is to regear processes and organizations around new tech”.

In his article on Understanding AI as a social technology, Henry Farrell suggests that neither of these positions allow a proper understanding of the impact AI is likely to have, instead proposing the really interesting idea that we are already part way through a “slow singularity”, which began with the industrial revolution. As he puts it:

Under this understanding, great technological changes and great social changes are inseparable from each other. The reason why implementing normal technology is so slow is that it requires sometimes profound social and economic transformations, and involves enormous political struggle over which kinds of transformation ought happen, which ought not, and to whose benefit.

This chimes with what I was saying recently about AI possibly not being the best place to look for the next industrial revolution. Farrell plausibly describes the current period using the words of Herbert Simon. As Farrell says: “Human beings have quite limited internal ability to process information, and confront an unpredictable and complex world. Hence, they rely on a variety of external arrangements that do much of their information processing for them.” Simon says of markets, for instance, that they:

appear to conserve information and calculation by assigning decisions to actors who can make them on the basis of information that is available to them locally – that is, without knowing much about the rest of the economy apart from the prices and properties of the goods they are purchasing and the costs of the goods they are producing.

And bureaucracies and business organisations, similarly:

like markets, are vast distributed computers whose decision processes are substantially decentralized. … [although none] of the theories of optimality in resource allocation that are provable for ideal competitive markets can be proved for hierarchy, … this does not mean that real organizations operate inefficiently as compared to real markets. … Uncertainty often persuades social systems to use hierarchy rather than markets in making decisions.

Large language models by this analysis are then just another form of complex information processing, “likely to reshape the ways in which human beings construct shared knowledge and act upon it, with their own particular advantages and disadvantages. However, they act on different kinds of knowledge than markets and hierarchies”. As an Economist article Farrell co-wrote with Cosma Shalizi says:

We now have a technology that does for written and pictured culture what large-scale markets do for the economy, what large-scale bureaucracy does for society, and perhaps even comparable with what print once did for language. What happens next?

Some suggestions follow and I strongly recommend you read the whole thing. However, if we return to what I and others were saying in 2016 and 2017, it may be that we were asking the wrong question. Perhaps the big changes of behaviour required of us to operate as economic beings have already happened (the start of the “slow singularity” of the industrial revolution), and the removal of alternatives that required us to spend increasing proportions of our time within and interacting with bureaucracies and other large organisations was the logical appendage to that process. These processes are merely becoming more advanced rather than changing fundamentally in form.

And the third part, ie language? What started with the emergence of Late Modern English in the 1800s looks like it is now being accelerated via a new way of complex information processing applied to written, pictured (and I would say also heard) culture.

So the future then becomes something not driven by technology, but by our decisions about which processes we want to allow or even encourage and which we don’t, whether those are market processes, organisational processes or large language processes. We don’t have to have robo actuaries or even robo actuarial analysts, but we do have to make some decisions.

And students entering this arena need to prepare themselves to be participants in those decisions rather than just victims of them. A subject I will be returning to.

Title page vignette of Hard Times by Charles Dickens. Thomas Gradgrind Apprehends His Children Louisa and Tom at the Circus, 1870

It was Fredric Jameson (according to Owen Hatherley in the New Statesman) who first said:

“It seems to be easier for us today to imagine the thoroughgoing deterioration of the earth and of nature than the breakdown of late capitalism.”

I was reminded of this by my reading this week.

It all started when I began watching Shifty, Adam Curtis’ latest set of films on iPlayer aiming to convey a sense of shifting power structures and where they might lead. Alongside the startling revelation that The Land of Make Believe by Bucks Fizz was written as an anti-Thatcher protest song, there was a short clip of Eric Hobsbawm talking about all of the words which needed to be invented in the late 18th century and early 19th to allow people to discuss the rise of capitalism and its implications. So I picked up a copy of his The Age of Revolution 1789-1848 to look into this a little further.

The first chapter of Hobsbawm’s introduction from 1962, the year of my birth, expanded on the list:

Words are witnesses which often speak louder than documents. Let us consider a few English words which were invented, or gained their modern meanings, substantially in the period of sixty years with which this volume deals. They are such words as ‘industry’, ‘industrialist’, ‘factory’, ‘middle class’, ‘working class’, ‘capitalism’ and ‘socialism’. They include ‘aristocracy’ as well as ‘railway’, ‘liberal’ and ‘conservative’ as political terms, ‘nationality’, ‘scientist’ and ‘engineer’, ‘proletariat’ and (economic) ‘crisis’. ‘Utilitarian’ and ‘statistics’, ‘sociology’ and several other names of modern sciences, ‘journalism’ and ‘ideology’, are all coinages or adaptations of this period. So is ‘strike’ and ‘pauperism’.

What is striking about these words is how they still frame most of our economic and political discussions. The term “middle class” originated in 1812. No one referred to an “industrial revolution” until English and French socialists did in the 1820s, despite what it described having been in progress since at least the 1780s.

Today the founder of the World Economic Forum has coined the phrase “Fourth Industrial Revolution” or 4IR or Industry 4.0 for those who prefer something snappier. Its blurb is positively messianic:

The Fourth Industrial Revolution represents a fundamental change in the way we live, work and relate to one another. It is a new chapter in human development, enabled by extraordinary technology advances commensurate with those of the first, second and third industrial revolutions. These advances are merging the physical, digital and biological worlds in ways that create both huge promise and potential peril. The speed, breadth and depth of this revolution is forcing us to rethink how countries develop, how organisations create value and even what it means to be human. The Fourth Industrial Revolution is about more than just technology-driven change; it is an opportunity to help everyone, including leaders, policy-makers and people from all income groups and nations, to harness converging technologies in order to create an inclusive, human-centred future. The real opportunity is to look beyond technology, and find ways to give the greatest number of people the ability to positively impact their families, organisations and communities.

Note that, despite the slight concession in the last couple of sentences that an industrial revolution is about more than technology-driven change, they are clear that the technology is the main thing. It is also confused: is the future they see one in which “technology advances merge the physical, digital and biological worlds” to such an extent that we have “to rethink” what it “means to be human”? Or are we creating an “inclusive, human-centred future”?

Hobsbawm describes why utilitarianism (“the greatest happiness of the greatest number”) never really took off amongst the newly created middle class, who rejected Hobbes in favour of Locke because “he at least put private property beyond the range of interference and attack as the most basic of ‘natural rights’”, whereas Hobbes would have seen it as just another form of utility. This then led to this natural order of property ownership being woven into the reassuring (for property owners) political economy of Adam Smith and the natural social order arising from “sovereign individuals of a certain psychological constitution pursuing their self-interest in competition with one another”. This was of course the underpinning theory of capitalism.

Hobsbawm then describes the society of Britain in the 1840s in the following terms:

A pietistic protestantism, rigid, self-righteous, unintellectual, obsessed with puritan morality to the point where hypocrisy was its automatic companion, dominated this desolate epoch.

In 1851 access to the professions in Britain was extremely limited, requiring long years of education, the means to support oneself through them, and opportunities to do so which were rare. There were 16,000 lawyers (not counting judges) but only 1,700 law students. There were 17,000 physicians and surgeons and 3,500 medical students and assistants. The UK population in 1851 was around 27 million. Compare these numbers to the relatively tiny actuarial profession in the UK today, with around 19,000 members overall.

The only real opening to the professions for many was therefore teaching. In Britain “76,000 men and women in 1851 described themselves as schoolmasters/mistresses or general teachers, not to mention the 20,000 or so governesses, the well-known last resource of penniless educated girls unable or unwilling to earn their living in less respectable ways”.

Admittedly most professions were only just establishing themselves in the 1840s. My own, despite actuarial activity getting off the ground in earnest with Edmund Halley’s demonstration of how the terms of the English Government’s life annuities issue of 1692 were more generous than it realised, did not form the Institute of Actuaries (now part of the Institute and Faculty of Actuaries) until 1848. The Pharmaceutical Society of Great Britain (now the Royal Pharmaceutical Society) was formed in 1841. The Royal College of Veterinary Surgeons was established by royal charter in 1844. The Royal Institute of British Architects (RIBA) was founded in 1834. The Society of Telegraph Engineers, later the Institution of Electrical Engineers (now part of the Institution of Engineering and Technology), was formed in 1871. The Edinburgh Society of Accountants and the Glasgow Institute of Accountants and Actuaries were granted royal charters in the mid-1850s, before England’s various accounting institutes merged into the Institute of Chartered Accountants in England and Wales in 1880.

However “for every man who moved up into the business classes, a greater number necessarily moved down. In the second place economic independence required technical qualifications, attitudes of mind, or financial resources (however modest) which were simply not in the possession of most men and women.” As Hobsbawm goes on to say, it was a system which:

…trod the unvirtuous, the weak, the sinful (i.e. those who neither made money nor controlled their emotional or financial expenditures) into the mud where they so plainly belonged, deserving at best only of their betters’ charity. There was some capitalist economic sense in this. Small entrepreneurs had to plough back much of their profits into the business if they were to become big entrepreneurs. The masses of new proletarians had to be broken into the industrial rhythm of labour by the most draconic labour discipline, or left to rot if they would not accept it. And yet even today the heart contracts at the sight of the landscape constructed by that generation.

This was the landscape upon which the professions alongside much else of our modern world were constructed. The industrial revolution is often presented in a way that suggests that technical innovations were its main driver, but Hobsbawm shows us that this was not so. As he says:

Fortunately few intellectual refinements were necessary to make the Industrial Revolution. Its technical inventions were exceedingly modest, and in no way beyond the scope of intelligent artisans experimenting in their workshops, or of the constructive capacities of carpenters, millwrights and locksmiths: the flying shuttle, the spinning jenny, the mule. Even its scientifically most sophisticated machine, James Watt’s rotary steam-engine (1784), required no more physics than had been available for the best part of a century—the proper theory of steam engines was only developed ex post facto by the Frenchman Carnot in the 1820s—and could build on several generations of practical employment for steam engines, mostly in mines.

What it did require though was the obliteration of alternatives for the vast majority of people to “the industrial rhythm of labour” and a radical reinvention of the language.

These are not easy things to accomplish, which is why we cannot easily imagine the breakdown of late capitalism. However, if we focus on AI etc as the drivers of the next industrial revolution, we will probably be missing where the action really is.

I have just been reading Adrian Tchaikovsky’s Service Model. I am sure I will think about it often for years to come.

Imagine a world where “Everything was piles. Piles of bricks and shattered lumps of concrete and twisted rods of rebar. Enough fine-ground fragments of glass to make a whole razory beach. Shards of fragmented plastic like tiny blunted knives. A pall of ashen dust. And, to this very throne of entropy, someone had brought more junk.”

This is Earth outside a few remaining enclaves. And all served by robots, millions of robots.

Robots: like our protagonist (although he would firmly resist such a designation) Uncharles, who has been programmed to be a valet, or gentleman’s gentlerobot; or librarians tasked with preserving as much data from destruction or unauthorised editing as possible; or robots preventing truancy from the Conservation Farm Project where some of the few remaining humans are conscripted to reenact human life before robots; or the fix-it robots; or the warrior robots prosecuting endless wars.

Uncharles, after slitting the throat of his human master for no reason that he can discern, travels this landscape with his hard-to-define and impossible-to-shut-up companion The Wonk, who is very good at getting into places but often not so good at extracting herself. Until they finally arrive in God’s waiting room and take a number.

Along the way The Wonk attempts to get Uncharles to accept that he has been infected with a Protagonist Virus, which has given Uncharles free will. And Uncharles finds his prognosis routines increasingly unhelpful to him as he struggles to square the world he is perambulating with the internal model of it he carries inside him.

The questions that bounce back between our two unauthorised heroes are many and various, but revolve around:

  1. Is there meaning beyond completing your task list or fulfilling the function for which you were programmed?
  2. What is the purpose of a gentleman’s gentlerobot when there are no gentlemen left?
  3. Is the appearance of emotion in some of Uncharles’ actions and communications really just an increasingly desperate attempt to reduce inefficient levels of processing time? Or is the Protagonist Virus an actual thing?

Ultimately the question is: what is it all for? And when they finally arrive in front of God, the question is thrown back at us, the pile of dead humans rotting across the landscape of all our trash.

This got me thinking about a few things in a different way. One of these was AI.

Suppose AI is half as useful as OpenAI and others are telling us it will be. Suppose that we can do all of these tasks in less than half the time. How is all of that extra time going to be distributed? In 1930 Keynes speculated that his grandchildren would only need to work a 15-hour week. And all of the productivity improvements he assumed in doing so have happened. Yet still full-time work remains the aspiration.

There certainly seems to have been a change of attitude from around 1980 onwards, with those who could choose choosing to work longer, for various reasons which economists are still arguing about, and therefore the hours lost were from those who couldn’t choose, as the Resolution Foundation has pointed out. Unfortunately neither their pay nor their quality of work has increased sufficiently for those hours to meet their needs.

So, rather than asking where the hours have gone, it probably makes more sense to ask where the money has gone. And I think we all know the answer to that one.

When Uncharles and The Wonk finally get in to see God, God gives an example of a seat designed to stop vagrants sleeping on it as the indication it needed of the kind of society humans wanted. One where the rich wanted not to have to see or think about the poor. Replacing all human contact with eternally indefatigable and keen-to-serve robots was the world that resulted.

Look at us clever humans, constantly dreaming of ways to increase our efficiency, remove inefficient human interaction, or indeed any interaction which cannot be predicted in advance. Uncharles’ seemingly emotional responses, when he rises above the sea of task-queue-clutching robots all around him, are to what he sees as inefficiency. But what should be the goal? Increasing GDP can’t be it, that is just another means. We are currently working extremely hard and using a huge proportion of news and political affairs airtime and focus on turning the English Channel into the seaborne equivalent of the seat where vagrants and/or migrants cannot rest.

So what should be the goal? Because the reason Service Model will stay with me for some time to come is that it shows us what happens if we don’t have one. The means take over. It seems appropriate to leave the last word to a robot.

“Justice is a human-made thing that means what humans wish it to mean and does not exist at all if humans do not make it,” Uncharles says at one point. “I suggest that ‘kind and ordered’ is a better goal.”

Last time I suggested that the changes to graduate recruitment patterns, due at least in part to technological change, appeared to be to the disadvantage of current graduates, both in terms of number of vacancies and in what they were being asked to do.

This immediately reminds me of the old Woody Allen joke from the opening monologue to Annie Hall:

Two elderly women are at a Catskills mountain resort, and one of ’em says: “Boy, the food at this place is really terrible.” The other one says, “Yeah, I know, and such … small portions.”

This would clearly be an uncomfortable position for Corporate Britain if it were accepted, so pushback is to be expected. The drop in graduate vacancies is hard to challenge, so the next candidate is obviously the candidates themselves.

So hot on the heels of “Kids today need more discipline”, “Nobody wants to work”, “Students today aren’t prepared for college”, “Kids today are lazy”, “We are raising a generation of wimps” and “Kids today have too much freedom” (I refer you to Paul Fairie’s excellent collections of newspaper reports through history detailing these findings at regular intervals), we now have the FT, newspaper of choice for Corporate Britain, weighing in on “The Troubling Decline in Conscientiousness”, this time backed up by a whole series of graphs.

John Burn-Murdoch does a lot of great data work on a huge array of subjects, which I have referred to often, but I find the quoted studies problematic for a number of reasons. First of all, there is the suspicion that young people have already been found guilty, with the evidence then sought to back this up. For instance, which came first here: the “factors at work” or the “shifts”?

While a full explanation of these shifts requires thorough investigation, and there will be many factors at work, smartphones and streaming services seem likely culprits.

At one point John feels compelled to say:

While the terminology of personality can feel vague, the science is solid.

At which point he links to this study, defending the five-factor model of personality as a “biologically based human universal” which terrifies me a little. Now of course there are always studies pointing in lots of different directions for any piece of social science research and this is no exception. In this critique of the five-factor model (FFM), for instance, we find that:

While the two largest factors (Anxiety/Neuroticism and Extraversion) appear to have been universally accepted (e.g., in the pioneering factor-analytic work of R. B. Cattell, H. J. Eysenck, J. P. Guilford, and A. L. Comrey), the present critique suggests, nevertheless, that the FFM provides a less than optimal account of human personality structure.

I first saw the FT article via a post on LinkedIn, where there was one mild pushback sitting alone amongst crowds of pile-ons from people of my generation. After all, it feels right, doesn’t it? But Chris Wagstaff, Senior Visiting Fellow at Bayes Business School, was spot on, I feel, when he pointed out four potential behavioural biases at play here within the organisations where these young people are working:

  1. The decline in conscientiousness and some of the other traits identified could be a consequence of more senior colleagues not inviting or taking on board constructive challenge from younger colleagues, the calamity of conformity, i.e. groupthink, so demotivating the latter.
  2. Related to this is the tendency for many organisations to get their employees to live and breathe an often meaningless set of values and adhere to a blinkered way of doing things. Again, hugely frustrating and demotivating.
  3. Or perhaps we’re seeing way too many meetings being populated by way too many participants, meaning social loafing (i.e. when individual performance isn’t visible, they simply hide behind others) is on the increase.
  4. Finally, remuneration structures might discourage entrepreneurial thinking and an element of risk taking (younger folk are less risk averse than older folk). Again, very demotivating.

These sound like much more convincing “factors at play” to me than smartphones or streaming services, neither of which, of course, is the preserve of the young. But demonising the young is an essential prelude to feeling better about denying them work or forcing them into some kind of reverse centaur position.

Corporate Britain needs to do better than pseudo-scientific victim blaming. There are real issues here around the next generation’s relationship with work, and with much else, which need to be met head-on. Your future pension income may depend upon it.