Source: https://pluspng.com/img-png/mixed-economy-png–901.png

Just type “mixed economy graphic” into Google and you will get a lot of diagrams like this one – note that they normally have to pick out the United States for special mention. Notice the big gap between those countries – North Korea, Cuba, China and Russia – and us. It is a political statement masquerading as an economic one.

This same line is used to describe our political options. The Political Compass added an authoritarian/libertarian axis in their 2024 election manifesto analysis, but the line from left to right (described as the economic scale) is still there:

Source: https://www.politicalcompass.org/uk2024

So here we are on our political and economic spectrum, where tiny movements between the very clustered Reform, Conservative, Labour and Liberal Democrat positions fill our newspapers and social media comment. The Greens and, presumably if it ever gets off the ground, Your Party are seen as so far away from the cluster that they often get left out of our political discourse. It is an incredibly narrow perspective and we wonder why we are stuck on so many major societal problems.

This is where we have ended up following the “slow singularity” of the Industrial Revolution I talked about in my last post. Our politics coalesced into one gymnasts’ beam, supported by the hastily constructed Late Modern English fashioned for this purpose in the 1800s, along which we have all been dancing ever since, between the market information processors at the “right” end and the bureaucratic information processors at the “left” end.

So what does it mean for this arrangement if we suddenly introduce another axis of information processing, i.e. the large language AI models? I am imagining something like this:
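As a purely illustrative aside (nothing like this appears in the original post, which used an image), here is a minimal sketch of how such a picture might be drawn: the familiar bureaucracy-to-market line as the horizontal axis, with a hypothetical second axis for large language model information processing added at right angles. All of the country positions are made-up placeholders, not data.

```python
# Illustrative only: the familiar one-dimensional "bureaucracy <-> market"
# economic line, with a hypothetical second axis for large-language-model
# information processing added at right angles. All coordinates are
# invented placeholders, not data.
import matplotlib.pyplot as plt

# Hypothetical positions on the traditional economic line
# (0 = fully bureaucratic/state, 1 = fully market).
economic_axis = {
    "North Korea": 0.05,
    "Cuba": 0.15,
    "China": 0.35,
    "Russia": 0.45,
    "UK": 0.75,
    "United States": 0.90,
}

fig, ax = plt.subplots(figsize=(7, 5))

# Plot every country on the old line (vertical position 0)...
for name, x in economic_axis.items():
    ax.scatter(x, 0.0, color="tab:blue")
    ax.annotate(name, (x, 0.0), textcoords="offset points",
                xytext=(0, 8), ha="center", fontsize=8)

# ...and add the new, so far unexplored, vertical axis.
ax.axhline(0.0, color="grey", linewidth=1)
ax.axvline(0.5, color="tab:orange", linestyle="--", linewidth=1)
ax.annotate("degree of large language model information processing?",
            (0.5, 0.55), rotation=90, va="top", ha="right",
            color="tab:orange", fontsize=9)

ax.set_xlabel("bureaucratic information processors  <-->  market information processors")
ax.set_yticks([])
ax.set_ylim(-0.1, 0.6)
ax.set_title("Adding a second axis of information processing (illustrative)")
plt.tight_layout()
plt.show()
```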

What will this mean for how countries see their economic organisation? What will it mean for our politics?

In 1884, the English theologian, Anglican priest and schoolmaster Edwin Abbott Abbott published a satirical science fiction novella called Flatland: A Romance of Many Dimensions. Abbott’s satire was about the rigidity of Victorian society, depicted as a two-dimensional world inhabited by geometric figures: women are line segments, while men are polygons with various numbers of sides. We are told the story from the viewpoint of a square, which denotes a gentleman or professional. In this world three-dimensional shapes are clearly incomprehensible, with every attempt to introduce new ideas from this extra dimension considered dangerous. Flatland is not prepared to receive “revelations from another world”, as it describes anything existing in the third dimension, which is invisible to them.

The book was not particularly well received and fell into obscurity until it was embraced by mathematicians and physicists in the early 20th century, as the concept of spacetime was being developed by Poincaré, Einstein and Minkowski amongst others. And yet what now looks like a prophetic analysis of the limitations of the gymnasts’ beam economic and political model of the slow singularity has continued not to catch on at all.

However, much as with Brewster’s Millions, the incidence of film adaptations of Flatland gives some indication of when it has come back as an idea to some extent. It tells us that it wasn’t until 1965 that someone thought it was a good idea to make a movie of Flatland, and then no one else attempted it until an Italian stop-motion film in 1982. There were then two attempts in 2007, which I can’t help but think of as a comment on the developing financial crisis at the time, and, in 2012, a sequel based on Bolland: een roman van gekromde ruimten en uitdijend heelal (which translates as Sphereland: A Fantasy About Curved Spaces and an Expanding Universe), Dionys Burger’s 1957 Dutch sequel to Flatland, which didn’t get translated into English until 1965, the year the first animated film came out.

So here we are, with a new approach to processing information and language to sit alongside the established processors of the last 200 years or more. Will it perhaps finally be time to abandon Flatland? And if we do, will it solve any of our problems or just create new ones?

In 2017 I posted an article about how the future for actuaries was starting to look, with particular reference to a Society of Actuaries paper by Dodzi Attimu and Bryon Robidoux, which has since been moved to here.

I summarised their paper as follows at the time:

Focusing on…a paper produced by Dodzi Attimu and Bryon Robidoux for the Society of Actuaries in July 2016 explored the theme of robo actuaries, by which they meant software that can perform the role of an actuary. They went on to elaborate as follows:

Though many actuaries would agree certain tasks can and should be automated, we are talking about more than that here. We mean a software system that can more or less autonomously perform the following activities: develop products, set assumptions, build models based on product and general risk specifications, develop and recommend investment and hedging strategies, generate memos to senior management, etc.

They then went on to define a robo actuarial analyst as:

A system that has limited cognitive abilities but can undertake specialized activities, e.g. perform the heavy lifting in model building (once the specification/configuration is created), perform portfolio optimization, generate reports including narratives (e.g. memos) based on data analysis, etc. When it comes to introducing AI to the actuarial profession, we believe the robo actuarial analyst would constitute the first wave and the robo actuary the second wave.

They estimate that the first wave is 5 to 10 years away and the second 15 to 20 years away. We have been warned.

So 9 years on from their paper, how are things looking? Well, the robo actuarial analyst wave certainly seems to be pretty much here, particularly now that large language models like ChatGPT are increasingly being used to generate reports. It suddenly looks a lot less fanciful to assume that the full robo actuary is less than 11 years away.

But now the debate on AI appears to be shifting to an argument over whether we are heading for Vernor Vinge’s “Singularity”, where the increasingly capable systems

would not be humankind’s “tool” — any more than humans are the tools of rabbits or robins or chimpanzees

on the one hand, and, on the other, the idea that “it is going to take a long time for us to really use AI properly…, because of how hard it is to regear processes and organizations around new tech”.

In his article on Understanding AI as a social technology, Henry Farrell suggests that neither of these positions allows a proper understanding of the impact AI is likely to have, instead proposing the really interesting idea that we are already part way through a “slow singularity”, which began with the industrial revolution. As he puts it:

Under this understanding, great technological changes and great social changes are inseparable from each other. The reason why implementing normal technology is so slow is that it requires sometimes profound social and economic transformations, and involves enormous political struggle over which kinds of transformation ought happen, which ought not, and to whose benefit.

This chimes with what I was saying recently about AI possibly not being the best place to look for the next industrial revolution. Farrell plausibly describes the current period using the words of Herbert Simon. As Farrell says: “Human beings have quite limited internal ability to process information, and confront an unpredictable and complex world. Hence, they rely on a variety of external arrangements that do much of their information processing for them.” Simon says of markets, for instance, that they:

appear to conserve information and calculation by assigning decisions to actors who can make them on the basis of information that is available to them locally – that is, without knowing much about the rest of the economy apart from the prices and properties of the goods they are purchasing and the costs of the goods they are producing.

And bureaucracies and business organisations, similarly:

like markets, are vast distributed computers whose decision processes are substantially decentralized. … [although none] of the theories of optimality in resource allocation that are provable for ideal competitive markets can be proved for hierarchy, … this does not mean that real organizations operate inefficiently as compared to real markets. … Uncertainty often persuades social systems to use hierarchy rather than markets in making decisions.

Large language models by this analysis are then just another form of complex information processing, “likely to reshape the ways in which human beings construct shared knowledge and act upon it, with their own particular advantages and disadvantages. However, they act on different kinds of knowledge than markets and hierarchies”. As an Economist article Farrell co-wrote with Cosma Shalizi says:

We now have a technology that does for written and pictured culture what large-scale markets do for the economy, what large-scale bureaucracy does for society, and perhaps even comparable with what print once did for language. What happens next?

Some suggestions follow and I strongly recommend you read the whole thing. However, if we return to what I and others were saying in 2016 and 2017, it may be that we were asking the wrong question. Perhaps the big changes of behaviour required of us to operate as economic beings have already happened (the start of the “slow singularity” of the industrial revolution), and the removal of alternatives, which required us to spend increasing proportions of our time within and interacting with bureaucracies and other large organisations, was the logical appendage to that process. These processes are merely becoming more advanced rather than changing fundamentally in form.

And the third part, i.e. language? What started with the emergence of Late Modern English in the 1800s looks like it is now being accelerated via a new form of complex information processing applied to written, pictured (and I would say also heard) culture.

So the future then becomes something not driven by technology, but by our decisions about which processes we want to allow or even encourage and which we don’t, whether those are market processes, organisational processes or large language processes. We don’t have to have robo actuaries or even robo actuarial analysts, but we do have to make some decisions.

And students entering this arena need to prepare themselves to be participants in those decisions rather than just victims of them. A subject I will be returning to.

Title page vignette of Hard Times by Charles Dickens. Thomas Gradgrind Apprehends His Children Louisa and Tom at the Circus, 1870

It was Fredric Jameson (according to Owen Hatherley in the New Statesman) who first said:

“It seems to be easier for us today to imagine the thoroughgoing deterioration of the earth and of nature than the breakdown of late capitalism”.

I was reminded of this by my reading this week.

It all started when I began watching Shifty, Adam Curtis’ latest set of films on iPlayer, which aims to convey a sense of shifting power structures and where they might lead. Alongside the startling revelation that The Land of Make Believe by Bucks Fizz was written as an anti-Thatcher protest song, there was a short clip of Eric Hobsbawm talking about all of the words which needed to be invented in the late 18th and early 19th centuries to allow people to discuss the rise of capitalism and its implications. So I picked up a copy of his The Age of Revolution 1789-1848 to look into this a little further.

The first chapter of Hobsbawm’s introduction from 1962, the year of my birth, expanded on the list:

Words are witnesses which often speak louder than documents. Let us consider a few English words which were invented, or gained their modern meanings, substantially in the period of sixty years with which this volume deals. They are such words as ‘industry’, ‘industrialist’, ‘factory’, ‘middle class’, ‘working class’, ‘capitalism’ and ‘socialism’. They include ‘aristocracy’ as well as ‘railway’, ‘liberal’ and ‘conservative’ as political terms, ‘nationality’, ‘scientist’ and ‘engineer’, ‘proletariat’ and (economic) ‘crisis’. ‘Utilitarian’ and ‘statistics’, ‘sociology’ and several other names of modern sciences, ‘journalism’ and ‘ideology’, are all coinages or adaptations of this period. So is ‘strike’ and ‘pauperism’.

What is striking about these words is how they still frame most of our economic and political discussions. The term “middle class” originated in 1812. No one referred to an “industrial revolution” until English and French socialists did in the 1820s, despite what it described having been in progress since at least the 1780s.

Today the founder of the World Economic Forum has coined the phrase “Fourth Industrial Revolution”, or 4IR, or Industry 4.0 for those who prefer something snappier. Its blurb is positively messianic:

The Fourth Industrial Revolution represents a fundamental change in the way we live, work and relate to one another. It is a new chapter in human development, enabled by extraordinary technology advances commensurate with those of the first, second and third industrial revolutions. These advances are merging the physical, digital and biological worlds in ways that create both huge promise and potential peril. The speed, breadth and depth of this revolution is forcing us to rethink how countries develop, how organisations create value and even what it means to be human. The Fourth Industrial Revolution is about more than just technology-driven change; it is an opportunity to help everyone, including leaders, policy-makers and people from all income groups and nations, to harness converging technologies in order to create an inclusive, human-centred future. The real opportunity is to look beyond technology, and find ways to give the greatest number of people the ability to positively impact their families, organisations and communities.

Note that, despite the slight concession in the last couple of sentences that an industrial revolution is about more than technology-driven change, they are clear that the technology is the main thing. It is also confused: is the future they see one in which “technology advances merge the physical, digital and biological worlds” to such an extent that we have “to rethink” what it “means to be human”? Or are we creating an “inclusive, human-centred future”?

Hobsbawm describes why utilitarianism (“the greatest happiness of the greatest number”) never really took off amongst the newly created middle class, who rejected Hobbes in favour of Locke because “he at least put private property beyond the range of interference and attack as the most basic of ‘natural rights’”, whereas Hobbes would have seen it as just another form of utility. This then led to the natural order of property ownership being woven into the reassuring (for property owners) political economy of Adam Smith and the natural social order arising from “sovereign individuals of a certain psychological constitution pursuing their self-interest in competition with one another”. This was of course the underpinning theory of capitalism.

Hobsbawm then describes the society of Britain in the 1840s in the following terms:

A pietistic protestantism, rigid, self-righteous, unintellectual, obsessed with puritan morality to the point where hypocrisy was its automatic companion, dominated this desolate epoch.

In 1851 access to the professions in Britain was extremely limited, requiring long years of education which one had to be able to support oneself through, and opportunities to do so were rare. There were 16,000 lawyers (not counting judges) but only 1,700 law students. There were 17,000 physicians and surgeons and 3,500 medical students and assistants. The UK population in 1851 was around 27 million. Compare these numbers with the relatively tiny actuarial profession today, which has around 19,000 members in the UK.
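As a rough back-of-the-envelope check (the 1851 figures are those quoted above; the roughly 68 million current UK population is my own round-number assumption, not something from the post), the per-capita comparison works out something like this:

```python
# Back-of-the-envelope per-capita comparison of the 1851 professions with
# today's UK actuarial profession. The 1851 figures are those quoted above;
# the ~68 million current UK population is an assumed round number.
POP_1851 = 27_000_000
POP_TODAY = 68_000_000  # assumption, for illustration only

professions = {
    "lawyers (1851)": (16_000, POP_1851),
    "physicians & surgeons (1851)": (17_000, POP_1851),
    "actuaries (today)": (19_000, POP_TODAY),
}

for name, (members, population) in professions.items():
    per_10k = 10_000 * members / population
    print(f"{name:30s} {members:>7,d} members  ~{per_10k:.1f} per 10,000 people")
```

On those numbers the actuarial profession today is, per head of population, still smaller than either the lawyers or the physicians and surgeons of 1851.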

The only real opening to the professions for many was therefore teaching. In Britain “76,000 men and women in 1851 described themselves as schoolmasters/mistresses or general teachers, not to mention the 20,000 or so governesses, the well-known last resource of penniless educated girls unable or unwilling to earn their living in less respectable ways”.

Admittedly most professions were only just establishing themselves in the 1840s. My own, despite actuarial activity getting off the ground in earnest with Edmund Halley’s demonstration of how the terms of the English Government’s life annuities issue of 1692 were more generous than it realised, did not form the Institute of Actuaries (now part of the Institute and Faculty of Actuaries) until 1848. The Pharmaceutical Society of Great Britain (now the Royal Pharmaceutical Society) was formed in 1841. The Royal College of Veterinary Surgeons was established by royal charter in 1844. The Royal Institute of British Architects (RIBA) was founded in 1834. The Society of Telegraph Engineers, later the Institution of Electrical Engineers (now part of the Institution of Engineering and Technology), was formed in 1871. The Edinburgh Society of Accountants and the Glasgow Institute of Accountants and Actuaries were granted royal charters in the mid-1850s, before England’s various accounting institutes merged into the Institute of Chartered Accountants in England and Wales in 1880.

However “for every man who moved up into the business classes, a greater number necessarily moved down. In the second place economic independence required technical qualifications, attitudes of mind, or financial resources (however modest) which were simply not in the possession of most men and women.” As Hobsbawm goes on to say, it was a system which:

…trod the unvirtuous, the weak, the sinful (i.e. those who neither made money nor controlled their emotional or financial expenditures) into the mud where they so plainly belonged, deserving at best only of their betters’ charity. There was some capitalist economic sense in this. Small entrepreneurs had to plough back much of their profits into the business if they were to become big entrepreneurs. The masses of new proletarians had to be broken into the industrial rhythm of labour by the most draconic labour discipline, or left to rot if they would not accept it. And yet even today the heart contracts at the sight of the landscape constructed by that generation.

This was the landscape upon which the professions, alongside much else of our modern world, were constructed. The industrial revolution is often presented in a way that suggests that technical innovations were its main driver, but Hobsbawm shows us that this was not so. As he says:

Fortunately few intellectual refinements were necessary to make the Industrial Revolution. Its technical inventions were exceedingly modest, and in no way beyond the scope of intelligent artisans experimenting in their workshops, or of the constructive capacities of carpenters, millwrights and locksmiths: the flying shuttle, the spinning jenny, the mule. Even its scientifically most sophisticated machine, James Watt’s rotary steam-engine (1784), required no more physics than had been available for the best part of a century—the proper theory of steam engines was only developed ex post facto by the Frenchman Carnot in the 1820s—and could build on several generations of practical employment for steam engines, mostly in mines.

What it did require though was the obliteration of alternatives for the vast majority of people to “the industrial rhythm of labour” and a radical reinvention of the language.

These are not easy things to accomplish, which is why we cannot easily imagine the breakdown of late capitalism. However, if we focus on AI etc. as the drivers of the next industrial revolution, we will probably be missing where the action really is.

I have just been reading Adrian Tchaikovsky’s Service Model. I am sure I will think about it often for years to come.

Imagine a world where “Everything was piles. Piles of bricks and shattered lumps of concrete and twisted rods of rebar. Enough fine-ground fragments of glass to make a whole razory beach. Shards of fragmented plastic like tiny blunted knives. A pall of ashen dust. And, to this very throne of entropy, someone had brought more junk.”

This is Earth outside a few remaining enclaves. And all served by robots, millions of robots.

Robots: like our protagonist (although he would firmly resist such a designation) Uncharles, who has been programmed to be a valet, or gentleman’s gentlerobot; or librarians tasked with preserving as much data from destruction or unauthorised editing as possible; or robots preventing truancy from the Conservation Farm Project where some of the few remaining humans are conscripted to reenact human life before robots; or the fix-it robots; or the warrior robots prosecuting endless wars.

Uncharles, after slitting the throat of his human master for no reason that he can discern, travels this landscape with his hard-to-define and impossible-to-shut-up companion The Wonk, who is very good at getting into places but often not so good at extracting herself. Until they finally arrive in God’s waiting room and take a number.

Along the way The Wonk attempts to get Uncharles to accept that he has been infected with a Protagonist Virus, which has given Uncharles free will. And Uncharles finds his prognosis routines increasingly unhelpful to him as he struggles to square the world he is perambulating with the internal model of it he carries inside him.

The questions that bounce back between our two unauthorised heroes are many and various, but revolve around:

  1. Is there meaning beyond completing your task list or fulfilling the function for which you were programmed?
  2. What is the purpose of a gentleman’s gentlerobot when there are no gentlemen left?
  3. Is the appearance of emotion in some of Uncharles’ actions and communications really just an increasingly desperate attempt to reduce inefficient levels of processing time? Or is the Protagonist Virus an actual thing?

Ultimately the question is: what is it all for? And when they finally arrive in front of God, the question is thrown back at us, the pile of dead humans rotting across the landscape of all our trash.

This got me thinking about a few things in a different way. One of these was AI.

Suppose AI is half as useful as OpenAI and others are telling us it will be. Suppose that we can do all of these tasks in less than half the time. How is all of that extra time going to be distributed? In 1930 Keynes speculated that his grandchildren would only need to work a 15-hour week. And all of the productivity improvements he assumed in doing so have happened. Yet still full-time work remains the aspiration.
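To make the arithmetic concrete (the 2% growth rate and the 48-hour 1930 working week below are round-number assumptions of mine, not figures from Keynes or from anything above): if output per hour grows at roughly 2% a year over Keynes’s 100-year horizon, someone content with a 1930 standard of living would need only a handful of hours a week.

```python
# Illustrative arithmetic only: how far could the working week fall if all
# productivity gains were taken as leisure? The 2% annual growth rate and
# the 48-hour 1930 working week are assumed round numbers, not figures
# from Keynes's essay or from the text above.
ANNUAL_PRODUCTIVITY_GROWTH = 0.02   # assumed
HOURS_PER_WEEK_1930 = 48            # assumed
YEARS = 100                         # Keynes's horizon: 1930 to 2030

growth_factor = (1 + ANNUAL_PRODUCTIVITY_GROWTH) ** YEARS
hours_needed = HOURS_PER_WEEK_1930 / growth_factor

print(f"Output per hour multiplied by roughly {growth_factor:.1f}x")
print(f"Hours per week needed for a 1930 standard of living: {hours_needed:.1f}")
```

On those assumptions output per hour rises about sevenfold and the week needed to preserve a 1930 standard of living falls to around seven hours, so Keynes’s 15-hour prediction, which also allowed for living standards themselves to rise, does not look outlandish on the productivity side; the question is where the time, or the money, actually went.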

There certainly seems to have been a change of attitude from around 1980 onwards, with those who could choose choosing to work longer, for various reasons which economists are still arguing about, so that the hours lost were from those who couldn’t choose, as The Resolution Foundation have pointed out. Unfortunately neither their pay nor their quality of work has increased sufficiently for those hours to meet their needs.

So, rather than asking where the hours have gone, it probably makes more sense to ask where the money has gone. And I think we all know the answer to that one.

When Uncharles and The Wonk finally get in to see God, God gives the example of a seat designed to stop vagrants sleeping on it as the indication it needed of the kind of society humans wanted. One where the rich wanted not to have to see or think about the poor. The world that resulted was one where all human contact was replaced with eternally indefatigable and keen-to-serve robots.

Look at us clever humans, constantly dreaming of ways to increase our efficiency, remove inefficient human interaction, or indeed any interaction which cannot be predicted in advance. Uncharles’ seemingly emotional responses, when he rises above the sea of task-queue-clutching robots all around him, are to what he sees as inefficiency. But what should be the goal? Increasing GDP can’t be it; that is just another means. We are currently working extremely hard, and using a huge proportion of news and political affairs airtime and focus, on turning the English Channel into the seaborne equivalent of the seat where vagrants and/or migrants cannot rest.

So what should be the goal? Because the reason Service Model will stay with me for some time to come is that it shows us what happens if we don’t have one. The means take over. It seems appropriate to leave the last word to a robot.

“Justice is a human-made thing that means what humans wish it to mean and does not exist at all if humans do not make it,” Uncharles says at one point. “I suggest that ‘kind and ordered’ is a better goal.”