
To be read to the soundtrack of Bruce Springsteen singing Streets of Minneapolis.
My attention was drawn this week to an article called The Adolescence of Technology by Dario Amodei, co-founder of Anthropic, the creator of the large language model Claude. (Anthropic is a spin-off from OpenAI, which was co-founded by Elon Musk and heavily invested in by Microsoft, so it is very much part of the Magnificent 7 architecture.) It is hard to overemphasise how much I disagree with everything Dario has written here, but the article is also useful: it is long, it covers a lot of ground, and it allows me to define my views in opposition to it.
The irritations start pretty much straight away. So Dario quotes from a science fiction classic (Carl Sagan’s Contact), but then follows this up under the heading of “Avoid doomerism” with this:
…but it’s my impression that during the peak of worries about AI risk in 2023–2024, some of the least sensible voices rose to the top, often through sensationalistic social media accounts. These voices used off-putting language reminiscent of religion or science fiction, and called for extreme actions without having the evidence that would justify them.
Notice the word “sensible” doing the heavy lifting there. Only science fiction endorsed by Dario will be considered. Dario wants us to consider the risks of AI in “a careful and well-considered manner”, which sounds reasonable, but then his third and final bullet under this heading (after “avoid doomerism” and “acknowledge uncertainty”) goes as follows:
Intervene as surgically as possible. Addressing the risks of AI will require a mix of voluntary actions taken by companies (and private third-party actors) and actions taken by governments that bind everyone. The voluntary actions—both taking them and encouraging other companies to follow suit—are a no-brainer for me. I firmly believe that government actions will also be required to some extent, but these interventions are different in character because they can potentially destroy economic value or coerce unwilling actors who are skeptical of these risks (and there is some chance they are right!).
So, reflexively anti-regulation of his own industry, of course. And voluntary actions by corporations, an approach to solving problems which has repeatedly been demonstrated not to work, are apparently “a no-brainer”. It is also automatically assumed that government actions will destroy value. Only market solutions will be endorsed by Dario, pretty much until they have messed up so badly that you are forced to bring governments in:
To be clear, I think there’s a decent chance we eventually reach a point where much more significant action is warranted, but that will depend on stronger evidence of imminent, concrete danger than we have today, as well as enough specificity about the danger to formulate rules that have a chance of addressing it. The most constructive thing we can do today is advocate for limited rules while we learn whether or not there is evidence to support stronger ones.
There is then the expected sales pitch about what he has seen within Anthropic of the relentless “increase in AI’s cognitive capabilities”. And then the man who warned about sensationalist science fiction is off:
I think the best way to get a handle on the risks of AI is to ask the following question: suppose a literal “country of geniuses” were to materialize somewhere in the world in ~2027. Imagine, say, 50 million people, all of whom are much more capable than any Nobel Prize winner, statesman, or technologist.
And the rest of the article is then off solving this imaginary problem in all its facets, rather than the wealth and power concentration problem that we actually have. The only legislation he seems to be in favour of is something called “transparency legislation”, legislation which, of course, Anthropic would help to write.
However, after suggesting everything from isolating China and using “AI to empower democracies to resist autocracies” to private philanthropy as the solutions to his imagined problems, Dario finally and reluctantly concludes that government intervention might after all be necessary:
…ultimately a macroeconomic problem this large will require government intervention. The natural policy response to an enormous economic pie coupled with high inequality (due to a lack of jobs, or poorly paid jobs, for many) is progressive taxation. The tax could be general or could be targeted against AI companies in particular. Obviously tax design is complicated, and there are many ways for it to go wrong. I don’t support poorly designed tax policies. I think the extreme levels of inequality predicted in this essay justify a more robust tax policy on basic moral grounds, but I can also make a pragmatic argument to the world’s billionaires that it’s in their interest to support a good version of it: if they don’t support a good version, they’ll inevitably get a bad version designed by a mob.
That, by the way, is what Dario thinks of democracy: “a bad version designed by a mob” rather than the “good version” that he and his fellow billionaires could come up with in their own self-interest. The mask has really slipped by this point. And the following section, on “Economic concentration of power”, just demonstrates that he has no effective answers at all on this that he deems acceptable. It’s just an inevitability for him.
This is what Luke Kemp’s excellent Goliath’s Curse refers to as a “Silicon Goliath”. Goliaths are dominance hierarchies which spread by dominating the areas around them. They need three conditions (which Luke calls “Goliath fuel”): lootable resources (ie resources which can easily be stolen from someone else), caged land (ie land which is difficult to escape from) and monopolizable weapons (ie weapons requiring processes which can be developed to give one society an edge over another). We are all Goliath-dwellers in “The West” now, looting resources from other countries in unequal exchanges which impoverish the Global South, with weapons (eg nuclear weapons) available only to an elite few countries, and operating within the cages of heavily-policed national boundaries. The Silicon Goliath which is developing will have data as its lootable resource, mass surveillance systems providing its cages, and monopolizable weapons such as killer drones. Against the resultant killbot hellscapes, the defences which people like Dario Amodei laughably imagine they have, things like Claude’s Constitution, are almost pitiful in their inadequacy.
Nate Hagens takes Dario’s claims for AI’s cognitive capabilities much more seriously than I do, and then considers the risks in a less adolescent way here. As he says:
And here’s what his essay has almost nothing about. Energy, water, materials, or ecological limits.
And nowhere does Dario talk about the 99% of people who are just spectators in his world, other than to describe them as “the mob”. This is quite a blind spot, as Luke Kemp points out in his exhaustive study of the collapses of “Goliaths” over the last 5,000 years. “The extreme levels of inequality” predicted by Amodei in his essay are not just things we have to put up with; they are the reasons why the world he predicts is likely to be hugely unstable. Not created by AI, but accelerated by it. Kemp describes it as “diminishing returns on extraction”:
We see a pattern re-emerging across case studies. Societies grow more fragile over time and more prone to collapse. Threats that they had always faced such as invaders, disease and drought seem to take a heavier toll.
As societies grew bigger:
They still faced the underlying (and ongoing) problem of rising inequality creating societies where power was more concentrated and institutions more extractive.
And eventually:
The result is more extractive institutions creating growing instability, internal conflict, a drain of resources away from government, state capture by private elites, and worse decision-making. Society – especially the state – becomes more fragile. Private elites tend to take a larger share of extractive benefits. The state, and many of the power structures it helps prop up, then usually falls apart once a shock hits: for Rome it was climate change, disease, and rebelling Germanic mercenaries; for China it was often floods, droughts, disease and horseback raiders; for the west African kingdoms it was invaders and a loss of trade; for the Maya it was drought and a loss of trade; and for the Bronze Age it was drought, a disruption of trade and an earthquake storm.
The only real answer to combatting existential risks in the hands of adolescents like the Tech Bros is more democracy: over the control of decision-making, the control of resources, the control of the threat of violence and the control of information. We are a long way from achieving these within our own particular Goliath at the moment, and indeed there is no sign at all that our elites are interested in achieving them. The Magnificent 7 are propping up the US stock market. The promise of perpetual economic growth is the progress myth of our time, and leaders who do not provide it will lose the “Mandate of Heaven” in just the same way as Chinese rulers did when they were unable to prevent floods and droughts. Adam Tooze sees signs of the inner demons of our elites starting to detach them from reality in the latest disclosures from the Epstein files:
Are we, like [Larry] Summers, fantasizing about stabilizing our desires and needs in an inherently dangerous and uncertain world? Are we kidding ourselves?
But, without those controls in place, we would need a lot more than Dario’s Anthropic playing nicely to allow this particular adolescent to grow up. And this is where I am forced to take Nate Hagens’ assessment more seriously. Because if our rulers’ Mandates of Heaven are dependent on eternal economic growth on their watch and they, rightly, think that this is not possible in our current non-AI-enhanced world but, wrongly, think it is possible in a future AI-enhanced world, then that is the way they are going to demand we go. And, if the Larry Summers fantasists really are kidding themselves, it may be very hard to talk them out of it.