
I would like to start this week’s post with a quote from Carlo Iacono, from a Substack piece he did a couple of weeks ago called The Questions Nobody Is Funding:
What is a human being for? What do we owe the future? What remains worth the difficulty of learning?
These are not questions you will find in the OECD’s AI Literacy Framework. They are not addressed in the World Economic Forum’s Education 4.0 agenda. They do not appear in the competency matrices cascading through national education systems. Instead, we get learning objectives and assessment criteria. Employability outcomes and digital capabilities. The language of preparation, as if the future were already decided and our job were simply to ready people for it.
I think this articulates well the central challenge of AI for education. Whether you think this is the beginning of a future where augmented humans move into a different type of existence from any we have known before; or you believe very little will be left behind in the rubble when the inevitable burst of the AI bubble comes, to be at least temporarily forgotten in the most devastating stock market crash and depression for a century; or you hold both these beliefs at the same time; or you are somewhere in between, it is difficult to see how the orderly world of competency matrices, learning objectives, assessment criteria, employability outcomes and digital capabilities can easily survive the period of technological, cultural, economic and political disruption which we appear to have entered. Looking in the rear-view mirror and trying to extrapolate what you see into the future is not going to work for us any more.
Whether you think, like Cory Doctorow, in his recent speech at the University of Washington called The Reverse Centaur’s Guide to Criticizing AI, that:
AI is the asbestos in the walls of our technological society, stuffed there with wild abandon by a finance sector and tech monopolists run amok. We will be excavating it for a generation or more.
Or you think, as Henry Farrell has suggested in another article called Large Language Models As The Tales That Are Sung:
Technologies such as LLMs are neither going to transcend humanity as the holdouts on one side still hope, nor disappear, as other holdouts might like. We’re going to have to figure out ways to talk about them better and more clearly.
We are certainly going to have to figure out ways to talk about LLMs and other forms of AI more clearly, so that the decisions we need to make about how to accommodate them into society can be made with the maximum level of participation and consensus. And this seems to be the key for me with respect to education too. We do need people graduating from our education system understanding clearly what LLMs can and cannot do, which is a tricky path to navigate at the moment as a lot of money is being concentrated on persuading you that they can do pretty much anything. One example here is a setup which has created a writers’ room of four LLMs, asked to critique each other by pushing the output from one into the prompts for the others, reminiscent of The Human Centipede. Which immediately reminded me of this take from later in that Cory Doctorow speech:
And I’ll never forget when one writer turned to me and said, “You know, you prompt an LLM exactly the same way an exec gives shitty notes to a writers’ room. You know: ‘Make me ET, except it’s about a dog, and put a love interest in there, and a car chase in the second act.’ The difference is, you say that to a writers’ room and they all make fun of you and call you a fucking idiot suit. But you say it to an LLM and it will cheerfully shit out a terrible script that conforms exactly to that spec (you know, Air Bud).”
So, back to Carlo’s little questions:
What is a human being for?
A lofty question certainly, and not one I am going to tackle in a blog post. But perhaps I can say a bit about what a human being is not for. This is the key to Henry Farrell’s piece, which is his take on the humanist critique of AI. We are presumably primarily designing the future for humans. All humans. Not just Tech Bros. And the design needs to bear that in mind. For example, a human being is not, in my opinion, for this (from the Cory Doctorow link):
Like an Amazon delivery driver, who sits in a cabin surrounded by AI cameras, that monitor the driver’s eyes and take points off if the driver looks in a proscribed direction, and monitors the driver’s mouth because singing isn’t allowed on the job, and rats the driver out to the boss if they don’t make quota.
The driver is in that van because the van can’t drive itself and can’t get a parcel from the curb to your porch. The driver is a peripheral for a van, and the van drives the driver, at superhuman speed, demanding superhuman endurance. But the driver is human, so the van doesn’t just use the driver. The van uses the driver up.
The first task of the education establishment, I think, is to attempt to protect the graduate from becoming the reverse-centaur described above, whether a delivery driver, a coder (where additionally the human-in-the-loop becomes the accountability sink for everything the AI gets wrong) or a radiologist. This will often be resisted by the employers whose needs you are currently very sensitive to as educators (many of whom are senior enough to get to use the new technologies as a centaur rather than be used by them as a reverse-centaur, tend to struggle to put themselves in anyone else’s shoes and, frankly, can’t see what all the fuss is about) but, remember, the cosy world of employability outcomes is over. The employers are not sticking to the implicit agreement to employ your graduates if you delivered the outcomes, and therefore neither should you. Your responsibility in education is to the students, not their potential future employers, now that their interests no longer appear to be aligned.
What do we owe the future?
This depends on what you mean by “the future” of course. If it is some technological dystopia of diminished opportunities for most (even for making friends as seemingly envisioned by some of the top Tech Bros), then nothing at all. But if it is the future which is going to support your children and their children, you obviously owe it a lot. But what do you owe it? What is owed is often converted into money by the political right, and used to justify not running up public debt in the present so as not to “impoverish” future generations. What that approach generally achieves is to impoverish both the current and future generations.
But if you think of owing resources, institutions and infrastructure to the next generation, then that is a responsibility we should take seriously. And part of that is to produce an educated generation equipped with tools, systems, institutions and infrastructure. The education institutions must take steps to make sure they survive in a relevant way, embedded in systems which support individuals and proselytising the value of education for all. They must ensure that their graduates understand and have facility with the essential tools they will need, and have developed the ability to learn new skills as they need them, and to recognise when that is. This is about developing individuals who leave no longer dependent on the institutions, able to work things out for themselves rather than requiring never-ending education inside an institution.
What remains worth the difficulty of learning?
The skills already mentioned will be the core ones for everyone, and these will need to be hammered out in terms everyone can understand. But in the world of post-scarcity education, which is here but which we have not yet fully embraced, the rest will be up to us. A large part of the education of the future will need to be about equipping us all to understand what we now have access to and when and how to access it. We will all have different things we are interested in, or end up involved with and needing to be educated about. It will be up to each of us to decide which things are worth the difficulty of learning, but to make those decisions we will need education that can support the development of judgement.
For education institutions, the question will be what is not worth the difficulty of learning? Credentialising based on now relatively meaningless assessment methods will not cut it. This is where the confrontation with employers and politicians is likely to come. Essential skills and their related knowledge will be better developed and assessed via more open-ended project work and online assessment of it to check understanding. These will need to become the norm, with written examinations becoming less and less prevalent. Not because of fear of cheating and plagiarism, but because an outcome which can be replicated that easily by AI is not worth assessing in the first place.
As William Gibson apparently said at some point in 1992:
“The future has arrived — it’s just not evenly distributed yet.”
The future of education will be the distribution problem.