
A couple of weeks ago I wanted to find an article I had written about heat pumps to check something. So I Googled weknow0 and heat pump. This did give me the article I was after, from December 2022, but it also gave me an “AI overview” that I hadn’t requested. The above is what it told me.
Now this is inaccurate on a number of counts. Firstly, I have published 226 articles over the more than 12 years I have been writing on weknow0.co.uk, and I have mentioned heat pumps in only two of them. Those articles did focus on the points made in 3 of the 4 bullet points above, and in one of them I also set out how the market at the time (December 2022) was stacked against anyone acquiring a heat pump, a state of affairs which has thankfully improved considerably since. However, to claim that my blog “provides a consumer-focused perspective in the practicalities and challenges of domestic heat pump adoption in the UK” is clearly hilarious.
In fact, anyone seeing that would assume I talked about little other than heat pumps, so I decided to do a search on something else that I talk about infrequently and see what I got (I searched “weknow0 science fiction”):

This seems a considerably better summary of recent activity on the blog, but the blog it describes is unrecognisable as the one summarised in response to the previous search.
Right at the end, it suggests a reason for the title of the blog which wouldn’t be an unreasonable guess from a regular reader. But a guess it still is, and it does not appear to have processed the significant number of blog posts with variants of “we know zero” in the title to fine-tune its take.
So someone using the AI overview as a research tool would get a completely different view of what the blog was about depending upon which other word they used alongside weknow0. Perhaps that doesn’t matter too much to anyone other than me in this case, but it points to a broader issue: the AI overview is not summarising the website it is suggesting it is summarising.
Of course many of you will now be shouting at me that I need to give the system more focused prompts. There is now a whole area of expertise, lectured in and written about at considerable length, called “prompt engineering”. There are senior professionals who for years have rarely given their juniors the time of day, offering only the tersest responses to completely reasonable queries about the barely intelligible instructions they have handed out for a piece of work, who are now suddenly prepared to spend hours and hours on prompt engineering so that the Metal Mickey in their phone or laptop can give them responses closer to what they were actually looking for.
At this point, perhaps we should hear from Sundar Pichai, the Google CEO:
As part of Faisal Islam’s slightly gushing interview with Pichai, we learn that the AI overview on Google is “prone to errors” and needs to be used alongside such things as Google search. “Use them for what they are good at but don’t blindly trust them”, he says of tools he admits Google is currently investing $90 billion a year in. This is of course a problem, as one of the reasons people are reluctantly resorting to the AI overview is that the basic Google search has become so enshittified.
And that kind of echoes what Cory Doctorow has said about Google. Google needs to maintain a narrative about growth. You will have picked this up if you watched the Pichai interview above, from the breathless description of “one of the most powerful men in the world” as “perhaps being one of the easier things for AI to replicate one day” to:
You don’t want to constrain an economy based on energy. That will have consequences.
to the even more breathless suggestion that quantum computing is only 5 years away from being where generative AI is now.
The reason for all the growth talk, according to Doctorow, is that Google needs to keep growing in order to maintain a price-earnings ratio of 20 to 1, rather than the more typical 4 to 1 of a mature business; in other words, investors are paying 20 times annual earnings for the shares, a price that only makes sense if those earnings are expected to keep growing. So it’s all about the share price. As Doctorow says:
Which is why Google is so desperately sweaty to maintain the narrative about its growth. That’s a difficult narrative to maintain, though. Google has 90% Search market-share, and nothing short of raising a billion humans to maturity and training them to be Google users (AKA “Google Classroom”) will produce any growth in its Search market-share. Google is so desperate to juice its search revenue that it actually made search worse on purpose so that you would have to run multiple searches (and see multiple rounds of ads) before you got the information you were seeking.
Investors have metabolized the story that AI will be a gigantic growth area, and so all the tech giants are in a battle to prove to investors that they will dominate AI as they dominated their own niches. You aren’t the target for AI, investors are: if they can be convinced that Google’s 90% Search market share will soon be joined by a 90% AI market share, they will continue to treat this decidedly tired and run-down company like a prize racehorse at the starting-gate.
This is why you are so often tricked into using AI, by accidentally grazing a part of your screen with a fingertip, summoning up a pestersome chatbot that requires six taps and ten seconds to banish: companies like Google have made their product teams’ bonuses contingent on getting normies to “use” AI and “use” is defined as “interact with AI for at least ten seconds.” Goodhart’s Law (“any metric becomes a target”) has turned every product you use into a trap for the unwary.
So here we are. AI isn’t meant for most of you; its results are “prone to errors” and need to be used alongside other corroborating material or “human validation”. It needs you to take a course in prompt engineering even if you never took one to help you manage any of your human staff. And it is primarily designed to persuade investors to keep the share price up at the levels the Board of Alphabet Inc have become accustomed to.