I have just started reading an excellent new book by Cathy O’Neil, Weapons of Math Destruction, in which she sets out the case against devolving important decisions to mathematical models that lack adequate feedback loops. The opening example she gives – a teacher fired from her job in Washington, DC because the school feeding students into hers had been manipulating test scores – makes her general point very well. I am looking forward to the chapter on insurance.
However, the thought this raised in my mind is that, increasingly, we do not even need algorithms and mathematical models to make us behave in a robotic fashion – we constantly follow rules set by others rather than using our own judgement. Indeed, regulators push us more and more in this direction. They are nearly all under-staffed and over-tasked and need shortcuts to manage their workloads. The most obvious shortcut is to focus on the very big and the very different. The very big are normally far better staffed than the regulators and difficult to win arguments with. Regulators are therefore left with the very different, and so they make life for the very different very difficult. Before long, the very different no longer exist, and the systemic risk in your population of banks, schools, hospitals or whatever it is has increased.
What O’Neil rightly identifies as the main danger of widely used models is their lack of a feedback loop. If nothing tells you when your model has stopped reflecting the world it is modelling, it will not be long before it is doing a great deal more harm than good. And when a regulator uses a model which does not rely on inputs from the system it is regulating, the model becomes the world it is regulating, often with bizarre consequences. One of the main reasons the 2008 crash was so dramatic is that so little was done to prevent it even as the warning signs grew: in the model used to regulate banks, those warning signs did not exist. Unfortunately and worryingly for us, the lesson learned, in the main, was not that model-led regulation was a bad idea, but that the models used just needed to be more complicated.
Every call to simplify what is being regulated (by breaking banks up into smaller units, for example, or by simplifying the regulations themselves rather than repeating the regulatory arms race of measure and unintended loophole we seem doomed to) has been resisted, resulting in a regulatory framework which becomes more bafflingly complex with every passing year. This is a process recognised in Government, and there have been occasional attempts to reverse the tide – to date, with little effect.
And the regulators themselves? They are cash-strapped and at the mercy of inconsistent Government policy. So we have CQC inspectors (of NHS Trusts) and Ofsted inspectors (of schools and colleges) making short visits and producing reports based on anything anyone has said to them during those visits, allowing no factual corrections and subject to no cross-examination, just to get through their caseloads without causing headaches for their political masters. The result is often inconsistent scrutiny or, in some cases, abrupt reversals of conclusions between successive inspections. At the same time, it appears clear that you can evade your financial responsibilities almost completely if you are rich and unscrupulous enough. Our regulatory systems have proved unreliable in too many areas and have, in my view, lost credibility as a result.
So where do actuaries fit into all this? They are one of the professions centrally involved in building, updating and interpreting models across a wide range of financial firms: everything from the amount a firm pays in pension contributions, to the reserves held by an insurer or a bank, to the detailed agreements in a corporate restructure, often involving staggering sums of money. This work cannot be carried out effectively by playing to an unreliable regulator.
Upon accepting the Army-Navy “E” Award on November 16, 1945, Robert Oppenheimer, who had directed the Los Alamos laboratory of the US Government’s Manhattan Project, which developed the world’s first nuclear weapons, proclaimed: “If atomic bombs are to be added as new weapons to the arsenals of a warring world, or to the arsenals of the nations preparing for war, then the time will come when mankind will curse the names of Los Alamos and Hiroshima.” He realised that responsibility for the use to which your work is put can never be wholly given away to someone else.
If mathematical models are to be the dominant regulatory tool of a financial world, and of the consultancies and financial firms competing in that world, then the time will come when mankind will curse the names of the highly paid professionals who followed inappropriate rules rather than exercising their own expert judgement when it mattered.