Land mines and bank bailouts

A man is sentenced to 7 years in prison for selling bomb detectors which had no hope of detecting bombs. The contrast with the fate of those who, over the last 20 years, have continued to sell complex mathematical models to both large financial institutions and their regulators, models which have no hope of protecting them from massive losses at the precise point when they are needed, is illuminating.

The devices made by Gary Bolton were simply boxes with handles and antennae. The “black boxes” used by banks and insurers to determine their worst loss in a 1 in 200 probability scenario (the Value at Risk or “VaR” approach) are instead filled with mathematical models primed with rather a lot of assumptions.

The prosecution said Gary Bolton sold his boxes for up to £10,000 each, claiming they could detect explosives. Towers Watson’s RiskAgility (the dominant model in the UK insurance market) by contrast is difficult to price, as it is “bespoke” for each client. However, according to Insurance ERM magazine in October 2011, for Igloo, their other financial modelling platform, “software solutions range from £50,000 to £500,000 but there is no upper limit as you can keep adding to your solution”.

Gary Bolton’s prosecutors claimed that “soldiers, police officers, customs officers and many others put their trust in a device which worked no better than random chance”. Much the same could have been said of bankers in 2008, who had put their trust in a device which worked worse the further the financial variables being modelled strayed from the normal distribution.

As he passed sentence, Judge Richard Hone QC described the equipment as “useless” and “dross” and said Bolton had damaged the reputation of British trade abroad. By contrast, despite a brief consideration of alternatives to the VaR approach by the Basel Committee on Banking Supervision in 2012, it remains firmly in place as the statutory measure of solvency for both banks and insurers.

The court was told Bolton knew the devices – which were also alleged to be able to detect drugs, tobacco, ivory and cash – did not work, but continued to supply them to be sold to overseas businesses. In his Spring 2005 paper Value at Risk: Any Lessons from the Crash of Long-Term Capital Management (LTCM)?, Mete Feridun of Loughborough University set out to analyse the 1998 failure of the LTCM hedge fund from a risk management perspective, aiming to derive implications for the managers of financial institutions and for the regulating authorities. He concluded that LTCM’s failure could be attributed primarily to its VaR system, which failed to estimate the fund’s potential risk exposure correctly. Many other studies agreed.

“You were determined to bolster the illusion that the devices worked and you knew there was a spurious science to produce that end,” Judge Hone said to Bolton. This brings to mind the actions of Philippe Jorion, Professor of Finance at the Graduate School of Management at the University of California at Irvine, who by the winter of 2009 was already proclaiming that “VaR itself was not the culprit, however. Rather it was the way this risk management tool was employed.” He also helpfully pointed out that LTCM had been very profitable in 1995 and 1996. He and others have been muddying the waters ever since.

“They had a random detection rate. They were useless,” concluded Judge Hone. VaR, by contrast, had a protective effect only within what were regarded as “possible” market environments, ie something similar to what had been seen before during relatively calm market conditions. In fact, VaR became less helpful the more people adopted it, as everyone using it ended up with similar trading positions, which they then attempted to exit at the same time. This meant that buyers could not be found when they were needed and the positions of the hapless VaR customers tanked even further.

Gary Bolton’s jurors concluded that, if you sell people a box that tells them they are safe when they are not, it is morally reprehensible. I think I agree with them.

Risky business

I think if I were to ask you what you thought the best way to manage risk was, there would be a significant risk that you would give me a very boring answer. I imagine it would involve complicated mathematical valuation systems, stochastic models and spreadsheets, lots of spreadsheets, risk indicators, traffic light arrangements and risk registers. If you work for an insurance company, particularly on the actuarial side, it would be highly quantified, with calculations of the reserves required to meet “1 in 200 year” risks featuring heavily. Recently, even operational risk has increasingly been approached from a more quantifiable angle, with Big Data collected across many users to pool and estimate risk probabilities.

Now you can argue about these approaches, and particularly about the Value at Risk (VaR) tool, which has brought this 1 in 200 probability over the next year into nearly every risk calculation carried out in the financial sector, and the Gaussian copula, which uses a correlation matrix to take credit for the “fact” that combinations of very bad things happening are vanishingly rare (the “Gaussian” refers to the normal distribution, under which events more than three standard deviations or “sigma” away from the average are vanishingly rare), rather than actually quite likely once the market environment gets bleak enough. The losses at Fortis and AIG in 2008 were over 5 and 16 sigma above their averages respectively.
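To get a feel for what “vanishingly rare” means under that normal assumption, here is a minimal Python sketch. The translation into trading years assumes roughly 250 trading days a year and is purely illustrative; it is not a reconstruction of the Fortis or AIG figures.

```python
# A rough illustration of the tail probabilities a normal distribution
# assigns to large "sigma" moves. Generic arithmetic only, not the actual
# Fortis or AIG loss calculations.
from scipy.stats import norm

TRADING_DAYS_PER_YEAR = 250   # illustrative assumption

for sigma in (3, 5, 16):
    p = norm.sf(sigma)        # probability of a move >= sigma standard deviations
    years = 1 / (p * TRADING_DAYS_PER_YEAR)
    print(f"{sigma} sigma: daily probability {p:.2e}, "
          f"roughly one such day every {years:.2e} years")
```

On those assumptions a 16 sigma day simply should not happen, which is rather the point: when one does, it is the assumption of normality, rather than the institution’s luck, that has failed.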

The news last week that the US Attorney for the Southern District of New York had charged two JP Morgan traders with fraud in connection with the recent $6.2 billion “London whale” trading losses reminded me that VaR as it is currently used was largely cooked up at JP Morgan in the early 90s. VaR is now inescapable in the financial industry, having effectively been baked into both the Basel 2 regulatory structure for banks and Solvency 2 for insurers.

The common approaches to so-called “quantifiable” risk may have their critics, but at least they are being widely discussed (the famous debate from 1997 between Philippe Jorion and Nassim Nicholas Taleb being just one such discussion). However, one of the other big problems with risk management is that we rarely get off the above “boring” topics, and people who don’t get the maths often conclude that risk management is therefore difficult to understand. In my view we should be talking much more about what companies are famous for (because this is also where their vulnerability lies) and the small number of key people they totally rely on (not all of whom they may even be aware of).

If you asked most financial firms what they were famous for, I imagine that having a good reputation as a company that can be trusted with your money would score pretty highly.

A recent survey of the impact of loss of reputation on financial services companies on Wall Street revealed that 44% of them had lost 5% or more of their business in the previous 12 months due to ongoing reputation and customer satisfaction issues. Based on the total sales of these companies, the losses are estimated at hundreds of millions of dollars, with an average loss of 9% of business across all companies surveyed.

And the key people we totally rely on? Well, just looking at the top five rogue traders (before the London Whale), we have:

1. SocGen losing 4.9 billion Euros in 2008 through Jerome Kerviel’s unauthorised trading in European stock index futures from their Paris office; he was subsequently found guilty of breach of trust, forgery and unauthorised use of the bank’s computers.
2. Sumitomo Corp losing $2.6 billion in 1996 when Yasuo Hamanaka made unauthorised trades while controlling 5% of the world’s copper market from Tokyo.
3. UBS losing $2.3 billion in 2011 when Kweku Adoboli was found guilty of abusing his position as an equity trade support analyst in London with unauthorised futures trading.
4. Barings Bank losing $1.3 billion in 1995 when Nick Leeson made unauthorised speculative trades (specifically in Nikkei Index futures) as a derivatives trader in its Singapore office.
5. Resona Holdings (then Daiwa Bank) losing $1.1 billion in 1995 when Toshihide Iguchi was found to have made 30,000 unauthorised trades in US Treasury bonds over a period of 11 years beginning in 1984, in Osaka and New York.

None of these traders will, of course, have done anything for the reputations of their respective organisations either.

These are risks that can’t be managed by just throwing money at them or constructing complicated mathematical models. Managing them effectively requires intimate knowledge of your customers and what is most important in your relationship with them, of who your key people are (not necessarily the most senior: Jerome Kerviel was only a junior trader at his bank) and of what they are up to on a daily basis, ie what has always been understood as good business management.

And that doesn’t involve any boring mathematics at all.

The Antifragility of Restaurants and Terrorism

I have been thinking about the turnover of restaurants in Birmingham recently. There have been a number of new launches in the city in the last year, from Adam’s, with the Michelin-starred Adam Stokes, to Café Opus at Ikon to Le Truc, each replacing a struggling previous venture.

Nassim Nicholas Taleb makes the case, in his book Antifragile, for the antifragility of restaurants. As he says: “Restaurants are fragile, they compete with each other, but the collective of local restaurants is antifragile for that very reason. Had restaurants been individually robust, hence immortal, the overall business would be either stagnant or weak, and would deliver nothing better than cafeteria food – and I mean Soviet-style cafeteria food. Further, it would be marred with systemic shortages, with, once in a while, a complete crisis and government bailout. All that quality, stability, and reliability are owed to the fragility of the restaurant itself.”

I wondered if this argument could be extended to terrorism, in an equally Talebian sense.

But first, three false premises:

1. Terrorist attack frequency follows a power law distribution.

Following on from my previous post, I thought I had found another power law distribution in Nate Silver’s book The Signal and the Noise. He sets out a graph of terrorist attack frequencies by death toll, the source of the data being the Global Terrorism Database for NATO countries from 1979 to 2009. I thought I would check this and downloaded an enormous 45MB Excel file from the National Consortium for the Study of Terrorism and Responses to Terrorism (START). I decided to use the entire database (ie from 1970 to 2011), with the proviso that I would use only attacks leading to at least 5 deaths to keep it manageable (as Nate Silver had done). The START definition of terrorism covers only attacks committed by non-governmental groups, and the database also has a strange way of numbering attacks which, for instance, counts 9-11 as four separate attacks (I adjusted for this). I then plotted the frequencies using a logarithmic scale on each axis; the result is shown below. It is not even straightish, so probably not quite a power law distribution: it has a definite downward curve, and something else entirely happens once deaths get above 500.

[Chart: terrorist attack frequency by death toll, log-log scale]

In my view it certainly doesn’t support Nate’s contention of a power law distribution at the top end. On the contrary, it suggests that we can expect something worse, ie more frequent attacks with high casualties, than a power law would predict.
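For anyone who wants to repeat the check, a minimal Python sketch along the following lines would do it. The file name and the “nkill” column are placeholders for whatever your own extract of the START data contains, and the 9-11 renumbering adjustment described above is not shown.

```python
# Sketch of the log-log frequency check described above. Assumes a CSV
# extract of the Global Terrorism Database with one row per attack and a
# deaths column; "gtd_extract.csv" and "nkill" are placeholder names.
import pandas as pd
import matplotlib.pyplot as plt

attacks = pd.read_csv("gtd_extract.csv")
deaths = attacks["nkill"].dropna()
deaths = deaths[deaths >= 5]          # only attacks with at least 5 deaths

# Count the number of attacks at each death toll and plot on log-log axes;
# a power law would show up as a roughly straight line.
counts = deaths.value_counts().sort_index()
plt.loglog(counts.index, counts.values, "o")
plt.xlabel("Deaths per attack")
plt.ylabel("Number of attacks")
plt.title("Terrorist attack frequency by death toll")
plt.show()
```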

So what possible link could there be between terrorism and the demise of the Ikon café (there may be other restaurants where the food served met one of the other definitions of terrorism used by the Global Terrorism Database, ie intending to induce fear in an audience beyond the immediate victims, but not the Ikon)? Well, for one thing, they do have a made-up statistic in common:

2. 90% of new restaurants fail within the first year.

This is a very persistent myth, debunked as long ago as 2007 but most recently repeated in Antifragile. However, new business failures in general still run at around 25% in the first year, so the point remains valid: the pool of restaurants is constantly renewed by people with new ideas at the expense of those with failing ones. This process makes restaurant provision as a whole better as a result of the fragility of its individual members.

3. 90% of terrorist groups fail within the first year.

Now I don’t know for certain whether this conjecture by David Rapoport is false, but given my experience with the last two “facts”, I would be very sceptical that the data (i) exists and (ii) is well-defined enough to give a definitive percentage. However, there is clearly considerable turnover amongst these groups, and the methods they use have often developed more quickly than the measures taken to counter them. Each new major terrorist attempt appears to result in some additional loss of freedom for the general public, whether it be what you can carry onto an aircraft or the amount of general surveillance we are all subjected to.

So what else do restaurants and terrorism have in common? What does a restaurant do when public tastes change? It either adapts itself or dies and is replaced by another restaurant better able to meet them. What does a terrorist group do when it has ceased to be relevant? It either changes its focus, or gets replaced in support by a group that already has. However, although individual terrorist groups will find themselves hunted down, killed, negotiated with, made irrelevant or, occasionally, empowered out of existence, new groups will continue to spring up in new forms and with new causes, ensuring that terrorism overall will always be with us and, indeed, strengthening with each successive generation.

The frequency of terrorist attacks, particularly at the most outrageous end, over the last 40 years would suggest that terrorism itself, despite the destruction of most of the people practising it amongst the mayhem they cause, has indeed proved at least as antifragile as restaurants. So, in the same way that we are all getting fed better, more and more people and resources are also being sucked into a battle which looks set to continue escalating. Because the nature of terrorism is, like the availability of pizza in your neighbourhood, that it benefits from adversity.

This suggests to me:

a. that we should rethink the constant upping of security measures against a threat which is only strengthened by them; and
b. that you shouldn’t believe everything you read.

Earthquakes and Equities

Plotting the frequency of earthquakes higher than a given magnitude on a logarithmic scale gives a straightish line that suggests we might expect a 9.2 earthquake every 100 years or so somewhere in the world and a 9.3 or 9.4 every 200 years or so (the Tohoku earthquake which led to the Fukushima disaster was 9.0). Such a distribution is known as a power-law distribution, which leaves far more room for action at the extreme ends than the more familiar bell-shaped normal distribution, under which extreme events have much lower probabilities.

[Chart: earthquake frequency by magnitude, logarithmic scale]
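As a rough illustration of the sort of extrapolation behind those return periods, you can fit a straight line to log frequency against magnitude and read off the implied frequency of a 9.2. The magnitudes and annual counts below are made-up placeholders chosen to be consistent with the paragraph above, not the actual earthquake catalogue.

```python
# Sketch of a straight-line (power law) fit of log10(annual frequency of
# earthquakes of at least magnitude M) against M, extrapolated to M = 9.2.
# The data points are illustrative placeholders only.
import numpy as np

magnitudes = np.array([6.0, 6.5, 7.0, 7.5, 8.0, 8.5])
annual_freq = np.array([16.0, 5.0, 1.6, 0.5, 0.16, 0.05])  # quakes per year >= M

slope, intercept = np.polyfit(magnitudes, np.log10(annual_freq), 1)

m = 9.2
freq = 10 ** (intercept + slope * m)   # implied annual frequency of >= 9.2
print(f"Implied return period of a magnitude {m} earthquake: "
      f"about {1 / freq:.0f} years")   # roughly 100 years with these inputs
```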

Similarly, plotting the annual frequency of one day falls in the FTSE All Share index greater than a given percentage on a logarithmic scale also gives a straightish line (as you can see below), indicating that equity movements may also follow a power-law distribution, rather than the normal distribution (or log normal, where the logarithms are assumed to have a normal distribution) they are often modelled with.

However, the similarity ends there, because earthquakes normally do most of their damage in one place and on the one day, rather than in the subsequent aftershocks (although there have been exceptions to this: in The Signal and the Noise, Nate Silver cites a series of earthquakes on the Missouri-Tennessee border between December 1811 and February 1812 of magnitude 8.2, 8.2, 8.1 and 8.3). On the other hand, large equity market falls often form part of a sustained trend (eg the FTSE All Share lost 49% of its value between 11 June 2007 and 2 March 2009), with regional if not global impacts, which is why insurers and other financial institutions which regularly carry out stress testing on their financial positions tend to concern themselves with longer term falls in markets, often focusing on annual movements.

[Chart: annual frequency of one day falls in the FTSE All Share index, logarithmic scale]
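A sketch of the kind of calculation behind this graph (and behind the 5-year comparison discussed further below) might look like the following. The CSV of daily closing prices and its column names are placeholders; the 2% and 5% thresholds are the ones referred to in the text.

```python
# Sketch: annual frequency of one day falls of at least 2% and 5%, for the
# whole dataset and for the last 5 years only. "ftse_all_share.csv" with
# columns "date" and "close" is a placeholder for real daily closing prices.
import pandas as pd

prices = pd.read_csv("ftse_all_share.csv", parse_dates=["date"]).set_index("date")
returns = prices["close"].pct_change().dropna()
falls = -returns[returns < 0]                # one day falls as positive fractions

def annual_frequency(falls, threshold):
    """Number of one day falls of at least `threshold`, per year of data."""
    years = (falls.index[-1] - falls.index[0]).days / 365.25
    return (falls >= threshold).sum() / years

last5 = falls[falls.index >= falls.index[-1] - pd.DateOffset(years=5)]
for threshold in (0.02, 0.05):
    print(f"Falls of at least {threshold:.0%}: "
          f"{annual_frequency(falls, threshold):.2f} per year over the whole period, "
          f"{annual_frequency(last5, threshold):.2f} per year over the last 5 years")
```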

How you measure it obviously depends on the data you have. My dataset on earthquakes spans nearly 50 years, whereas my dataset for one day equity falls only starts on 31 December 1984, which was the earliest date from which I could easily get daily closing prices. However, as the Institute of Actuaries’ Benchmarking Stochastic Models Working Party report on Modelling Extreme Market Events pointed out in 2008, the worst one-year stock market loss in UK recorded history was from the end of November 1973 to the end of November 1974, when the UK market (measured on a total return basis) fell by 54%. So, with 50 years of one year falls rather than 28.5 years of one day falls, a fall of 54% counts as a 1 in 50 year event; with a whole millennium of data, assuming it remained the worst fall on record, it would count as a 1 in 1,000 year event.

On the other hand, if your dataset is 38 years or less (like mine) it doesn’t include a 54% annual fall at all. Does this mean that you should try and get the largest dataset you can when deciding on where your risks are? After all, Big Data is what you need. The more data you base your assumptions on the better, right?

Well not necessarily. As we can already see from the November 1973 example, a lot of data where nothing very much happens may swamp the data from the important moments in a dataset. For instance, if I exclude the 12 biggest one day movements (positive and negative) from my 28.5 year dataset, I get a FTSE All Share closing price on 18 July 2013 of 4,494 rather than 3,513, ie 28% higher.
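The same placeholder setup can be used to sketch that recalculation: strip out the 12 largest daily moves by absolute size and recompound what is left from the starting price.

```python
# Sketch: remove the 12 largest one day moves (up or down, by absolute size)
# and recompound the remaining daily returns from the first closing price.
# "ftse_all_share.csv" is the same placeholder file as above.
import pandas as pd

prices = pd.read_csv("ftse_all_share.csv", parse_dates=["date"]).set_index("date")
returns = prices["close"].pct_change().dropna()

biggest = returns.abs().nlargest(12).index      # dates of the 12 largest moves
trimmed = returns.drop(biggest)

actual_close = prices["close"].iloc[-1]
trimmed_close = prices["close"].iloc[0] * (1 + trimmed).prod()
print(f"Actual final close: {actual_close:,.0f}; "
      f"final close excluding the 12 biggest moves: {trimmed_close:,.0f}")
```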

Also, using more data only makes sense if that data is all describing the same thing. But what if the market has fundamentally changed in the last 5 years? What if the market is changing all the time and no two time periods are really comparable? If you believe this you should probably only use the most recent data, because the annual frequency of one day falls of all sizes appears to be on the rise. For one day falls of at least 2%, the annual frequency over the last 5 years is more than twice that for the whole 28.5 year dataset (see graph above). For one day falls of at least 5%, the last 5 years have three times the annual frequency of the whole dataset. The number of instances of one day falls over 5.3% drops off sharply, so it becomes more difficult to draw comparisons at the extreme end, but the slope of the 5 year data does appear to be significantly less steep than for the other datasets, ie expected frequencies of one day falls at the higher levels would also be considerably higher based on the most recent data.

Do the last 5 years represent a permanent change to markets or are they an anomaly? There are continual changes to the ways markets operate which might suggest that the markets we have now are different in some fundamental way. One such change is the growing use of models that take an average return figure and an assumption about volatility and from there construct a whole probability distribution of returns (disturbingly frequently the normal or log normal distribution) to guide decisions. Use of these models has led to much more confidence in predictions than in past times (after all, the printouts from these models don’t look like the fingers in the air they actually are) and much riskier behaviour as a result (particularly, as Pablo Triana shows in his book Lecturing Birds on Flying, when traders are not using the models institutional investors assume they are using in determining asset prices). Riskier behaviour, for instance, in how much capital to set aside and how much can safely be borrowed, all due to too much confidence in our models and the Big Data they work off.
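To illustrate quite how little these models need to be fed, here is a minimal sketch of a standard parametric (normal) calculation of a “1 in 200” one-year Value at Risk from nothing more than an assumed mean return and volatility. All of the inputs are made-up placeholders.

```python
# Sketch of a parametric "1 in 200" one-year Value at Risk: assume annual
# returns are normally distributed, feed in a mean and a volatility, and
# read off the 0.5th percentile. Every input here is a placeholder.
from scipy.stats import norm

mean_return = 0.07        # assumed annual mean return (placeholder)
volatility = 0.20         # assumed annual standard deviation (placeholder)
portfolio = 100_000_000   # portfolio value (placeholder)

# The 0.5% worst-case annual return under the normal assumption,
# ie the 1 in 200 year event the regulations are built around.
worst_return = norm.ppf(0.005, loc=mean_return, scale=volatility)
var_1_in_200 = -worst_return * portfolio
print(f"Modelled 1 in 200 one-year loss: {var_1_in_200:,.0f}")
```

The whole distribution, and with it the apparent precision of the answer, comes from the normality assumption; nothing in the calculation knows anything about how markets behave once they stop being calm.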

Because that is what has really changed. Ultimately markets are just places where we human beings buy and sell things, and we probably haven’t evolved all that much since the first piece of flint or obsidian was traded in the stone age. But our misplaced confidence in our ability to model and predict the behaviour of markets is very much a modern phenomenon.

Just turning the handle on your Big Data will not tell you how big the risks you know about are. And of course it will tell you nothing at all about the risks you don’t yet know about. So venture carefully in the financial landscape. A lot of that map you have in front of you is make-believe.