There are many reasons why it is much harder for a small actuarial consulting firm to do business than a large one. Large firms can obviously afford to put people on the committees which design actuarial regulation, whereas small practitioners tend not to be able to spare the billing time lost. This has resulted in many recent developments, in regulation in particular, disproportionately favouring larger firms.

The Technical Actuarial Standards (TAS), whatever your opinion of them (and I am generally in favour), have spawned TAS committees in larger firms and, in all firms, have required a redesign of most advice given by pensions actuaries. This has been bad enough for large firms, but much more difficult for firms with one or two actuaries. Large firms can devote resources to producing the personality-free template documents we see springing up all over the place, and have a ready source of peer advice to help apply the TASs to new documents as they become necessary. The “tick list” approach of GN9, GN11, GN16, GN19 and the rest, so heavily criticised by the now defunct Board for Actuarial Standards (BAS) when introducing the TASs, did at least make compliance relatively straightforward for small firms, allowing them to concentrate on the far more important and personal task of tailoring advice to the specific needs of their clients.

The new guidance for actuaries on conflicts of interest is similarly slanted. The suggestions are almost all big company solutions, from separation of teams to information barriers to setting up conflicts committees, designed to protect the income of firms with multiple offices from the loss of the ability to provide advice to connected parties. The one-man business is pretty much left with “ceasing to act” as a strategy, leaving the field even clearer for the bigger firms.

I have been vaguely aware of this for some time, but since I left a medium-sized consultancy last year and started providing peer review services to small firms, it has been harder to ignore. I do not expect to continue as a sole trader over the long term, but I fear for those who do.

And the latest example that has struck me is the recent behaviour of the Continuous Mortality Investigation (CMI). This is an organisation with a proud tradition of providing analysis and resources on all aspects of mortality, longevity and morbidity to the Actuarial Profession. Anyone could access their materials for free, unlike Hymans Robertson’s Club Vita or the postcode analyses provided by companies like Longevitas. It was public data, available and accessible to the public, academics, journalists and actuaries alike, working in the public interest.

No more. A fee structure has been put in place with effect from 1 April this year. Large consultancies will pay what, for them, is a flea bite of a fee. But I imagine some of the small firms will think twice, weighing the cost of being locked out against the fee for continued access. And to demonstrate just how unfair it is, I have graphed the cost per qualified UK actuary below:

[Graph: CMI fee per qualified UK actuary, by size of firm]

Apart from the fun to be had seeing how the formula impacts different consultancies (and speculating about some of the lobbying that might have been going on to achieve this), the graph shows us that the average cost starts at £250 for a firm with one actuary, but ends at around £30 per actuary for a firm the size of Towers Watson (mainly based on the number of UK actuaries listed in the latest actuarial directory – my apologies if any of these are out of date).

There are anomalies too. A firm with 20 actuaries pays £210 per actuary, whereas one with 21 pays £352 (the highest per actuary cost of all).

It is not as if these are avoidable costs. Funding and accounting cost mortality assumptions may not need to be updated every year but other routine work will. For instance, thanks to changes to the Statutory Money Purchase Illustrations (SMPI) technical memorandums since December 2011 (overseen by the Financial Reporting Council’s (FRC’s) actuarial council with, you guessed it, no one from a small actuarial firm on board), anyone without access to the CMI 2013 projections (which are the first to be pay-to-view) will be unable to provide SMPIs from 6 April next year.

This does not appear to me to be fair treatment of smaller actuarial firms, nor of their clients, who are also small firms. According to the Association of Consulting Actuaries’ (ACA’s) Second Report of the ACA Smaller Firms’ Pensions Survey, published earlier this year, small firms, which the larger consultancies are increasingly finding not cost-effective to service, represent a more and more important sector of the economy:

The small and medium-sized enterprises (SME) sector, here defined as businesses employing 250 or fewer employees, is the largest part of the UK private sector economy in terms of employment. These smaller firms employ over half of the UK’s private sector employees (59.1%) and generate just short of a half (48.8%) of all private sector turnover, amounting to some £1,500 billion per year. They make up over 99% of all UK private sector enterprises. The number of these SMEs has increased by 39% since 2000, whereas there are only just over 6,500 UK private sector enterprises that now employ 250 or more employees compared to 7,200 a decade or so ago (a reduction of 10% over the period).

If the CMI does have to charge for its services, then I would propose a flat per actuary fee, set at a rate designed to generate the same level of income, as a much fairer approach. Assuming this aimed at raising between £250,000 and £300,000 from consultancies next year, I estimate this should result in a per actuary fee of around £100. In my view that would be replacing the mortality of fairness with fairness of mortality.
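For anyone who wants to check the arithmetic, here is a quick Python sketch. Only the £250,000–£300,000 income target comes from the figures above; the total actuary count at consultancies is my own illustrative assumption, back-solved to match the £100 estimate.

```python
# Sketch of the flat-fee arithmetic. The income target comes from the post;
# the total number of actuaries at UK consultancies is an assumed figure,
# chosen only to illustrate how a fee of around £100 could arise.
target_income = (250_000 + 300_000) / 2   # midpoint of the stated range, in £
consultancy_actuaries = 2_750             # hypothetical total across all firms

flat_fee = target_income / consultancy_actuaries
print(f"Flat fee per actuary: £{flat_fee:.0f}")
```

With these assumptions the flat fee comes out at £100 per actuary, against the £250 per actuary currently paid by a one-actuary firm and £30 by the largest.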


The Wizard of EIOPA is back. Gabriel Bernardino, the chairman of the European Insurance and Occupational Pensions Authority (EIOPA), outlined in May the powers he thought were necessary for EIOPA to enhance its role as a European Supervisory Authority for the insurance market. Insurers are currently busily opposing such calls.

On 5 September, therefore, he changed tack with a speech about pensions instead, entitled Creation of a sustainable and adequate pension system in the EU and the role of EIOPA.

“The creation of an adequate, safe and sustainable pensions system is one of the key objectives of the European Union and EIOPA is committed to contribute by all means to the development of such a system,” he boomed, despite the EU’s Constitutional Treaty not mentioning pensions anywhere among its objectives. He went on to state that EIOPA approved of assets and liabilities being valued on a market consistent basis. So far, so uncontroversial.

But then he continued. “EIOPA suggested a number of elements to reinforce the governance of IORPs: for example the performance of an own risk and solvency assessment.” What? An ORSA? You mean that thing that has led the insurance industry to spend millions on consultants and conferences, and which, five years on from when it was first mooted, only 24% of European insurers – with all of the resources available to them – are currently able to implement fully? (Moody’s Analytics, in their July 2013 survey of 45 European insurers, concluded that 24% of those interviewed had the processes, methodologies and models in place to fulfil Pillar 2 requirements – the key one being the ORSA.)

He wasn’t finished. EIOPA proposed to require defined contribution (DC) schemes to produce a Key Information Document (KID). This would contain “information about the objectives and investment policies, performance, costs and charges, contribution arrangements, a risk/reward profile and/or the time horizon adopted for the investment policy”. So, a massive amount of additional bureaucracy around the default funds in a DC scheme by the sound of it, with no appearance of understanding that members of DC schemes can choose their own investment policies, nor a word about the hottest issue in DC provision at the moment, ie the level of charges. Surely he is KIDding?

Next he carried on a bit about the “holistic balance sheet” concept which, despite its recently having been kicked into the long grass for implementation by pension schemes, he clearly desperately wants to bring back if he can, before giving his interpretation of the findings of the quantitative impact study (QIS) carried out as part of the process to advise the European Commission on the review of the IORP Directive. This boiled down to:

  • Some pension schemes have surpluses, others have deficits.
  • Schemes can either make up deficits by paying more contributions (“It is not unusual that future sponsor support needs to cover as much as 25% of liabilities,” said the Wizard, with the air of a man discovering a new physical law) or by reducing benefits.

I think we can all agree that however much the QIS cost it was worth the money.

Then we moved onto the Swedish part of the speech. “I would like to take this opportunity to thank the Swedish pension industry, the Swedish pension protection scheme (PRI Pensionsgaranti) and the Swedish supervisor (Finansinspektionen) for their contributions to the QIS. Sweden was the only member state with sufficient financial assets to cover the pension liabilities as well as the solvency capital requirement. Pension funds showed on average even a substantial surplus over the SCR of 13% of liabilities. Of course, an important reason for these positive outcomes is that Sweden already imposes a prudential regime that is market consistent and risk based, by using the quarterly Traffic Light stress test. In my view, this clearly illustrates that a future European regulatory regime should be market consistent in order to ensure a comparable and realistic assessment of the financial situation of pension funds; and risk based in order to provide IORPs with the right incentives for managing risks.”

What? According to the Swedish Pension Agency’s annual report on the Swedish Pension System for 2012, that system consists of a pay-as-you-go scheme (the inkomstpension) and effectively an insured personal pension (the premium pension). So it clearly illustrates precisely nothing for a defined benefit occupational pension system like that of the UK, with total buy out liabilities of around £1,700 billion (according to the UK Pensions Regulator’s Purple Book for 2012). However, did I mention where this speech was taking place? Sweden.

It was time for the Wizard to return to building his empire. EIOPA needs more resources, more powers and now, apparently, more of a mandate. Obviously the m-word is a bit of a problem for all European institutions but it grates on the Wizard particularly. Despite the Solvency II omnibus grinding along on the hard shoulder and the first attempt at new funding targets for occupational schemes being rebuffed, he believes that now the time is right for a foray into personal pensions. He refers to occupational pensions as “the so called 2nd pillar”, which confused me at first as I thought that was the risk management part of Solvency II. However, then I realised that Eurocrats are simply obsessed with pillars, because now he is calling personal pensions “the so called 3rd pillar”. And he wants to run them with the same efficient competence we have come to associate with the Wizard.

The Wizard wants a new sort of personal pension known as an “EU retirement savings product”. This would avoid “the traps of the short term horizon” (ie pesky scheme members deciding they need their money earlier than EIOPA decrees they can have it) and be “managed using robust and modern risk management tools” (does he mean things like the stochastic techniques and Value at Risk methodologies which have shown no discernible ability to manage risk in banks to date?). Finally, “it should have access to a European passport allowing for cross border selling”. There has been scope for UK occupational pension schemes to become cross border schemes for some time now. Hardly any have taken up this “opportunity” so far, because it came with a requirement to increase the funding of a defined benefit scheme immediately to buy out level. For defined contribution schemes, as we saw with the fate of stakeholder pensions, the appropriate vehicle very much depends on the form of social security system in place, the degree of means testing and when it happens and, most importantly, how it interacts with the state pension system. One size will definitely not fit all, and I expect that the Wizard’s EU retirement savings product will remain a largely theoretical entity for this reason.

However, my favourite part of the Wizard’s speech was left until last. “It is our collective responsibility to face reality,” he offered, with no hint of irony. I fear that the Wizard’s sense of reality is more a type of magical realism where people can fly, or turn into unicorns and EU officials can regulate things they don’t understand without unintended consequences.

“Please help us to move in the right direction,” he concluded. I think we should all do precisely that, by opposing what the Wizard of EIOPA proposes for pension schemes of all kinds.

The Advertising Standards Authority (ASA) has decided that no action will be taken against the Law Society as a result of their Don’t Get Mugged campaign, which ran during July and August. The advert encouraged accident victims to seek legal representation from solicitors rather than rely on a third-party capture offer from an insurer. The ASA considered that it neither denigrated insurers nor was likely to distress victims of actual muggings.

What this campaign does appear to do is open the door to professionals attacking each other overtly in their advertising (what US presidential candidates call “going negative” with “attack ads”) in their desire to get more business from the public. Supermarkets have done this from time to time, but it is relatively rare elsewhere. I still remember the Dillons advert from the eighties aimed at its main competitor, Foyles, (Foyled again? Try Dillons) because of its rarity. However it looks like we may be about to see a lot more of it. Will this give rise to insurers and banks going at it with independent financial advisers even more loudly over the level of independence of the advice being received, or solicitors and licensed conveyancers escalating hostilities? Will this help informed decision making by anyone? I somehow doubt it.

It seems unlikely that solicitors will turn the same type of emotional manipulation on each other any time soon, as the Solicitors’ Regulatory Authority sets out as one of its principles that solicitors should “behave in a way that maintains the trust the public places in you and in the provision of legal services”. So that leaves the field clear for other professions to get in there. I would suggest something like this (add the name of your profession as appropriate):

[Image: suggested attack advert]

Let the games commence!

When the GCSE results came out the other week I had a special reason to be interested as my daughter Polly was one of the anxious students waiting for them. As it turned out, she did very well, but I ended up listening to news coverage of the event which perhaps ordinarily I would have missed.

And how obnoxious it was! Despite a reassuring increase in students taking the more difficult subjects, and pass rates at all grades statistically no different from the previous year, nearly every media outlet seemed to report a drop, once again the direction deemed more important than the amount. Unless the numbers go up every year, apparently, none of us are happy.

When the steam (or hot air at least) had run out of these criticisms, people of my age seemed to be queuing up to appear on radio stations to tell today’s students that what they lacked was something called “grit”. We need to introduce a GCSE in Grit, they cried.

Grit. Really? This is the generation which has not had the grit to adequately tackle any issue which threatened the immediate earnings of the already rich and powerful, like climate change for instance, or a just tax system, either globally or even nationally.

But they are right in a way, because a generational war has been declared and the sooner today’s students wake up to this the better. We are not all in it together. The labels of “baby boomer” or Generation X, Y and Z are there to put us into economic camps (definitions vary, but I, at 50, am somewhere on the boomer-X boundary apparently, my children are Y and Z, or both Z, depending on the point someone is trying to squeeze out of the data). When they stumble out into the job market, today’s students risk having insufficient qualifications (because even when the numbers do all go up, employers cry “grade inflation” and pull up the drawbridge even further) for anything but a McJob, on zero hours contracts, or nothing at all, subject to youth curfews, ASBOs and acoustic dispersal devices. If they are lucky enough to be graduates they will have, in addition, a loan of at least £40,000 to repay. If they want to rent somewhere to live, they will be at the mercy of an insufficiently regulated private rental market. If they want to buy, they are highly vulnerable to a property bubble being inflated for all it’s worth by George Osborne. It would be hard not to conclude that the rest of society had declared war on them while they were preparing for their gritless exams.

Meanwhile, the sense of entitlement amongst the boomers is frequently drowning out any other voices. Low interest rates are bad because they “attack” pensioners’ savings, and make annuities more expensive for those about to become pensioners. However this is just special pleading for one generational group. Low interest rates are good for making the Government’s money go further, and for spending on other priorities than the boomers.

Similarly high inflation is bad news if you’re a pensioner, and if your pension, as most are where the pensioner had a choice, is not inflation-linked. However, provided it is accompanied by earnings and economic growth, ultimately it is how a deficit burden, both private and public, is going to be shrunk most effectively.

The last thing Ys and Zs need is another 50-something lecturing them on what to do, but my plea would be that they don’t let these arguments be lost by default. The battle lines have been drawn. And I know which side I’m on.


A man is sentenced to 7 years in prison for selling bomb detectors which had no hope of detecting bombs. The contrast with the fate of those who, over 20 years, have continued to sell to large financial institutions and their regulators complex mathematical models which have no hope of protecting them from massive losses at the precise point when they are required, is illuminating.

The devices made by Gary Bolton were simply boxes with handles and antennae. The “black boxes” used by banks and insurers to determine their worst loss in a 1 in 200 probability scenario (the Value at Risk or “VaR” approach) are instead filled with mathematical models primed with rather a lot of assumptions.

The prosecution said Gary Bolton sold his boxes for up to £10,000 each, claiming they could detect explosives. Towers Watson’s RiskAgility (the dominant model in the UK insurance market) by contrast is difficult to price, as it is “bespoke” for each client. However, according to Insurance ERM magazine in October 2011, for Igloo, their other financial modelling platform, “software solutions range from £50,000 to £500,000 but there is no upper limit as you can keep adding to your solution”.

Gary Bolton’s prosecutors claimed that “soldiers, police officers, customs officers and many others put their trust in a device which worked no better than random chance”. Much the same could be said of bankers in 2008, who put their trust in a device which worked worse the further the financial variables being modelled strayed from the normal distribution.

As he passed sentence, Judge Richard Hone QC described the equipment as “useless” and “dross” and said Bolton had damaged the reputation of British trade abroad. By contrast, despite a brief consideration of alternatives to the VaR approach by the Basel Committee on Banking Supervision in 2012, it remains firmly in place as the statutory measure of solvency for both banks and insurers.

The court was told Bolton knew the devices – which were also alleged to be able to detect drugs, tobacco, ivory and cash – did not work, but continued to supply them to be sold to overseas businesses. In Value at Risk: Any Lessons from the Crash of Long-Term Capital Management (LTCM)?, published in Spring 2005, Mete Feridun of Loughborough University set out to analyse the failure of the LTCM hedge fund in 1998 from a risk management perspective, aiming to derive implications for the managers of financial institutions and for the regulating authorities. The study concluded that LTCM’s failure could be attributed primarily to its VaR system, which failed to estimate the fund’s potential risk exposure correctly. Many other studies agreed.

“You were determined to bolster the illusion that the devices worked and you knew there was a spurious science to produce that end,” Judge Hone said to Bolton. This brings to mind the actions of Philippe Jorion, Professor of Finance at the Graduate School of Management at the University of California at Irvine, who by the winter of 2009 was already proclaiming that “VaR itself was not the culprit, however. Rather it was the way this risk management tool was employed.” He also helpfully pointed out that LTCM had been very profitable in 1995 and 1996. He and others have been muddying the waters ever since.

“They had a random detection rate. They were useless,” concluded Judge Hone. VaR, by contrast, had a protective effect only within what were regarded as “possible” market environments, ie something similar to what had been seen before during relatively calm market conditions. In fact, VaR became less helpful the more people adopted it, as everyone using it ended up with similar trading positions, which they then attempted to exit at the same time. This meant that buyers could not be found when they were needed and the positions of the hapless VaR customers tanked even further.

Gary Bolton’s jurors concluded that, if you sell people a box that tells them they are safe when they are not, it is morally reprehensible. I think I agree with them.

I think if I were to ask you what you thought the best way to manage risk was, there would be a significant risk that you would give me a very boring answer. I imagine it would involve complicated mathematical valuation systems, stochastic models and spreadsheets, lots of spreadsheets, risk indicators, traffic light arrangements, risk registers. If you work for an insurance company, particularly on the actuarial side, it would be very quantified, with calculations of the reserves required to meet “1 in 200 year” risks featuring heavily. Recently even operational risk is increasingly being approached from a more quantifiable angle, with Big Data being collected across many users to pool and estimate risk probabilities.

Now you can argue about these approaches, and particularly about the Value at Risk (VaR) tool which has brought this 1 in 200 probability over the next year into nearly every risk calculation carried out in the financial sector. You can argue too about the Gaussian copula, which uses a correlation matrix to take credit for the “fact” that combinations of very bad things happening are vanishingly rare (the “Gaussian” referring to the normal distribution, under which events more than three standard deviations or “sigma” away from the average are vanishingly rare), when such combinations are actually quite likely once the market environment gets bleak enough. The losses at Fortis and AIG in 2008 were over 5 and 16 sigma above their averages respectively.
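To put those sigma figures in context, here is a short Python sketch of just how rare such losses would be if the variables really were normally distributed. The 250 trading days a year is an assumption for illustration only.

```python
import math

def normal_tail_prob(sigmas: float) -> float:
    """One-sided probability of a standard normal outcome beyond `sigmas`."""
    return 0.5 * math.erfc(sigmas / math.sqrt(2.0))

# The 1-in-200 (0.5%) tail used in VaR sits at roughly 2.58 sigma.
p_1_in_200 = normal_tail_prob(2.576)   # ~0.005

# Under normality, a 5-sigma daily loss should turn up about once in
# 1 / (p * 250) years, assuming ~250 trading days per year.
p5 = normal_tail_prob(5)               # ~2.9e-7
p16 = normal_tail_prob(16)             # ~6e-58: effectively "never"
wait_years_5_sigma = 1.0 / (p5 * 250)
print(f"{wait_years_5_sigma:,.0f} years between 5-sigma days")
```

On these assumptions a 5-sigma daily loss is a once-in-fourteen-thousand-years event, and a 16-sigma loss should never happen in the lifetime of the universe; which tells you something about the normality assumption rather than about 2008.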

The news last week that the US Attorney for the Southern District of New York had charged two JP Morgan traders with fraud in connection with the recent $6.2 billion “London whale” trading losses reminded me that VaR as it is currently used was largely cooked up at JP Morgan in the early 90s. VaR is now inescapable in the financial industry, having effectively been baked into both the Basel II regulatory structure for banks and Solvency II for insurers.

The common approaches to so-called “quantifiable” risk may have their critics, but at least they are being widely discussed (the famous debate from 1997 between Philippe Jorion and Nassim Nicholas Taleb being just one such discussion). However, one of the other big problems with risk management is that we rarely get off the above “boring” topics, and people who don’t get the maths often conclude that risk management is difficult to understand. In my view we should be talking much more about what companies are famous for (because this is also where their vulnerability lies) and the small number of key people they totally rely on (not all of whom they may even be aware of).

If you asked most financial firms what they were famous for, I imagine that having a good reputation as a company that can be trusted with your money would score pretty highly.

A recent survey of the impact of the loss of reputation amongst financial services companies on Wall Street revealed that 44% of them lost 5% or more in business in the past 12 months due to ongoing reputation and customer satisfaction issues. Losses based on total sales of these companies are estimated at hundreds of millions of dollars. There was an average loss of 9% of business among all companies surveyed.

And the key people we totally rely on? Well, just looking at the top five rogue traders (before the London Whale), we have:

1. SocGen losing 4.9 billion Euros in 2008 when Jerome Kerviel was found guilty of breach of trust, forgery and unauthorised use of the bank’s computers in their Paris office with respect to European Stock Index futures.
2. Sumitomo Corp losing $2.6 billion in 1996 when Yasuo Hamanaka made unauthorised trades while controlling 5% of the world’s copper market from Tokyo.
3. UBS losing $2.3 billion in 2011 when Kweku Adoboli was found guilty of abusing his position as an equity trade support analyst in London with unauthorised futures trading.
4. Barings Bank losing $1.3 billion in 1995 when Nick Leeson made unauthorised speculative trades (specifically in Nikkei Index futures) as a derivatives broker also in London.
5. Resona Holdings losing $1.1 billion in 1995 when Toshihide Iguchi made 30,000 unauthorised trades in US Treasury bonds in Osaka and New York over a period of 11 years beginning in 1984.

None of these traders will, of course, have done anything for the reputations of their respective organisations either.

These are risks that can’t be managed by just throwing money at them or constructing complicated mathematical models. Managing them effectively requires intimate knowledge of your customers and what is most important in your relationship with them, who your key people are (not necessarily the most senior, Jerome Kerviel was only a junior trader at his bank) and what they are up to on a daily basis, ie what has always been understood as good business management.

And that doesn’t involve any boring mathematics at all.

I have been thinking about the turnover of restaurants in Birmingham recently. There have been a number of new launches in the city in the last year, from Adam’s, with Michelin starred Adam Stokes, to Café Opus at Ikon to Le Truc, each replacing struggling previous ventures.

Nassim Nicholas Taleb makes the case, in his book Antifragile, for the antifragility of restaurants. As he says: “Restaurants are fragile, they compete with each other, but the collective of local restaurants is antifragile for that very reason. Had restaurants been individually robust, hence immortal, the overall business would be either stagnant or weak, and would deliver nothing better than cafeteria food – and I mean Soviet-style cafeteria food. Further, it would be marred with systemic shortages, with, once in a while, a complete crisis and government bailout. All that quality, stability, and reliability are owed to the fragility of the restaurant itself.”

I wondered if this argument could be extended to terrorism, in an equally Talebian sense.

But first, three false premises:

1. Terrorist attack frequency follows a power law distribution.

Following on from my previous post, I thought I had found another power law distribution in Nate Silver’s book The Signal and the Noise. He sets out a graph of terrorist attack frequencies by death toll. The source of the data was the Global Terrorism Database for NATO countries from 1979 to 2009. I thought I would check this, and downloaded an enormous 45MB Excel file from the National Consortium for the Study of Terrorism and Responses to Terrorism (START). I decided to use the entire database (ie from 1970 to 2011), with the proviso that I would use only attacks leading to at least 5 deaths to keep it manageable (as Nate Silver had done). The START definition of terrorism is that it is only committed by NGOs, and they also had a strange way of numbering attacks which, for instance, counted 9-11 as four separate attacks (I adjusted for this). I then used a logarithmic scale on each axis, and the result is shown below. It is not even straightish: it has a definite downward curve, with something else entirely happening when deaths get above 500, so it is probably not quite a power law distribution.

[Graph: frequency of terrorist attacks by death toll, log-log scale]

In my view it certainly doesn’t support Nate’s contention of a power law distribution at the top end. On the contrary, it suggests that we can expect something worse, ie more frequent attacks with high casualties, than a power law would predict.
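The check itself is easy to reproduce. Here is a minimal Python sketch of the method, using synthetic Pareto-distributed death tolls in place of the START download: a true power law with exponent α plots as a straight line of slope −α on log-log exceedance axes, so it is curvature like that in the real data which rules the power law out.

```python
import math
import random

def exceedance_counts(values, thresholds):
    """Number of observations at or above each threshold."""
    return [sum(v >= t for v in values) for t in thresholds]

def loglog_slope(xs, ys):
    """Least-squares slope of log(y) against log(x)."""
    lx, ly = [math.log(x) for x in xs], [math.log(y) for y in ys]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    num = sum((a - mx) * (b - my) for a, b in zip(lx, ly))
    den = sum((a - mx) ** 2 for a in lx)
    return num / den

# Synthetic stand-in for the real data: Pareto with alpha = 2, minimum 5 deaths,
# via inverse transform sampling (X = 5 / sqrt(U) gives P(X >= t) = (5/t)^2).
random.seed(1)
tolls = [5.0 / math.sqrt(random.random()) for _ in range(100_000)]

thresholds = [5, 10, 20, 40, 80, 160]
counts = exceedance_counts(tolls, thresholds)
slope = loglog_slope(thresholds, counts)
print(f"log-log slope: {slope:.2f}")  # close to -2 for this true power law
```

Run the same two functions over the real death tolls and the departure from a straight line shows up immediately in the residuals.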

So what possible link could there be between terrorism and the demise of the Ikon café (there may be other restaurants where the food served met one of the other definitions of terrorism used by the Global Terrorism Database, ie intending to induce fear in an audience beyond the immediate victims, but not the Ikon)? Well, for one thing, they do have a made up statistic in common:

2. 90% of new restaurants fail within the first year.

This is a very persistent myth, most recently repeated in Antifragile, which was debunked as long ago as 2007. However, new business failures in general are still up at around 25% in the first year, which means the point that the pool of restaurants is constantly renewed by people with new ideas at the expense of those with failing ones remains valid. This process makes the restaurant provision as a whole better as a result of the fragility of its individual members.

3. 90% of terrorist groups fail within the first year.

Now I don’t know for certain whether this conjecture by David Rapoport is false, but given my experience with the last two “facts”, I would be very sceptical that the data (i) exists and (ii) is well-defined enough to give a definitive percentage. However, clearly there is a considerable turnover amongst these groups, and the methods used by them have developed often more quickly than the measures taken to counter them. Each new major terrorist attempt appears to result in some additional loss of freedom for the general public, whether it be what you can carry onto an aircraft or the amount of general surveillance we are all subjected to.

So what else do restaurants and terrorism have in common? What does a restaurant do when public tastes change? It either adapts itself or dies and is replaced by another restaurant better able to meet them. What does a terrorist group do when it has ceased to be relevant? It either changes its focus, or gets replaced in support by a group that already has. However, although individual terrorist groups will find themselves hunted down, killed, negotiated with, made irrelevant or, occasionally, empowered out of existence, new groups will continue to spring up in new forms and with new causes, ensuring that terrorism overall will always be with us and, indeed, strengthening with each successive generation.

The frequency of terrorist attacks, particularly at the most outrageous end, over the last 40 years would suggest that terrorism itself, despite the destruction of most of the people practising it amongst the mayhem they cause, has indeed proved at least as antifragile as restaurants. So, in the same way that we are all getting fed better, more and more people and resources are also being sucked into a battle which looks set to continue escalating. Because the nature of terrorism is, like the availability of pizza in your neighbourhood, that it benefits from adversity.

This suggests to me:

a. that we should rethink the constant upping of security measures against a threat which is only strengthened by them; and
b. that you shouldn’t believe everything you read.

Plotting the frequency of earthquakes higher than a given magnitude on a logarithmic scale gives a straightish line that suggests we might expect a 9.2 earthquake every 100 years or so somewhere in the world and a 9.3 or 9.4 every 200 years or so (the Tohoku earthquake which led to the Fukushima disaster was 9.0). Such a distribution is known as a power-law distribution, which gives more room for action at the extreme ends than the more familiar bell-shaped normal distribution, which gives much lower probabilities for extreme events.

[Graph: earthquakes]
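For what it’s worth, the extrapolation along that straightish line is easy to sketch. The rates below are invented for illustration (they are not the actual earthquake figures), but the method, fitting log frequency against magnitude and reading off a return period, is the one described above:

```python
import math

# Invented illustrative rates (events per year exceeding each magnitude);
# NOT real seismic data, just numbers chosen to behave similarly.
magnitudes = [7.0, 7.5, 8.0, 8.5]
rates_per_year = [1.6, 0.5, 0.16, 0.05]

# Fit a straight line to log10(rate) against magnitude by least squares,
# which is what "a straightish line on a logarithmic scale" amounts to.
xs = magnitudes
ys = [math.log10(r) for r in rates_per_year]
n = len(xs)
mx = sum(xs) / n
my = sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
intercept = my - slope * mx

def return_period(magnitude):
    """Expected years between events exceeding the given magnitude."""
    rate = 10 ** (intercept + slope * magnitude)
    return 1 / rate

print(round(return_period(9.2)))  # roughly a century with these invented rates
```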

Similarly, plotting the annual frequency of one day falls in the FTSE All Share index higher than a given percentage on a logarithmic scale also (as you can see below) gives a straightish line, indicating that equity movements may also follow a power-law distribution, rather than the normal distribution (or log normal, where the logarithms are assumed to have a normal distribution) they are often modelled with.

However, the similarity ends there. Earthquakes normally do most of their damage in one place and on one day, rather than in the subsequent aftershocks (although there have been exceptions: in The Signal and the Noise, Nate Silver cites a series of earthquakes on the Missouri-Tennessee border between December 1811 and February 1812, of magnitudes 8.2, 8.2, 8.1 and 8.3). Large equity market falls, on the other hand, often form part of a sustained trend (eg the FTSE All Share lost 49% of its value between 11 June 2007 and 2 March 2009) with regional if not global impacts, which is why insurers and other financial institutions which regularly carry out stress testing on their financial positions tend to concern themselves with longer term falls in markets, often focusing on annual movements.

[Graph: equities]
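The practical difference between the two shapes is all in the tails, and a minimal sketch makes the point. The calibration below is invented (a 1% daily standard deviation for the normal, and a Pareto-style tail with exponent 3 pinned so that falls beyond 2% have a 1% chance on any day), but the comparison is the one that matters:

```python
import math

def normal_tail(x, sigma):
    """P(one day fall worse than x%) if falls were normal with sd sigma, mean 0."""
    return 0.5 * math.erfc(x / (sigma * math.sqrt(2)))

def power_tail(x, alpha=3.0, x0=2.0, p0=0.01):
    """P(one day fall worse than x%) under a Pareto-style tail calibrated so
    that a fall beyond x0% has daily probability p0 (invented calibration)."""
    return p0 * (x / x0) ** (-alpha)

# Chance of a one day fall worse than 5%, assuming a 1% daily standard deviation:
p_normal = normal_tail(5.0, 1.0)  # a "5 sigma" day: about 3 in 10 million
p_power = power_tail(5.0)         # about 6 in 10 thousand
print(p_power / p_normal)         # the power law makes it thousands of times likelier
```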

How you measure it obviously depends on the data you have. My dataset on earthquakes spans nearly 50 years, whereas my dataset for one day equity falls only starts on 31 December 1984, which was the earliest date from which I could easily get daily closing prices. However, as the Institute of Actuaries’ Benchmarking Stochastic Models Working Party report on Modelling Extreme Market Events pointed out in 2008, the worst one-year stock market loss in UK recorded history was from the end of November 1973 to the end of November 1974, when the UK market (measured on a total return basis) fell by 54%. So, with 50 years of one year falls rather than 28.5 years of one day falls, a 54% fall looks like a 1 in 50 year event; with a whole millennium of data behind it, the same fall would look like a 1 in 1,000 year event.
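The arithmetic behind those headline figures is just the naive empirical estimate, occurrences seen divided by years observed, which is exactly why the length of the window you happen to have matters so much:

```python
def implied_return_period(years_of_data, exceedances):
    """Naive empirical return period: years observed per event seen."""
    return years_of_data / exceedances

# The same 54% one-year fall, seen once, reads very differently depending
# on how much history sits behind it:
print(implied_return_period(50, 1))    # 50.0   -> "a 1 in 50 year event"
print(implied_return_period(1000, 1))  # 1000.0 -> "a 1 in 1,000 year event"
```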

On the other hand, if your dataset is 38 years or less (like mine), it doesn’t include a 54% annual fall at all. Does this mean that you should try to get the largest dataset you can when deciding where your risks are? After all, Big Data is what you need. The more data you base your assumptions on the better, right?

Well, not necessarily. As we can already see from the November 1973 example, a lot of data where nothing very much happens may swamp the data from the important moments in a dataset. For instance, if I exclude the 12 biggest one day movements (positive and negative) from my 28.5 year dataset, I get a FTSE All Share closing price on 18 July 2013 of 4,494 rather than 3,513, ie 28% higher.
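A sketch of the same exercise on an invented series (these are not the FTSE returns) shows how a handful of extreme days can move the end point materially:

```python
from math import prod  # Python 3.8+

def index_level(daily_returns, start=100.0):
    """Compound a series of daily percentage returns into an index level."""
    return start * prod(1 + r / 100 for r in daily_returns)

def excluding_extremes(daily_returns, n):
    """The same series with the n largest moves (by absolute size) removed."""
    return sorted(daily_returns, key=abs)[:-n] if n else daily_returns

# A short invented series: mostly quiet days plus a few spikes.
returns = [0.1, -0.2, 0.3, -4.8, 0.2, -0.1, 5.1, -6.0, 0.4, -0.3, 0.2, -5.5]

with_all = index_level(returns)
without_spikes = index_level(excluding_extremes(returns, 4))
print(without_spikes / with_all)  # well above 1: the spikes were net negative
```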

Also, using more data only makes sense if that data is all describing the same thing. But what if the market has fundamentally changed in the last 5 years? What if the market is changing all the time and no two time periods are really comparable? If you believe this you should probably only use the most recent data, because the annual frequency of one day falls of all percentages appears to be on the rise. For one day falls of at least 2%, the annual frequency from the last 5 years is over twice that for the whole 28.5 year dataset (see graph above). For one day falls of at least 5%, the last 5 years have three times the annual frequency of the whole dataset. The number of instances of one day falls over 5.3% drops off sharply, so it becomes more difficult to draw comparisons at the extreme end. However, the slope of the 5 year data does appear to be significantly less steep than for the other datasets, ie expected frequencies of one day falls at the higher levels would also be considerably higher based on the most recent data.
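The comparison itself is simple to set out. The falls below are invented counts standing in for the two windows (they are not the actual FTSE figures), but the calculation is the one described:

```python
def falls_per_year(daily_falls, years, threshold):
    """Annual frequency of one day falls at least as bad as the threshold (%)."""
    return sum(1 for f in daily_falls if f >= threshold) / years

# Invented fall sizes standing in for a 28.5 year sample and its most
# recent 5 years; NOT the actual FTSE data.
whole_sample = [2.1] * 50 + [3.2] * 12 + [5.4] * 4
recent_5yrs = [2.1] * 20 + [3.2] * 6 + [5.4] * 2

whole_rate = falls_per_year(whole_sample, 28.5, 2.0)  # about 2.3 per year
recent_rate = falls_per_year(recent_5yrs, 5.0, 2.0)   # 5.6 per year
print(recent_rate / whole_rate)  # over twice the frequency, as in the text
```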

Do the last 5 years represent a permanent change to markets, or are they an anomaly? There are continual changes to the ways markets operate which might suggest that the markets we have now may be different in some fundamental way. One such change is the growth in the use of models that take an average return figure and an assumption about volatility and from there construct a whole probability distribution (disturbingly frequently the normal or log normal distribution) of returns to guide decisions. Use of these models has led to much more confidence in predictions than in past times (after all, the printouts from these models don’t look like the fingers in the air they actually are) and much riskier behaviour as a result (particularly, as Pablo Triana shows in his book Lecturing Birds on Flying, when traders are not using the models institutional investors assume they are in determining asset prices). Riskier behaviour, that is, with respect to how much capital to set aside and how much can safely be borrowed, all due to too much confidence in our models and the Big Data they work off.

Because that is what has really changed. Ultimately markets are just places where we human beings buy and sell things, and we probably haven’t evolved all that much since the first piece of flint or obsidian was traded in the stone age. But our misplaced confidence in our ability to model and predict the behaviour of markets is very much a modern phenomenon.

Just turning the handle on your Big Data will not tell you how big the risks you know about are. And of course it will tell you nothing at all about the risks you don’t yet know about. So venture carefully in the financial landscape. A lot of that map you have in front of you is make-believe.

[Graph: spikes]