Boomers versus Y and Z

When the GCSE results came out the other week I had a special reason to be interested as my daughter Polly was one of the anxious students waiting for them. As it turned out, she did very well, but I ended up listening to news coverage of the event which perhaps ordinarily I would have missed.

And how obnoxious it was! Despite a reassuring increase in students taking the more difficult subjects, and pass rates at all grades statistically no different from the previous year, nearly every media outlet seemed to report a drop, once again deeming the direction more important than the amount. Unless the numbers go up every year, apparently, none of us are happy.

When the steam (or hot air at least) had run out of these criticisms, people of my age seemed to be queuing up to appear on radio stations to tell today’s students that what they lacked was something called “grit”. We need to introduce a GCSE in Grit, they cried.

Grit. Really? This is the generation which has not had the grit to adequately tackle any issue which threatened the immediate earnings of the already rich and powerful, like climate change for instance, or a just tax system, either globally or even nationally.

But they are right in a way, because a generational war has been declared and the sooner today’s students wake up to this the better. We are not all in it together. The labels of “baby boomer” or Generation X, Y and Z are there to put us into economic camps (definitions vary, but I, at 50, am somewhere on the boomer-X boundary apparently, and my children are Y and Z, or both Z, depending on the point someone is trying to squeeze out of the data). When they stumble out into the job market, today’s students risk having insufficient qualifications (because even when the numbers do all go up, employers cry “grade inflation” and pull up the drawbridge even further) for anything but a McJob, on zero hours contracts, or nothing at all, subject to youth curfews, ASBOs and acoustic dispersal devices. If they are lucky enough to be graduates they will have, in addition, a loan of at least £40,000 to repay. If they want to rent somewhere to live, they will fall victim to an insufficiently regulated private rental market. If they want to buy, they are highly vulnerable to a property bubble being inflated for all it’s worth by George Osborne. It would be hard not to conclude that the rest of society had declared war on them while they were preparing for their gritless exams.

Meanwhile, the sense of entitlement amongst the boomers is frequently drowning out any other voices. Low interest rates are bad, we are told, because they “attack” pensioners’ savings and make annuities more expensive for those about to become pensioners. However, this is just special pleading for one generational group: low interest rates also make the Government’s money go further, freeing up spending for priorities other than the boomers.

Similarly, high inflation is bad news if you’re a pensioner and your pension is not inflation-linked (as most are not, where the pensioner had a choice). However, provided it is accompanied by earnings and economic growth, inflation is ultimately how a deficit burden, both private and public, is going to be shrunk most effectively.

The last thing Ys and Zs need is another 50-something lecturing them on what to do, but my plea would be that they don’t let these arguments be lost by default. The battle lines have been drawn. And I know which side I’m on.

Land mines and bank bailouts



A man is sentenced to 7 years in prison for selling bomb detectors which had no hope of detecting bombs. The contrast with the fate of those who have continued to sell complex mathematical models to both large financial institutions and their regulators over 20 years, which have no hope of protecting them from massive losses at the precise point when they are required, is illuminating.

The devices made by Gary Bolton were simply boxes with handles and antennae. The “black boxes” used by banks and insurers to determine their worst loss in a 1 in 200 probability scenario (the Value at Risk or “VaR” approach) are instead filled with mathematical models primed with rather a lot of assumptions.

The prosecution said Gary Bolton sold his boxes for up to £10,000 each, claiming they could detect explosives. Towers Watson’s RiskAgility (the dominant model in the UK insurance market) by contrast is difficult to price, as it is “bespoke” for each client. However, according to Insurance ERM magazine in October 2011, for Igloo, their other financial modelling platform, “software solutions range from £50,000 to £500,000 but there is no upper limit as you can keep adding to your solution”.

Gary Bolton’s prosecutors claimed that “soldiers, police officers, customs officers and many others put their trust in a device which worked no better than random chance”. Similar things could be said of bankers during 2008, who put their trust in a device which worked worse the further the financial variables being modelled strayed from the normal distribution.

As he passed sentence, Judge Richard Hone QC described the equipment as “useless” and “dross” and said Bolton had damaged the reputation of British trade abroad. By contrast, despite a brief consideration of alternatives to the VaR approach by the Basel Committee on Banking Supervision in 2012, it remains firmly in place as the statutory measure of solvency for both banks and insurers.

The court was told Bolton knew the devices – which were also alleged to be able to detect drugs, tobacco, ivory and cash – did not work, but continued to supply them to be sold to overseas businesses. In “Value at Risk: Any Lessons from the Crash of Long-Term Capital Management (LTCM)?” (Spring 2005), Mete Feridun of Loughborough University set out to analyse the failure of the LTCM hedge fund in 1998 from a risk management perspective, with the aim of deriving implications for the managers of financial institutions and for the regulating authorities. The study concluded that LTCM’s failure could be attributed primarily to its VaR system, which failed to estimate the fund’s potential risk exposure correctly. Many other studies agreed.

“You were determined to bolster the illusion that the devices worked and you knew there was a spurious science to produce that end,” Judge Hone said to Bolton. This brings to mind the actions of Philippe Jorion, Professor of Finance at the Graduate School of Management at the University of California at Irvine, who, by the winter of 2009 was already proclaiming that “VaR itself was not the culprit, however. Rather it was the way this risk management tool was employed.” He also helpfully pointed out that LTCM had been very profitable in 1995 and 1996. He and others have been muddying the waters ever since.

“They had a random detection rate. They were useless,” concluded Judge Hone. VaR, similarly, had a protective effect only within what were regarded as “possible” market environments, ie something similar to what had been seen before during relatively calm market conditions. In fact, VaR became less helpful the more people adopted it, as everyone using it ended up with similar trading positions, which they then attempted to exit at the same time. This meant that buyers could not be found when they were needed and the positions of the hapless VaR customers tanked even further.

Gary Bolton’s jurors concluded that, if you sell people a box that tells them they are safe when they are not, it is morally reprehensible. I think I agree with them.

Risky business

I think if I were to ask you what you thought the best way to manage risk was, there would be a significant risk that you would give me a very boring answer. I imagine it would involve complicated mathematical valuation systems, stochastic models and spreadsheets (lots of spreadsheets), risk indicators, traffic light arrangements and risk registers. If you work for an insurance company, particularly on the actuarial side, it would be very quantified, with calculations of the reserves required to meet “1 in 200 year” risks featuring heavily. Recently even operational risk has been approached from a more quantifiable angle, with Big Data collected across many users to pool and estimate risk probabilities.

Now you can argue about these approaches, and particularly about the Value at Risk (VaR) tool, which has brought this 1 in 200 probability over the next year into nearly every risk calculation carried out in the financial sector. The same goes for the Gaussian copula, which lets you use a correlation matrix to take credit for the “fact” that combinations of very bad things happening are vanishingly rare (the “Gaussian” referring to the normal distribution, under which events more than three standard deviations, or “sigma”, from the average are vanishingly rare), rather than actually quite likely once the market environment gets bleak enough. The losses at Fortis and AIG in 2008 were over 5 and 16 sigma above their averages respectively.
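To see just how strong that normality assumption is, here is a minimal Python sketch (standard library only, my own illustration rather than anything from the Fortis or AIG models) of the tail probabilities a normal distribution assigns to k-sigma events. On these numbers a 16-sigma loss should, to all intents and purposes, never happen in the lifetime of the universe:

```python
import math

def normal_tail_prob(k: float) -> float:
    """P(Z > k) for a standard normal variable Z, via the complementary
    error function: P(Z > k) = 0.5 * erfc(k / sqrt(2))."""
    return 0.5 * math.erfc(k / math.sqrt(2))

# Tail probabilities for 3-, 5- and 16-sigma events under normality
for k in (3, 5, 16):
    print(f"{k} sigma: probability {normal_tail_prob(k):.3e}")
```

The 3-sigma probability is about 1 in 740; the 16-sigma probability is around 10 to the minus 58, which is the sense in which observing such a loss refutes the model rather than describing bad luck.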

The news last week that the US Attorney for the Southern District of New York had charged two JP Morgan traders with fraud in connection with the recent $6.2 billion “London whale” trading losses reminded me that VaR as it is currently used was largely cooked up at JP Morgan in the early 90s. VaR is now inescapable in the financial industry, having effectively been baked into both the Basel 2 regulatory structure for banks and Solvency 2 for insurers.

The common approaches to so-called “quantifiable” risk may have their critics, but at least they are being widely discussed (the famous 1997 debate between Philippe Jorion and Nassim Nicholas Taleb being just one such discussion). However, one of the other big problems with risk management is that we rarely get off the above “boring” topics, and people who don’t get the maths often conclude that risk management is difficult to understand. In my view we should be talking much more about what companies are famous for (because this is also where their vulnerability lies) and the small number of key people they totally rely on (not all of whom they may even be aware of).

If you asked most financial firms what they were famous for, I imagine that having a good reputation as a company that can be trusted with your money would score pretty highly.

A recent survey of Wall Street financial services companies on the impact of loss of reputation revealed that 44% of them had lost 5% or more of their business in the past 12 months due to ongoing reputation and customer satisfaction issues. Losses, based on the total sales of these companies, are estimated at hundreds of millions of dollars, with an average loss of 9% of business across all companies surveyed.

And the key people we totally rely on? Well, just looking at the top five rogue traders (before the London Whale), we have:

1. SocGen losing 4.9 billion Euros in 2008 through Jerome Kerviel’s unauthorised trading in European stock index futures from their Paris office; he was later found guilty of breach of trust, forgery and unauthorised use of the bank’s computers.
2. Sumitomo Corp losing $2.6 billion in 1996 when Yasuo Hamanaka made unauthorised trades while controlling 5% of the world’s copper market from Tokyo.
3. UBS losing $2.3 billion in 2011 when Kweku Adoboli was found guilty of abusing his position as an equity trade support analyst in London with unauthorised futures trading.
4. Barings Bank losing $1.3 billion in 1995 when Nick Leeson made unauthorised speculative trades (specifically in Nikkei Index futures) as a derivatives broker also in London.
5. Daiwa Bank (now part of Resona Holdings) losing $1.1 billion in 1995 when it emerged that Toshihide Iguchi had made 30,000 unauthorised trades in US Treasury bonds over a period of 11 years beginning in 1984, in Osaka and New York.

None of these traders will, of course, have done anything for the reputations of their respective organisations either.

These are risks that can’t be managed by just throwing money at them or constructing complicated mathematical models. Managing them effectively requires intimate knowledge of your customers and what is most important in your relationship with them, who your key people are (not necessarily the most senior, Jerome Kerviel was only a junior trader at his bank) and what they are up to on a daily basis, ie what has always been understood as good business management.

And that doesn’t involve any boring mathematics at all.

The Antifragility of Restaurants and Terrorism

I have been thinking about the turnover of restaurants in Birmingham recently. There have been a number of new launches in the city in the last year, from Adam’s, with Michelin starred Adam Stokes, to Café Opus at Ikon to Le Truc, each replacing struggling previous ventures.

Nassim Nicholas Taleb makes the case, in his book Antifragile, for the antifragility of restaurants. As he says: “Restaurants are fragile, they compete with each other, but the collective of local restaurants is antifragile for that very reason. Had restaurants been individually robust, hence immortal, the overall business would be either stagnant or weak, and would deliver nothing better than cafeteria food – and I mean Soviet-style cafeteria food. Further, it would be marred with systemic shortages, with, once in a while, a complete crisis and government bailout. All that quality, stability, and reliability are owed to the fragility of the restaurant itself.”

I wondered if this argument could be extended to terrorism, in an equally Talebian sense.

But first, three false premises:

1. Terrorist attack frequency follows a power law distribution.

Following on from my previous post, I thought I had found another power law distribution in Nate Silver’s book The Signal and the Noise. He sets out a graph of the terrorist attack frequencies by death toll. The source of the data was the Global Terrorism Database for NATO countries from 1979 to 2009. I thought I would check this and downloaded an enormous 45MB Excel file from the National Consortium for the Study of Terrorism and Responses to Terrorism (START). I decided to use the entire database (ie from 1970 to 2011), with the proviso that I would use only attacks leading to at least 5 deaths to keep it manageable (as Nate Silver had done). The START definition of terrorism is that it is only committed by NGOs, and they also had a strange way of numbering attacks which, for instance, counted 9-11 as four separate attacks (I adjusted for this). I then used a logarithmic scale on each axis and the result is shown below. It is not even straightish: it has a definite downward curve, with something else entirely happening when deaths get above 500, so it is probably not quite a power law distribution.

Terrorist attacks

In my view it certainly doesn’t support Nate’s contention of a power law distribution at the top end. On the contrary, it suggests that we can expect something worse, ie more frequent attacks with high casualties, than a power law would predict.
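For anyone wanting to repeat the exercise, the log-log check described above can be sketched as follows. The frequency table here is made up for illustration (it is not the START data); the method is the point: fit a least-squares line through the log-log points and inspect the residuals, since a systematic bow in the residuals is the “definite downward curve” that rules out a clean power law.

```python
import math

# Hypothetical illustrative data: (death toll threshold, attacks per year).
# Replace with real frequencies extracted from the START database.
data = [(5, 40.0), (10, 12.0), (25, 3.0), (50, 1.0), (100, 0.3), (500, 0.02)]

xs = [math.log10(n) for n, _ in data]
ys = [math.log10(f) for _, f in data]

# Ordinary least-squares fit of log(frequency) against log(death toll)
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum(
    (x - mx) ** 2 for x in xs
)
intercept = my - slope * mx

# Residuals reveal curvature: a power law would leave them scattered
# around zero with no pattern
residuals = [y - (intercept + slope * x) for x, y in zip(xs, ys)]
print(f"fitted slope: {slope:.2f}")
print("residuals:", [round(r, 3) for r in residuals])
```

A straight-line fit always produces a slope; it is the structure in the residuals, not the fit itself, that tells you whether the power law is credible.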

So what possible link could there be between terrorism and the demise of the Ikon café (there may be other restaurants where the food served met one of the other definitions of terrorism used by the Global Terrorism Database, ie intending to induce fear in an audience beyond the immediate victims, but not the Ikon)? Well, for one thing, they do have a made up statistic in common:

2. 90% of new restaurants fail within the first year.

This is a very persistent myth, most recently repeated in Antifragile, which was debunked as long ago as 2007. However, new business failures in general are still up at around 25% in the first year, which means the point that the pool of restaurants is constantly renewed by people with new ideas at the expense of those with failing ones remains valid. This process makes the restaurant provision as a whole better as a result of the fragility of its individual members.

3. 90% of terrorist groups fail within the first year.

Now I don’t know for certain whether this conjecture by David Rapoport is false, but given my experience with the last two “facts”, I would be very sceptical that the data (i) exists and (ii) is well-defined enough to give a definitive percentage. However, clearly there is a considerable turnover amongst these groups, and the methods used by them have developed often more quickly than the measures taken to counter them. Each new major terrorist attempt appears to result in some additional loss of freedom for the general public, whether it be what you can carry onto an aircraft or the amount of general surveillance we are all subjected to.

So what else do restaurants and terrorism have in common? What does a restaurant do when public tastes change? It either adapts itself or dies and is replaced by another restaurant better able to meet them. What does a terrorist group do when it has ceased to be relevant? It either changes its focus, or gets replaced in support by a group that already has. However, although individual terrorist groups will find themselves hunted down, killed, negotiated with, made irrelevant or, occasionally, empowered out of existence, new groups will continue to spring up in new forms and with new causes, ensuring that terrorism overall will always be with us and, indeed, strengthening with each successive generation.

The frequency of terrorist attacks, particularly at the most outrageous end, over the last 40 years would suggest that terrorism itself, despite the destruction of most of the people practising it amongst the mayhem they cause, has indeed proved at least as antifragile as restaurants. So, in the same way that we are all getting fed better, more and more people and resources are also being sucked into a battle which looks set to continue escalating. Because the nature of terrorism is, like the availability of pizza in your neighbourhood, that it benefits from adversity.

This suggests to me:

a. that we should rethink the constant upping of security measures against a threat which is only strengthened by them; and
b. that you shouldn’t believe everything you read.

Earthquakes and Equities

Plotting the frequency of earthquakes higher than a given magnitude on a logarithmic scale gives a straightish line that suggests we might expect a 9.2 earthquake every 100 years or so somewhere in the world and a 9.3 or 9.4 every 200 years or so (the Tohoku earthquake which led to the Fukushima disaster was 9.0). Such a distribution is known as a power-law distribution, which gives more room for action at the extreme ends than the more familiar bell-shaped normal distribution, which gives much lower probabilities for extreme events.
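The straight line on the log-log plot is the Gutenberg-Richter relation, log10(N) = a - bM, where N is the annual number of earthquakes of magnitude M or greater. A hedged sketch of how return periods drop out of it, with illustrative parameter values chosen here simply to land near the once-a-century figure for a 9.2 quoted above:

```python
def annual_rate(magnitude: float, a: float = 7.2, b: float = 1.0) -> float:
    """Expected worldwide events per year at or above `magnitude`, using
    the Gutenberg-Richter form log10(N) = a - b*M. The a and b values
    are illustrative assumptions, not fitted to the real catalogue."""
    return 10 ** (a - b * magnitude)

# Expected waiting time is the reciprocal of the annual rate
for m in (9.0, 9.2, 9.4):
    print(f"M >= {m}: about one every {1 / annual_rate(m):.0f} years")
```

With these parameters a 9.2 comes out at roughly one per 100 years and a 9.4 at roughly one per 160, broadly the spacing described in the text.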


Similarly, plotting the annual frequency of one day falls in the FTSE All Share index higher than a given percentage on a logarithmic scale also (as you can see below) gives a straightish line, indicating that equity movements may also follow a power-law distribution, rather than the normal distribution (or log normal, where the logarithms are assumed to have a normal distribution) they are often modelled with.

However the similarity ends there, because of course earthquakes normally do most of their damage in one place and on the one day, rather than in the subsequent aftershocks (although there have been exceptions to this: in The Signal and the Noise, Nate Silver cites a series of earthquakes on the Missouri-Tennessee border between December 1811 and February 1812 of magnitude 8.2, 8.2, 8.1 and 8.3 respectively). On the other hand, large equity market falls often form part of a sustained trend (eg the FTSE All Share lost 49% of its value between 11 June 2007 and 2 March 2009) with regional if not global impacts, which is why insurers and other financial institutions which regularly carry out stress testing on their financial positions tend to concern themselves with longer term falls in markets, often focusing on annual movements.


How you measure it obviously depends on the data you have. My dataset on earthquakes spans nearly 50 years, whereas my dataset for one day equity falls only starts on 31 December 1984, which was the earliest date from which I could easily get daily closing prices. However, as the Institute of Actuaries’ Benchmarking Stochastic Models Working Party report on Modelling Extreme Market Events pointed out in 2008, the worst one-year stock market loss in UK recorded history was from the end of November 1973 to the end of November 1974, when the UK market (measured on a total return basis) fell by 54%. So, if you were using 50 years of one year falls rather than 28.5 years of one day falls, a 54% fall would look like a 1 in 50 year event; had you a whole millennium of data containing just that one such fall, it would look like a 1 in 1,000 year event instead.

On the other hand, if your dataset is 38 years or less (like mine) it doesn’t include a 54% annual fall at all. Does this mean that you should try and get the largest dataset you can when deciding on where your risks are? After all, Big Data is what you need. The more data you base your assumptions on the better, right?

Well not necessarily. As we can already see from the November 1973 example, a lot of data where nothing very much happens may swamp the data from the important moments in a dataset. For instance, if I exclude the 12 biggest one day movements (positive and negative) from my 28.5 year dataset, I get a FTSE All Share closing price on the 18 July 2013 of 4,494 rather than 3,513, ie 28% higher.
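The mechanics of that exercise can be sketched as follows, on made-up daily returns rather than the actual FTSE series; the point is how disproportionately a handful of extreme days moves the compounded level:

```python
def compound(returns, start=100.0):
    """Compound a sequence of daily returns from a starting index level."""
    level = start
    for r in returns:
        level *= 1 + r
    return level

# Made-up daily returns: mostly small moves plus a few large ones
daily_returns = [0.004, -0.003, 0.051, 0.002, -0.087,
                 0.001, 0.0, 0.062, -0.002, -0.049]

with_all = compound(daily_returns)

# Drop the 4 largest moves by absolute size (both gains and falls),
# mirroring the exclusion of the 12 biggest movements described above
trimmed = sorted(daily_returns, key=abs)[:-4]
without_extremes = compound(trimmed)

print(f"with all days:    {with_all:.1f}")
print(f"extremes removed: {without_extremes:.1f}")
```

Even on ten invented observations, removing the four biggest days shifts the end level by around three percent; scaled up to 28.5 years of daily data, a dozen days moving the index by 28% is entirely plausible.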

Also, using more data only makes sense if that data is all describing the same thing. But what if the market has fundamentally changed in the last 5 years? What if the market is changing all the time and no two time periods are really comparable? If you believe this you should probably only use the most recent data, because the annual frequency of one day falls of all percentages appears to be on the rise. For one day falls of at least 2%, the annual frequency from the last 5 years is over twice that for the whole 28.5 year dataset (see graph above). For one day falls of at least 5%, the last 5 years have three times the annual frequency of the whole dataset. The number of instances of one day falls over 5.3% drops off sharply, so it becomes more difficult to draw comparisons at the extreme end, but the slope of the 5 year data does appear to be significantly less steep than for the other datasets, ie expected frequencies of one day falls at the higher levels would also be considerably higher based on the most recent data.

Do the last 5 years represent a permanent change to markets or are they an anomaly? There are continual changes to the ways markets operate which might suggest that the markets we have now may be different in some fundamental way. One such change is the growth of the use of models that take an average return figure and an assumption about volatility and from there construct a whole probability distribution (disturbingly frequently the normal or log normal distribution) of returns to guide decisions. Use of these models has led to much more confidence in predictions than in past times (after all, the print outs from these models don’t look like the fingers in the air they actually are) and much riskier behaviour as a result (particularly, as Pablo Triana shows in his book Lecturing Birds on Flying, when traders are not using the models institutional investors assume they are in determining asset prices). This has meant riskier behaviour with respect to how much capital to set aside and how much can safely be borrowed, for instance, all due to too much confidence in our models and the Big Data they work off.

Because that is what has really changed. Ultimately markets are just places where we human beings buy and sell things, and we probably haven’t evolved all that much since the first piece of flint or obsidian was traded in the stone age. But our misplaced confidence in our ability to model and predict the behaviour of markets is very much a modern phenomenon.

Just turning the handle on your Big Data will not tell you how big the risks you know about are. And of course it will tell you nothing at all about the risks you don’t yet know about. So venture carefully in the financial landscape. A lot of that map you have in front of you is make-believe.

My ETV – part two

At the end of my previous post, I was keenly awaiting the written report on my enhanced transfer value (ETV) consultation, after feeling some concerns about the process up to that point. What arrived earlier this month came in three-part harmony:

1. A Transfer Suitability Report, which summarised the conversation I had had with my adviser, and the recommendation which I had rather wrung out of him not to transfer (a red traffic light illustration next to the summary reinforced the point), and included the modeller output that suggested a 9 in 10 chance of receiving a higher income at retirement (weather symbol: sunny).

There was nothing more for me to read on the assumptions here while I waited at the red light in the sunny weather but, instead, a new concept to anyone not working in pensions for a living which had not been mentioned in our previous conversation: critical yield. It explained that this was “the estimated investment return you would need to achieve year on year, if you were to transfer to a personal pension, in order to match the benefits provided by the Scheme at retirement”. It was calculated at 6.4%.

This was a little confusing since, when put together with the sunny 9 out of 10 assessment of my chances of receiving a higher income at retirement, it might lead you to think that there was a 90% chance of at least a 6.4% pa average investment growth over the next 10 years based on my new medium risk tolerance (which only reduced my equity allocation from 90% to 85%). But in fact 9 out of 10 was based on needing no spouse pension (they thought this reasonable as I am currently separated, but my Scheme benefits will include a spouse pension provided I have a spouse at retirement) and lower pension increases than are provided by the Scheme (these are indexed to the Retail Prices Index (RPI) rather than the Consumer Prices Index (CPI) assumed after the transfer, CPI tending to be lower). So 9 out of 10 was not replacing like with like, and the probability of achieving the critical yield over the next 10 years was more likely to be in the cloudy-with-a-chance-of-rain category.
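For anyone wondering where a figure like 6.4% comes from, the critical yield calculation itself is simple once an annuity cost is assumed, which is precisely why the missing mortality and annuity assumptions matter. A minimal sketch, with hypothetical figures (not those from my ETV) chosen here to land near the quoted number:

```python
def critical_yield(transfer_value: float, scheme_pension: float,
                   annuity_rate: float, years: int) -> float:
    """Flat annual return y solving
    (1 + y)^years * transfer_value = scheme_pension / annuity_rate,
    ie the growth needed for the transfer to buy back the scheme
    pension as an annuity at retirement. All inputs are hypothetical."""
    annuity_cost = scheme_pension / annuity_rate  # capital needed at retirement
    return (annuity_cost / transfer_value) ** (1 / years) - 1

# Hypothetical: £100k transfer value, £10k pa scheme pension given up,
# 10 years to retirement, annuity paying 5.4% of purchase price per annum
y = critical_yield(100_000, 10_000, 0.054, 10)
print(f"critical yield: {y:.1%}")
```

The sensitivity is the real lesson: nudge the assumed annuity rate and the critical yield moves materially, which is why quoting 6.4% without the underlying annuity cost tells the member very little.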

2. Additional Information. This must contain the assumptions used, I thought. But no. It was instead an overview of how they had selected Aviva to be the pension provider, what the pension protection fund and financial services compensation scheme did and a glossary of terms. The glossary, interestingly, included lifestyling. “Lifestyling”, it said, “is an investment approach in which funds are gradually switched from more volatile asset classes, such as UK and Overseas Equities, to lower risk investments, such as Fixed Interest and Cash, in the period leading up to retirement. The aim of lifestyling is to reduce the risk of large fluctuations in your fund value as you approach your chosen retirement age. The reason for this is that, if markets were to fall significantly immediately before you retire, this would lead to significant reduction in your retirement income.” Lifestyling had not previously been mentioned as being assumed to be taking place over the next 10 years in any of my illustrations. This eagerly awaited report appeared to be raising more questions than it was answering.

3. Transfer Value Analysis Report. This gave more details about the benefits I was currently entitled to and that the projections of future income were based on CPI increases of 2% pa. And then finally, in the final appendix of the final report, there were notes on the assumptions underlying the calculation of the critical yield. Unhelpfully this included an annuity interest rate and annuity expense assumptions, but no mortality assumption. You would obviously need to know how long you were expected to live to work out how much they expected the annuity to cost. Or they could just have told me. Unfortunately, how much the annuity was expected to cost seemed to be on the list of things the member was not expected to need to know.

So, at the end of the process, I was still no wiser about the annuity rates assumed, or what high, medium and low meant in the years leading up to retirement. I didn’t think that the adviser I had knew either. And on this basis I was being asked to make an irrevocable decision about a third (more if you considered the cost of purchasing an equivalent guaranteed deferred annuity rather than the transfer values offered) of my pensions wealth.

I reflected on the times in the past when I had advised trustees to ensure as a minimum that transferring members in such exercises received independent advice, and on how inadequate that now seemed to be to support a decision in this case. As far as I could see everyone involved was doing their job in the way the regulatory regime intended them to. It was, in many ways, a model process:

  • The sponsor was making an offer to members, and paying for independent advice to those members. If the advice was not to transfer or the member decided not to take any advice, the transfer was not allowed to proceed.
  • The independent adviser had made a modeller available to members, and had carried out an assessment of each member’s attitude to investment risk. However both of these were seen as guides only, and they were prepared to be influenced in their advice by the attitudes presented to them directly by the members.
  • The Trustee Board had made it clear that it was up to the members to decide and that members should consider any information provided carefully before opting for a transfer.

However, if I had accepted the original risk assessment, and let large parts of the information provided go over my head as too technical, I could well have been both advised to transfer and left with the impression that I had a 9 out of 10 chance of being better off as a result. This would not have been a remotely accurate impression. However, even if I had avoided that particular banana skin, I would still not, at the end of this totally professional and, at first sight, thorough process, have had enough information to decide whether I agreed with the advice given. This meant that, despite everyone’s best efforts here, it would still have been possible to have been missold a transfer.

That that should still be the case after all the regulatory activity in this area suggests to me that there is a limit to what regulators can achieve when it is seen as enough for the regulated to merely follow codes of practice and guidance. To aim higher than this requires both trustees and their advisers to do more than play the referee.

And my pension is staying where it is.



My ETV – part one

A couple of weeks ago, I had a session with Beaufort Consulting. They had been selected by the Phoenix Group to provide independent financial advice to members of the Pearl Group Staff Pension Scheme who had been offered an enhanced transfer value (ETV).

The aim of an ETV is simple. The sponsors of the scheme are looking to reduce the uncertainty and cost (the ETV is normally considerably less than the cost of purchasing an annuity with an insurer to an equivalent level to the pension given up). I have been the actuary to schemes in the past where the sponsor has carried out such exercises and, beyond advising the trustees to press the sponsor for certain minimum standards (for example independent financial advice, communication of risks and making sure the security of the non-transferring members is maintained), it has been frustrating to watch members seeming to give up the security of their benefits in many cases with rather little to show for it. I was curious to experience the process from the member’s perspective.

I had been warned by the Trustee Board of the Scheme that an exercise was going to be taking place in February. Then last month I received a transfer value quotation from the Phoenix Group, indicating that not only would the current reduction to transfer values of 10% be removed, but that an enhancement of a further 10% would be added. I had six weeks to register for advice with Beaufort Consulting, and a further six weeks to accept the offer before it was withdrawn. I was directed to the modelling tools on Beaufort's website and my attention was drawn to the Code of Good Practice and the Pensions Regulator's guidance on such offers. An "important additional information" booklet, in the form of questions and answers on the overall process, was also enclosed. From Beaufort Consulting I received a client agreement, a key facts document and login details for their website (referred to as the "Member Advisory Platform" or MAP).

Whew! So I went on the website and answered the 15 questions designed to assess my risk profile. I was interested to note that, despite indicating that I tended to disagree with accepting the possibility of greater losses to achieve high investment growth, and rating the amount of risk I had taken in the past as medium compared to other people, I had been categorised as having a risk rating of medium/high. The suggested asset allocation was 90% in equities and 10% in corporate bonds.

On the basis of this, a requirement to provide a 50% spouse pension and annual pension increases in line with CPI increases capped at 2.5%, and with no lump sum taken, the modeller told me that I had a 6 out of 10 chance of getting a higher income from the transfer at retirement (in 10 years’ time at age 60). Taking out the spouse pension increased this to a 9 out of 10 chance. In fact, out of the high outcome, mid outcome and low outcome shown, only the low outcome led to a lower income from the transfer. The thick black line of certainty of the Scheme benefits was placed beside the alluring diamond of possibilities from the transfer (see diagram above). None of the financial assumptions or assumed cost of buying an annuity were spelt out. I decided this would benefit from further discussion and clicked to arrange an appointment. My slot for a telephone meeting with an adviser was quickly arranged and the afternoon arrived.
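For what it's worth, the general shape of such a modeller is easy enough to sketch, even though none of the actual assumptions were disclosed. The following is purely my own toy illustration of how a stochastic transfer modeller might arrive at an "x out of 10" chance: project the transfer value forward with random annual returns, convert the fund to pension at an assumed annuity rate at age 60, and count how often this beats the scheme pension. Every number in it (transfer value, scheme pension, annuity rate, return and volatility assumptions) is invented for illustration and is emphatically not Beaufort's model:

```python
import random

def prob_transfer_beats_scheme(
    transfer_value=100_000,  # illustrative transfer value
    scheme_pension=6_000,    # illustrative scheme pension pa at age 60
    annuity_rate=0.045,      # assumed pension purchasable per £1 of fund at 60
    mean_return=0.05,        # assumed average annual investment return
    volatility=0.15,         # assumed annual standard deviation of returns
    years=10,                # time to retirement
    n_sims=20_000,           # number of stochastic runs of the model
):
    """Estimate the chance the transferred fund buys a bigger pension at 60."""
    better = 0
    for _ in range(n_sims):
        fund = transfer_value
        for _ in range(years):
            # draw one year's investment return at random
            fund *= 1 + random.gauss(mean_return, volatility)
        if fund * annuity_rate > scheme_pension:
            better += 1
    return better / n_sims

p = prob_transfer_beats_scheme()
```

The point of the sketch is that the answer is driven entirely by the return, volatility and annuity assumptions fed into it – precisely the figures that were not spelt out anywhere in the process.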

The adviser was very polite and unpushy. I explained my surprise at the outcome of the risk profiler, on the basis of which he agreed to reduce my profile risk level from medium/high to medium.

He explained that Beaufort were not incentivised to get people to transfer and that the same offer was being made to everyone more than five years from retirement.

I asked him what assumptions had been made in the modeller. It took a while to get a response, during which time I got an interesting account of a stochastic process (this is where you let the various outcomes be chosen randomly, according to an underlying probability distribution, and then run the model lots of times to show the relative likelihood of different results – throwing dice lots of times is a very simple stochastic process). I persisted, saying that the darker area in the middle of their diamond must be based on an average level assumed for investment returns and annuity rates. The response, after a moment when I thought he was going to put the phone down on me due to some noise on the line that I couldn't hear, was that the assumptions were standard and he thought the low one was 5% pa. I felt that he was telling me all he knew about the modeller.
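The dice analogy can be made concrete in a few lines. This is just a toy sketch of the idea, nothing to do with Beaufort's actual modeller: each "run" of the process is a throw of two dice, and repeating the run many times reveals the relative likelihood of each total:

```python
import random

def roll_total(n_dice=2):
    """One run of the stochastic process: the total of n_dice fair dice."""
    return sum(random.randint(1, 6) for _ in range(n_dice))

def simulate(n_runs=100_000, n_dice=2):
    """Repeat the run many times and tally the relative frequency of each outcome."""
    counts = {}
    for _ in range(n_runs):
        total = roll_total(n_dice)
        counts[total] = counts.get(total, 0) + 1
    return {total: count / n_runs for total, count in sorted(counts.items())}

probs = simulate()
```

With enough runs, the middle totals (around 7) turn up far more often than 2 or 12 – the same effect that produces the darker area in the middle of the diamond.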

We moved on to what I thought of the strength of the Phoenix Group, what my preference was on death benefits, etc, before he ran a few modeller examples to illustrate how my income following the transfer would be greater until age 81 (all stochasticism had been abandoned at this stage).

I decided to move my adviser back onto risk. I said that, as my Pearl pension was about a third of my (non-state) total pension benefits, and all my other pensions were perforce defined contribution (DC – see my previous post for an explanation of defined contribution and defined benefit), it seemed a good idea to diversify my risks by keeping some in defined benefit form. If equity returns over the next 10 years were like those of the last 10, I might be very glad I had.

To his credit, he accepted my argument, and said that he would not recommend I transferred. I thanked him for his time and for a helpful discussion and checked that I would be receiving a final written report, which he confirmed.

I put down the phone and reflected on what had happened. I realised I had some concerns about the process:

  • The adviser had been courteous, and had not pushed me in any particular direction, but had been unable to provide any information to assess the plausibility of the modeller at the heart of the advice.
  • I had had to introduce the idea of the risk of having all my pension benefits in DC form.

In particular, after reading a fair volume of paperwork and spending the best part of an hour on the phone, I was, as a pensions actuary, unable to recreate (even approximately) the modeller calculations from the information provided. I awaited the written report with interest.

To be continued…

Conflicts of Disinterest

Would you rather have someone giving you advice be independent or disinterested? The Oxford English Dictionary (OED) definitions suggest some crossover but ultimately quite distinct meanings for the two words:

Disinterested:

  1. not influenced by considerations of personal advantage
  2. having or feeling no interest in something

Independent:

  1. free from outside control; not subject to another’s authority
  2. not depending on another for livelihood or subsistence
  3. capable of thinking or acting for oneself; not influenced by others; impartial
  4. not connected with another or with each other; separate; not depending on something else for strength or effectiveness; free-standing

I would opt for a disinterested adviser rather than an independent one every time. After all, you don’t want an adviser who is not connected with another or with each other. Those are normally the reported attributes of someone who has just done something terrible. And requiring your adviser to be neither subject to another’s authority nor depending on another for livelihood or subsistence probably means restricting yourself to people working on their own with no clients.

In opting for disinterested as a better adjective for advisers to shoot for, I am excluding the second definition here (some will argue that this is uninterested in any case, but 20% of the usage of the word disinterested is in the uninterested sense). Although many people giving advice will find their interest in advising anyone ever again for the rest of their lives waning at times, most of them return to being interested after a few days away from it, particularly if they have just enjoyed a holiday benefiting from the freely dispensed advice of their nearest and dearest.

However the definition of disinterested only takes us so far. You could be not influenced by considerations of personal advantage and yet still not be working in someone else’s best interests.

The Actuaries’ Code states that a conflict of interests arises if a member’s duty to act in the best interests of any client conflicts with:

a) the member’s own interests (ie you would not be disinterested by the OED definition); or

b) an interest of the member’s firm; or

c) the interests of other clients (you can’t provide full-blooded no-holds-barred advice to a client if you are also advising a company which is trying to buy them, sell them or merge with them, or which has different interests within the same organisation).

Consideration b) of this list then introduces a requirement on actuaries to take reasonable steps to ensure that they are aware of any relevant interest, including income, of their firm. And with this awareness comes the same responsibility to deal with any conflict arising as a result. However the Code is very much aimed at individual actuaries rather than their firms.

The Law Society’s practice note on conflicts of interest takes a similar line, recognising two types of conflicts of interest: own interest conflict (which includes the lawyer’s own interests and those of the lawyer’s firm) and client conflict. However it goes further by making it clear that the note applies to individuals and to firms collectively. Conflicts of interest are also regulated by the Solicitors Regulation Authority (SRA) within an overall framework of regulation that has two elements: firm-based requirements and individual requirements. It focuses on the practices of regulated entities as well as the conduct and competence of regulated individuals. This approach allows the SRA to take regulatory action against firms or individuals, or both, in appropriate cases.

All of this is fine as far as it goes, but I wonder if a process that relies on individuals effectively acting as investigators within their own firms to dig up instances where either the spirit or letter of some code is infringed is ever going to prevent deeply embedded practices on its own. It is very difficult to call time on arrangements which are making people money, particularly when you are dependent on the people making the money for your job.

Perhaps another way to go (or an additional one, as in this case I don’t think there is a conflict involved!) would be to recognise the meaning of the Latin root of the word conflict, which is conflictus, meaning contest. Wouldn’t it be helpful to individuals trying to avoid conflicts of interest if the companies they worked for operated a conflict of disinterest? Where firms competed with each other to demonstrate how disinterested they were. Where firms felt it gave them a competitive advantage to show how the only thing they had at stake in taking on a client or a project or any other piece of work was the agreed fee.

For a firm actively engaging in a conflict of disinterest, the individuals working for it wouldn’t have to knock down several doors to raise their concerns, they would find they were regularly being asked about the status of potential conflicts of interest, in case they in turn were in conflict with the firm’s client agreements and promotional material. The markets clients worked in would be regularly scanned for intelligence on deals in the pipeline and the firm’s own client lists would be scrutinised for potential implications.

So how could such a conflict of disinterest be brought about? By campaigning for it. If this is how we want business to be done we need to ask for it. If a change in public expectations of corporate tax management practices can lead to significant changes in those practices, the same could be achieved on conflicts of interest.

Because currently they are everywhere. At one end is the chimney sweep who brought a pile of soot down onto my new carpet and then turned to me and told me not to worry as he also ran a carpet cleaning business. At the other are the ratings agencies, paid by the firms they rate, which both give credit ratings on financial instruments and advise individual firms on how to construct those instruments so as to score the highest possible ratings – a practice which ultimately contributed significantly to the market crash and the subsequent economic recession from which we have still not recovered.

So declare a conflict of disinterest today and let’s start a movement.



Uncertainty and its discontents

There’s certainly a great deal of uncertainty about.

In Nate Silver’s book, The Signal and the Noise, there is a chapter on climate change (which has come in for some criticism – see Michael Mann’s blog on this) which contains a diagram on uncertainty supposedly sketched for him on a cocktail napkin by Gavin Schmidt. It occurred to me that this pattern of uncertainty at different timescales was more generally applicable (it describes very well, for instance, the different types of uncertainty in any projections of future mortality rates). In particular, I think it provides a good framework for considering the current arguments about economic growth, debt and austerity. Some of these arguments look to be at cross-purposes because they are focused on different timeframes.

[Uncertainty diagram]

In the short term, looking less than 10 years ahead, initial condition uncertainty dominates. This means that in the short term we do not really understand what is currently going on (all our knowledge is to some extent historical) and trends which might seem obvious in a few years are anything but now. Politics operates in this sphere (long term thinking tends to look two parliaments ahead at most, ie 10 years). However, the market traders who by their activities move the markets and market indices on which we tend to base our forecasts and our economic policies are also working in the short term, the very short term (ie less than 3 months to close off a position and be able to compare your performance with your peers), even if they are trading in long term investments.

So both the politics and the economics are very short term in their focus, and this is therefore where the debate about growth and austerity tends to be waged. The Austerians (which include the UK Government) claim to believe that debt deters growth, and that cutting spending in real terms is the only possible Plan A policy option. The Keynesians believe that, in a recession, and when interest rates cannot go any lower, demand can only be revived by Government spending. This argument is now well rehearsed, and is in my view shifting towards the Keynesians, but in the meantime Austerian policies (with all the economic destruction they inevitably cause) continue in the UK.

However, there are other groups seemingly supportive of the UK Government’s position in this argument for altogether different reasons. Nassim Nicholas Taleb argues that high levels of debt increase an economy’s fragility to the inevitable large devastating economic events which will happen in the future and which we cannot predict in advance. He therefore dismisses the Keynesians as fragilistas, ie people who transfer more fragility onto the rest of us by their influence on policy. These concerns are focused on the structural uncertainty which is always with us and is difficult to reduce significantly. It is therefore important to reduce (or, if possible, reverse) your fragility to it.

At the longer term end are the groups who believe that we need to restrict future economic growth voluntarily before it is done for us, catastrophically rapidly, by a planet whose limits in many areas may now be very close to being reached. They are therefore implacably opposed to any policy which aims to promote economic growth. These concerns are focused where there are many possible future scenarios (ie scenario uncertainty), some of which involve apocalyptic levels of resource depletion and land degradation.

These different groups are tackling different problems. I do not believe that those concerned with the structural fragility of the economy really believe that the people paying for the restructure should be the disabled or those on minimum wage. Similarly, there is a big difference between encouraging people to consume less and move to more sustainable lifestyles and recalibrating down what is meant by a subsistence level of existence for those already there.

We do need to worry about too big to fail. Our economy houses too many institutions which appear to be too large to regulate effectively. We do need to reduce levels of debt when economic activity has returned to more normal levels. We will need to restructure our economy entirely for it to make long-term sense in a world where long term limits to growth seem inevitable. But none of these are our immediate concern. We need to save the economy first.