Banks, regulators need reality check on risk math
The credit crisis would not have been as bad if investment banks’ risk management systems worked well. But the systems rely on sophisticated mathematical models that have a fundamental flaw — they grossly underestimate “tail risk”. This problem can be solved fairly easily.
In a way, this is a highly technical dispute, about the arcane details of the calculation of Value at Risk (VaR), the prime measure of the riskiness of trading books. To non-mathematicians, the possible answers sound daunting: Gaussian, Cauchy, and Pareto-Levy. But the underlying question is straightforward: how often and how badly do markets blow up?
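To make the disputed quantity concrete: one common way to estimate a one-day VaR is simply to read a percentile off a history of past returns. The sketch below is purely illustrative — the function name and the sample returns are invented for this example, and real trading-book models are far more elaborate:

```python
# Hypothetical sketch of one-day Value at Risk (VaR) by historical
# simulation: the loss threshold that past returns stayed within
# with the given confidence. Not any bank's actual system.
def historical_var(returns, confidence=0.99):
    """Return the loss (as a positive number) not exceeded with
    probability `confidence`, estimated from past daily returns."""
    losses = sorted(-r for r in returns)   # losses as positive numbers, ascending
    index = min(int(confidence * len(losses)), len(losses) - 1)
    return losses[index]

# Ten days of made-up returns (-0.08 means an 8% one-day loss)
sample = [0.01, -0.02, 0.005, -0.08, 0.012,
          -0.01, 0.03, -0.015, 0.002, -0.004]
print(historical_var(sample, confidence=0.9))  # 0.08: the worst day in this sample
```

The whole argument of this piece is about what such a number misses: a percentile fitted to calm history says nothing reliable about the far tail.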
On a day-to-day basis, financial asset prices seem to go up and down at random, or in response to the news. But the rocket scientists of finance analyze thousands of movements to identify patterns. The pattern they find is what is known as a stable Paretian distribution. Picture a curve that looks like a cross-section of a hat, with a brim that is wide and thin and a great big crown in the middle.
The crown represents the days that prices don’t move very much. It is high because most days are like that. On those calm days, holders can’t lose much money. The brim represents the days of big losses or gains. The further from the center of the crown, the bigger the loss or gain. The wider the brim, the more often the big moves occur.
The argument is about the shape of the hat-brim. The standard model for banks is Gaussian, in which really bad days happen rarely and horrible days almost never happen. How rarely?
Well, in August 2007 David Viniar, the chief financial officer of Goldman Sachs, said “We were seeing things that were 25-standard deviation moves, several days in a row.” Standard deviations are a measure of the distance from the center of the hat, or, to use the common image in the trade, the length of the tail. If those days were really 25 standard deviations away, they would not be expected to come around in the lifetime of a billion universes.
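The arithmetic behind that jibe is easy to check. Under a Gaussian model, the chance of a single day moving 25 standard deviations can be computed from the normal tail via the complementary error function in Python's standard library (the 252-trading-days-per-year figure is a conventional approximation):

```python
import math

# Gaussian upper-tail probability P(Z > k) for a standard normal Z,
# using the identity P(Z > k) = erfc(k / sqrt(2)) / 2.
def normal_tail(k):
    return 0.5 * math.erfc(k / math.sqrt(2))

p = normal_tail(25)           # chance of a 25-sigma day under Gauss
print(p)                      # roughly 3e-138

# Expected wait for one such day, at ~252 trading days per year:
wait_years = 1 / (p * 252)
print(wait_years > 1e120)     # True: dwarfs the ~1.4e10-year age of the universe
```

One such day, let alone "several in a row", is effectively impossible if the Gaussian model were right — which is the point.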
But the 2007-style meltdown had lots of precedents in the last few centuries, well within the life of this one universe.
The Gaussian model is too optimistic about market stability, because it uses an unrealistically high number for the key variable: the exponent that controls how quickly the tails thin out, known to its friends as alpha (not the alpha of performance measurement).
Gauss is at 2. If markets worked with an alpha of 1, known as the Cauchy distribution, the 2007 days would come around every 2.5 months. That is unrealistic in the other direction.
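The gulf between the two models shows up at any common cut-off. The standard Cauchy tail has a closed form, 1/2 minus arctan(k)/pi, so it can be compared directly with the Gaussian tail. The cut-off of 25 below is illustrative, not calibrated to any real market:

```python
import math

def normal_tail(k):
    # P(Z > k) for a standard normal, via the complementary error function
    return 0.5 * math.erfc(k / math.sqrt(2))

def cauchy_tail(k):
    # P(X > k) for a standard Cauchy: 1/2 - arctan(k)/pi
    return 0.5 - math.atan(k) / math.pi

k = 25                     # illustrative cut-off, not a market calibration
pg, pc = normal_tail(k), cauchy_tail(k)
print(pc / pg > 1e130)     # True: the Cauchy tail is astronomically fatter
print(1 / pc)              # ~79: under Cauchy, such a day every ~79 trading days
```

Roughly 79 trading days is a few months — extreme days become routine, which is why Cauchy overshoots in the opposite direction from Gauss.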
In 1962, the mathematician Benoit Mandelbrot demonstrated that an alpha of 1.7 provided the best fit with a 100-year series of cotton prices. More recent market history — the 1987 crash, the LTCM debacle and the 2007-08 meltdown — suggests big bad events occur about once a decade.
That goes better with an alpha of about 1.7, the Pareto-Levy distribution. This is the model used by the Options Clearing Corporation (OCC) to assess option trading counterparty risk for margin purposes.
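For stable laws with alpha below 2, the chance of exceeding a large level x falls off roughly like x to the power of minus alpha, so alpha directly governs how often big moves arrive. (The Gaussian case, alpha equal to 2, decays far faster than any power law and is excluded here.) A rough sketch, with an arbitrary illustrative level:

```python
# For a stable Paretian law with 0 < alpha < 2, the tail probability
# behaves asymptotically like C * x**(-alpha) for large x; the scale
# constant C cancels when comparing two alphas at the same level.
def relative_tail(x, alpha):
    """Tail probability up to an unknown constant factor."""
    return x ** (-alpha)

# How much likelier is a move 10x some reference size under the
# Cauchy alpha of 1 than under Mandelbrot's cotton-price alpha of 1.7?
x = 10.0
ratio = relative_tail(x, 1.0) / relative_tail(x, 1.7)
print(round(ratio, 1))   # 5.0: the fatter Cauchy tail makes it ~5x likelier
```

Small changes in alpha thus translate into large changes in how much capital a prudent institution should hold against the tail.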
The OCC has no incentive to allow counterparty risk to build up. But for investment banks, more conservative measures of the chance of big market drops would reduce returns on capital, because they would have to put more capital aside to protect against the possibility.
The lure of maximising trading positions, profits and bonuses in non-crash years could well distort the experts’ judgement. Indeed, one way to look at the exotic financial instruments which have proliferated in recent years is as a sort of statistical arbitrage. If alpha were calculated correctly, the tails for portfolios of complex derivatives and the like would be fat and long — more gains in the good times and bigger losses in the bad — and more capital would be needed. But the measure that is actually used, the Gaussian alpha, hides the actual risk.
Regulators should get a better handle on that risk, by using less Gauss and more Pareto-Levy. That would reduce the chance that a pretty predictable market blow-up wrecks the entire financial system.