One of the key tools used by fixed-income analysts during the Great Moderation was their beloved Monte Carlo simulation. They would take an instrument like a CPDO or a subprime-backed CDO, and they would run it through tens of thousands of possible future worlds. If it held up in all or nearly all of those worlds, then, presto, it got a triple-A credit rating.

But Welton Investment Corporation has a great little paper out showing just how unhelpful Monte Carlo simulations can be. They applied a Monte Carlo simulation to the S&P 500 over the past 50 years, plugging in its known return and variance. Using that, they compared the predicted number of large down quarters with the actual number of large down quarters. And got this:

Over the past 50 years, someone wielding a Monte Carlo simulation would expect 32 large down quarters, with losses of more than 20%. In fact, there were 169. And they would expect no quarters at all with losses of 30% or more, when in fact there were 23. As for a loss of 40%, that simply never happens in a bell-shaped world. But it does — and did — in real life.

There’s nothing new here, as Welton notes:

It is worth establishing that the equity market’s tail risk signature is both well-known and persistent over time. Our analysis is not anomalous, and is easily replicated using any reasonably long period of historical market data. Second, it is also worth noting that this “tail risk” effect is not just confined to the S&P 500, nor is it confined to equities exclusively. Rather, this phenomenon is seen widely across capital markets and real assets.

But it’s also undeniable that a preponderance of stock-market investors don’t really grok how much risk they’re taking: they concentrate on the positive average returns, and tend to ignore the massive downside. Given enough time, some if not all of these losses can be made back. But just how much time do most investors have? And how much do they need, before investing “for the long run” starts to be a remotely safe thing to do?

(HT: Harris)

**Update**: My sharp-eyed readers had their coffee this morning; I clearly hadn’t. There’s something very, very odd with the numbers above: they seem to be cumulative, rather than additive. So while it might be true that there were 42 quarters with a loss of more than 20%, compared to a predicted 17, that’s a multiplier of 2.5x, not 5.3x. And it seems improbable, to say the least, that there can have been 169 quarters in the past 50 years with a loss of more than 20%, when 50 years is only 200 quarters in total. There must be a fair amount of multiple-counting going on somewhere. On top of that, not all Monte Carlo simulations assume a normal distribution. So I have a call in to Welton; I’ll try to clear all this up.

*Update 2*: OK, I’ve now talked to Welton’s Chris Keenan, and have a much better idea of what’s going on here. First of all, these are rolling compound 65-day returns: the quarterly performance is calculated every day, not every quarter. So there aren’t 200 quarters in 50 years, there are about 11,000. Why would you do that? After all, who calculates their quarter-to-date performance on a daily basis? Very few people. But institutional clients, especially, pay serious attention to quarterly returns, which is one reason why they demand those numbers from money managers. And if you’re looking at stock market performance as a whole, it makes sense to get as many datapoints as possible, rather than cherry-picking 200 datapoints on the grounds of where they fall on the calendar.
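The rolling-window method Keenan describes can be sketched in a few lines. This is a minimal illustration, not Welton's code: the daily returns below are random placeholder data standing in for the actual S&P 500 series, and the exact count of windows depends on how many trading days you assume per year.

```python
import numpy as np

def rolling_65_day_returns(daily_returns, window=65):
    """Compound return over every rolling 65-trading-day (~one quarter) window."""
    r = np.asarray(daily_returns)
    # Cumulative growth with a leading 1, so ratios give windowed compounding.
    growth = np.concatenate(([1.0], np.cumprod(1.0 + r)))
    return growth[window:] / growth[:-window] - 1.0

# Placeholder data: ~50 years of trading days, NOT real S&P 500 returns.
n_days = 50 * 252
rng = np.random.default_rng(0)
fake_daily = rng.normal(0.0003, 0.01, n_days)

rolling = rolling_65_day_returns(fake_daily)
print(len(rolling))  # 12536 overlapping "quarters", versus 200 calendar quarters
```

Each day contributes one overlapping quarter-length window, which is how 50 years of data yields on the order of ten thousand datapoints rather than 200.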

In any case, Welton came up with this chart:

The basic idea is to measure how fat the real-world tail is, compared to the normal distribution. The real tail is the area under the light-blue line, left of a 20% cutoff. And that tail is 5.3x fatter — its area is 5.3x larger — than the area under the dark-blue line, which represents the normal distribution assumed in much of Modern Portfolio Theory, and which in turn is used to make a lot of asset allocation decisions. I hope that helps clear things up a little.
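That area-ratio comparison is easy to sketch. The toy data below is a hypothetical two-regime mixture (calm market plus occasional crash), standing in for real returns; all the parameters are illustrative assumptions, not Welton's inputs, so the ratio it produces is not the 5.3x figure, just a demonstration of the same measurement.

```python
import math
import numpy as np

def normal_cdf(x, mu, sigma):
    """Normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def tail_fatness(returns, cutoff=-0.20):
    """Ratio of the empirical left-tail area to the fitted bell curve's tail area."""
    returns = np.asarray(returns)
    empirical = np.mean(returns <= cutoff)                     # real tail area
    model = normal_cdf(cutoff, returns.mean(), returns.std())  # normal-model tail area
    return empirical / model

# Toy fat-tailed data: a calm regime 99% of the time, a crash regime 1% of the time.
rng = np.random.default_rng(42)
n = 100_000
crash = rng.random(n) < 0.01
toy = np.where(crash, rng.normal(-0.25, 0.10, n), rng.normal(0.02, 0.07, n))

print(tail_fatness(toy))  # well above 1: the real tail is fatter than the model's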

*Update 3*: I’m still getting pushback on my headline, and the way that I’m blaming Monte Carlo simulations rather than just normal distributions. That’s fair: the real underlying problem here is the normal distribution itself, and if you run a Monte Carlo simulation with normally-distributed garbage in, then you’re going to get garbage out. This paper didn’t *need* to use Monte Carlo simulations, but it did:

We created an “Expected” return distribution for the S&P 500 Index using standard Monte Carlo simulation methods based on a normal distribution assumption with inputs derived from actual S&P 500 data for the previous 50-years.

The inputs, here, were the long-term return and variance of the stock market. Combining those inputs with the silly assumption that returns are normally distributed produced a very bad model when it came to predicting the fatness of the left-hand tail.
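A minimal sketch of that kind of normal-assumption Monte Carlo follows. The mean and standard deviation are hypothetical stand-ins for the actual S&P-derived inputs, so the counts it produces are illustrative only; the point is the machinery, which will badly undercount large down quarters whenever real tails are fat.

```python
import numpy as np

# Hypothetical long-run quarterly inputs, standing in for the S&P-derived figures.
MEAN_Q = 0.02  # assumed mean quarterly return
STD_Q = 0.08   # assumed quarterly standard deviation

def expected_bad_quarters(n_quarters, cutoff=-0.20, n_sims=10_000, seed=0):
    """Average count of sub-cutoff quarters across normally-distributed simulated paths."""
    rng = np.random.default_rng(seed)
    sims = rng.normal(MEAN_Q, STD_Q, size=(n_sims, n_quarters))
    return (sims <= cutoff).sum(axis=1).mean()

# 200 calendar quarters in 50 years: the bell curve predicts almost no 20% losses.
print(expected_bad_quarters(200))
```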

It’s entirely possible to run Monte Carlo simulations which don’t assume a normal distribution. But the fact is that most people, when they look at the results of Monte Carlo tests, don’t critically examine the assumptions behind them. And it’s very easy to get blinded by Monte Carlo science: I, for one, took a lot of the CPDO fanboys at face value because I trusted them to be using good models, when in fact they were using flawed models.
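For instance, a Monte Carlo simulation can draw from a fat-tailed Student-t distribution instead of a normal. In the sketch below (all parameters are illustrative assumptions), the t and the normal share the same mean and variance, yet the t assigns far more probability to a 20% quarterly loss — the choice of distribution, not the simulation machinery, drives the tail.

```python
import math
import numpy as np

def normal_tail(cutoff, mu, sigma):
    """Analytic left-tail probability under a normal distribution."""
    return 0.5 * (1.0 + math.erf((cutoff - mu) / (sigma * math.sqrt(2.0))))

def student_t_tail_mc(cutoff, mu, scale, df=4, n=1_000_000, seed=0):
    """Monte Carlo left-tail probability using fat-tailed Student-t draws."""
    rng = np.random.default_rng(seed)
    draws = mu + scale * rng.standard_t(df, n)
    return np.mean(draws <= cutoff)

# Hypothetical quarterly parameters; sigma matches the t's standard deviation.
mu, scale, df = 0.02, 0.06, 4
sigma = scale * math.sqrt(df / (df - 2))

t_tail = student_t_tail_mc(-0.20, mu, scale, df)
n_tail = normal_tail(-0.20, mu, sigma)
print(t_tail / n_tail)  # the fat-tailed model rates the crash as much more likely
```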

So the lesson here, I think, is mostly that stock-market tails are fat. But there is a sub-lesson, too, which is that Monte Carlo simulations can be very dangerous, if they’re implemented by people who don’t know what they’re doing. Including the quants at Moody’s.
