## Why going to Monte Carlo loses you money

One of the key tools used by fixed-income analysts during the Great Moderation was their beloved Monte Carlo simulation. They would take an instrument like a CPDO or a subprime-backed CDO, and they would run it through tens of thousands of possible future worlds. If it held up in all or nearly all of those worlds, then, presto, it got a triple-A credit rating.

But Welton Investment Corporation has a great little paper out showing just how unhelpful Monte Carlo simulations can be. They applied a Monte Carlo simulation to the S&P 500 over the past 50 years, plugging in its known return and variance. Using that, they compared the predicted number of large down quarters with the actual number of large down quarters. And got this:

Over the past 50 years, someone wielding a Monte Carlo simulation would expect 32 large down quarters, with losses of more than 20%. In fact, there were 169. And they would expect no quarters at all with losses of 30% or more, when in fact there were 23. As for a loss of 40%, that simply never happens in a bell-shaped world. But it does — and did — in real life.
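To see how the normal assumption underpredicts, here's a minimal sketch of this kind of simulation. The mean and volatility below are illustrative round numbers, not Welton's actual inputs, and the loss threshold is the paper's 20% cutoff:

```python
import random

random.seed(42)

# Illustrative long-run quarterly parameters (not Welton's exact inputs)
mu, sigma = 0.02, 0.08      # ~2% mean return, 8% volatility per quarter

# One Monte Carlo "world" is 200 quarters (50 years); run many worlds
n_worlds, n_quarters = 1_000, 200
big_losses = sum(1 for _ in range(n_worlds * n_quarters)
                 if random.gauss(mu, sigma) < -0.20)

# Average number of >20% down quarters per simulated 50-year world
per_world = big_losses / n_worlds
print(f"expected quarters down >20% per 50-year world: {per_world:.2f}")
```

Under these bell-curve assumptions the model calls a 20% quarterly loss a once-in-a-lifetime event, which is exactly the kind of complacency the real data contradicts.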

There’s nothing new here, as Welton notes:

It is worth establishing that the equity market’s tail risk signature is both well-known and persistent over time. Our analysis is not anomalous, and is easily replicated using any reasonably long period of historical market data. Second, it is also worth noting that this “tail risk” effect is not just confined to the S&P 500, nor is it confined to equities exclusively. Rather, this phenomenon is seen widely across capital markets and real assets.

But it’s also undeniable that a preponderance of stock-market investors don’t really grok how much risk they’re taking: they concentrate on the positive average returns, and tend to ignore the massive downside. Given enough time, some if not all of these losses can be made back. But just how much time do most investors have? And how much do they need, before investing “for the long run” starts to be a remotely safe thing to do?

(HT: Harris)

*Update*: My sharp-eyed readers had their coffee this morning; I clearly hadn’t. There’s something very, very odd with the numbers above: they seem to be cumulative, rather than additive. So while it might be true that there were 42 quarters with a loss of more than 20%, compared to a predicted 17, that’s a multiplier of 2.5x, not 5.3x. And it seems improbable, to say the least, that there can have been 169 quarters in the past 50 years with a loss of more than 20%, when 50 years is only 200 quarters in total. There must be a fair amount of multiple-counting going on somewhere. On top of that, not all Monte Carlo simulations assume a normal distribution. So I have a call in to Welton, and I’ll try to clear all this up.

**Update 2**: OK, I’ve now talked to Welton’s Chris Keenan, and have a much better idea of what’s going on here. First of all, these are rolling compound 65-day returns: the quarterly performance is calculated every day, not every quarter. So there aren’t 200 quarters in 50 years, there are about 11,000. Why would you do that? After all, who calculates their quarter-to-date performance on a daily basis? Very few people. But institutional clients, especially, pay serious attention to quarterly returns, which is one reason why they demand those numbers from money managers. And if you’re looking at stock market performance as a whole, it makes sense to get as many datapoints as possible, rather than cherry-picking 200 datapoints on the grounds of where they fall on the calendar.
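The rolling construction is easy to sketch. With hypothetical daily returns (the numbers below are made up purely for illustration), a 65-trading-day compound return computed every day yields thousands of overlapping "quarters" rather than 200 calendar ones:

```python
import random

random.seed(0)

# Hypothetical daily returns for ~50 years of trading days (purely illustrative)
days = 50 * 252
daily = [random.gauss(0.0003, 0.01) for _ in range(days)]

# Build the compound wealth path from the daily returns
wealth = [1.0]
for r in daily:
    wealth.append(wealth[-1] * (1 + r))

# Rolling compound 65-day return, recomputed every single day
window = 65
rolling = [wealth[i + window] / wealth[i] - 1 for i in range(days - window + 1)]

print(len(rolling))   # thousands of overlapping quarters, vs only ~200 calendar ones
```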

In any case, Welton came up with this chart:

The basic idea is to measure how fat the real-world tail is, compared to the normal distribution. The real tail is the area under the light-blue line, left of a 20% cutoff. And that tail is 5.3x fatter — its area is 5.3x larger — than the area under the dark-blue line, which represents the normal distribution assumed in much of Modern Portfolio Theory, and which in turn is used to make a lot of asset allocation decisions. I hope that helps clear things up a little.
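That tail-fatness measurement can be approximated in a few lines. Since I don't have Welton's underlying data, the sketch below stands in some toy fat-tailed returns (a volatility mixture, entirely hypothetical) for the real series, then compares the empirical left-tail mass below -20% with the tail of a normal distribution fitted to the same sample:

```python
import math
import random
import statistics

random.seed(7)

# Toy fat-tailed returns: mostly calm, occasionally violent (hypothetical data)
returns = [random.gauss(0.02, 0.04 if random.random() < 0.95 else 0.30)
           for _ in range(100_000)]

cutoff = -0.20
empirical_tail = sum(r < cutoff for r in returns) / len(returns)

# Tail of a normal distribution with the sample's own mean and standard deviation
mu, sigma = statistics.fmean(returns), statistics.pstdev(returns)
normal_tail = 0.5 * (1 + math.erf((cutoff - mu) / (sigma * math.sqrt(2))))

ratio = empirical_tail / normal_tail
print(f"empirical tail is {ratio:.1f}x the fitted normal tail")
```

The ratio is the "how many times fatter" number; on real S&P data, Welton's version of this comparison is where the 5.3x comes from.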

**Update 3**: I’m still getting pushback on my headline, and the way that I’m blaming Monte Carlo simulations rather than just normal distributions. That’s fair: the real underlying problem here is the normal distribution, and if you run a Monte Carlo simulation with normally-distributed garbage in, then you’re going to get garbage out. This paper didn’t *need* to use Monte Carlo simulations, but it did:

We created an “Expected” return distribution for the S&P 500 Index using standard Monte Carlo simulation methods based on a normal distribution assumption with inputs derived from actual S&P 500 data for the previous 50-years.

The inputs, here, were the return and the variance of the stock market over the long term. Using those inputs, along with the silly assumption that returns were normally distributed, Welton ended up with a very bad model when it came to predicting the fatness of the left-hand tail.

It’s entirely possible to run Monte Carlo simulations which don’t assume a normal distribution. But the fact is that most people, when they look at the results of Monte Carlo tests, don’t critically examine the assumptions behind them. And it’s very easy to get blinded by Monte Carlo science: I, for one, took a lot of the CPDO fanboys at face value because I trusted them to be using good models, when in fact they were using flawed models.
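For instance, here's a sketch of a Monte Carlo with a fat-tailed Student-t return model instead of a normal one (all parameters hypothetical). Even with the variance matched, the t model admits far more catastrophic quarters:

```python
import math
import random

random.seed(1)

def t_return(mu, scale, df):
    """One draw from a fat-tailed Student-t return model (no normality assumed)."""
    z = random.gauss(0.0, 1.0)
    chi2 = random.gammavariate(df / 2.0, 2.0)   # chi-square with df degrees of freedom
    return mu + scale * z / math.sqrt(chi2 / df)

# Hypothetical parameters, purely for illustration
mu, scale, df, n = 0.02, 0.06, 3, 200_000

t_draws = [t_return(mu, scale, df) for _ in range(n)]
tail_t = sum(r < -0.30 for r in t_draws) / n

# Normal model with the same mean and the t model's variance (scale^2 * df/(df-2))
sigma = scale * math.sqrt(df / (df - 2))
tail_n = sum(random.gauss(mu, sigma) < -0.30 for _ in range(n)) / n

print(tail_t, tail_n)   # the fat-tailed model admits many more -30% quarters
```

The simulation machinery is identical in both cases; only the distributional assumption changes, and that assumption is what drives the tail.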

So the lesson here, I think, is mostly that stock-market tails are fat. But there is a sub-lesson, too, which is that Monte Carlo simulations can be very dangerous, if they’re implemented by people who don’t know what they’re doing. Including the quants at Moody’s.

This isn’t a problem with Monte Carlo simulations, though, this is a problem with the normality assumption. If they had programmed their Monte Carlo to run using a non-normal distribution they would have gotten very different results. Monte Carlos are actually better in this regard than most statistical tools because the assumptions are explicit – you have to choose the distribution you are modeling – and not implied.

I agree with Ulysses, this post is extremely misleading regarding the usefulness of Monte Carlo simulations, which allow you to make your tails as fat as you like. It’s surprising that you didn’t mention that your criticism hinges on the choice of the normal distribution.

Incredibly misleading post.

As far as I can tell they are assuming a normal distribution for returns (rather than, say, lognormal). And I don’t think you would even need a Monte Carlo simulation to make the point in the paper.

Ulysses is correct. One can use Monte Carlo with historical distributions and get very reasonable results, if history is comparable to the future. I’ve done it.

Now quasi-monte carlo is even better, but that’s another matter.

Am I reading this wrong? In the last 50 years (200 quarters) there were 169 down quarters? There were only 31 that posted better than -20%? What?

Not seeing this right, help me out…

Wow, are you channeling Gretchen Morgenson today, or what?

Monte carlo simulation is just a particular method of numerical integration. So whenever you are contemplating issuing a pronouncement about “monte carlo”, try substituting the words “numerical integration” instead. It gives a pretty accurate idea of how absurd your summary sounds.

Monte carlo has its limitations, like any technique. But in this case, the results are all about the integrand, not the integration. I am not about to waste my time reading the paper, but it is obvious that Welton specified a lognormal model of returns. In short, they have demanded skinny tails by fiat and are shocked, shocked! when their simulation tells them quite accurately that they are morons.

But wait! They are even stupider than they seem, because the lognormal distribution is quite tractable and they could quite easily have calculated the quantiles they report in closed form. They could have got *exactly* wrong answers, with no simulation noise! The monte carlo is just window dressing here.

something doesn’t add up. 169/200 quarters down more than 20%?

Ulysses and others are correct. This article, and the paper it refers to, are naive and the headline is simply meaningless. The whole point of Monte Carlo simulations is that they allow you to make very general distributional assumptions, but the paper doesn’t do that. If you’re going to assume a normal distribution on a single variable, then there’s no need to do a Monte Carlo simulation to determine the number of tail events! You can read them off a table of normal variates. Yes, S&P returns are not normally distributed and have fat tails. This has been well-known for decades. Monte Carlo simulations, correctly applied, can make reasonably accurate predictions in many cases, certainly better than that shown here. The Welton paper shows that if you make a common undergraduate error you get a terrible prediction. Gee thanks, well done. It says nothing about the effectiveness of Monte Carlo as a method in general.

I assume everyone has by now taken the time to read the Welton paper, except maybe Greycap. I agree with Felix, it is a great little paper. Long live random walks (Monte Carlo), fat tails and “the results are only as good as the assumptions”.

I did read their paper, and everyone’s assumptions are right. They assume a normal distribution estimated using historical returns.

Given that, as others have said, they don’t even need a Monte Carlo simulator. There is an analytical / closed-form solution.
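The commenter's point is easy to verify: for a normal distribution, the tail probability comes straight from the error function, no simulation required. A sketch with hypothetical quarterly parameters:

```python
import math

def normal_cdf(x, mu, sigma):
    """Closed-form normal CDF via the error function; no Monte Carlo needed."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

# Hypothetical quarterly mean and volatility, for illustration only
mu, sigma = 0.02, 0.08

p_down_20 = normal_cdf(-0.20, mu, sigma)    # P(quarterly return < -20%)
expected_in_50y = 200 * p_down_20           # out of 200 calendar quarters
print(p_down_20, expected_in_50y)
```

One function evaluation replaces the entire simulation, which is exactly why commenters call the Monte Carlo window dressing here.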

Actually, the paper does establish one thing. The Welton Investment Corporation should be avoided. They reveal themselves to be either extremely unsophisticated or dishonest. Or both.

I am almost happy with Mr. Salmon for passing along this tripe. With a couple of exceptions, all the peer reviewers here issued the correct ‘reject’ call. This is not a revise-and-resubmit situation, this analysis is broken fishwrap.

I was going to suggest that the actual numbers in the first column were cumulative, which might make some sense (42 total 20%+ down quarters rather than 169, though even that’s a lot), but not even that works, because as you go up the column, some values are less than the ones before them.

Curmudgeon — you’re right, the column isn’t cumulative. With any luck my second update will help to clear things up a bit.

Part of the problem is that the market goes through periods where normal distributions don’t apply – the Monte Carlo simulations, while valid for the periods of time when they do, just aren’t appropriate when order doesn’t exist in the markets.

Although, if you wanted a relatively prudent investing strategy, perhaps adapting your retirement or investment plan to withstand an absolute worst case scenario might be the way to go, with the idea that your actual results will likely be far better than those for which you planned.

Accepting the challenge of bob33, I did actually look at the linked material. Bob is not quite Voltaire material, for though not a paper and not great, the document is mercifully brief. The summary: it is a sales pitch for the investment services offered by Welton, lurking behind door number 3, “harness”. Checking their website, I see that “Welton is an alternative investment manager specializing in managed futures and global macro strategies.” What a surprise. Say, you don’t work for Welton, do you Bob?

The 5.3 number is pure voodoo, as others have noted. It is shock and awe intended to generate sales.

However, Welton is innocent of the misunderstanding about monte carlo; that is all Felix’s fault.

With regard to a post a couple of months ago by Felix, this is one reason why financial journalists need to have a basic understanding of the underlying math. Being a reporter and simply parroting what economists say is honorable enough work, but if you want to critically evaluate those pronouncements, you need to speak the language.

@Greycap “Welton is innocent of the misunderstanding about monte carlo; that is all Felix’s fault.”

True, but do check out Felix’s Update 3 above.

a total disservice to the many professionals who use MCS and know what they are doing.

Most of the comments about Monte Carlos are spot on, especially the first one. But one post is clearly incorrect: Monte Carlos do not assume a normal distribution. I, for example, use a neutron/photon transport Monte Carlo that assumes nothing but the physics underlying the transport. It very accurately predicts transport in very complex geometry.

Now, we know the physics underlying the transport; we know far less about economics. We do know some things. What Monte Carlos allow us to do is determine results from a given set of inputs. If the input assumptions are wrong, then the results will be wrong. But the advantage of Monte Carlos is that we need not make a false simplifying assumption just to use the technique. A good example of this is the value of hindcasting as a means of training hurricane models.

Economic predictions will always have uncertainties. But Monte Carlo techniques can utilize every scrap of information that is available. One can look at all sorts of patterns, and try to hindcast them with assumptions. That doesn’t guarantee predictive power, but one has a better chance with a model that hindcasts near-perfectly than with one that is based on false simplifying assumptions, like normal distributions.

I don’t know the author, but I’d guess that the author is not well versed in modeling.

In his autobiography, Stanislaw Ulam, who came up with the Monte Carlo method, was rather dismissive of it. He attributed the method’s popularity to the name he gave it.

Anyone thinking of investing with the authors of this paper should thank Mr Salmon for demonstrating what utter frauds they are. It is a slightly less clever version of what Taleb does – make trivial claims dressed up as “clever” showing that those guys who seemed so much smarter were actually idiots. Of course Taleb is smart enough to be vague about what he is claiming because then you can’t actually nail him down.

By the way, you have any proof that Moody’s were assuming lognormal distributions in their models?

Let’s not lose sight of the facts here. Regardless of the “Monte Carlo” debate…it is clear from recent events in the financial markets that the “science” of predictive modeling in the realm of economics and financial markets/assets has a long way to go. In the field, there are simply too many “unknowns” that are still unknown. The “discrepancies” in the data between “predictive” and “actual” make an obvious statement as to the degree to which this is demonstrably true. This is why many of us who use statistical-based analyses/methodologies focus on relative or “comparative” analyses between variables as opposed to “absolute outcomes” or “predictions.”

I’m not yet going to call myself an “advocate” of his, but I’ve been learning a lot from James Otar, http://www.retirementoptimizer.com/, and he seems to have some valuable ideas. He rejects Monte Carlo simulation entirely in favor of using historical scenarios based on the last hundred years or so. That is, he transports you and your savings to the year 1900, then the year 1901, 1902, and so on, and evaluates strategies accordingly.

There are of course criticisms that can be leveled, but neglecting tail risk does not seem to be one of them.
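The historical-scenario idea can be sketched in a few lines. Everything below is hypothetical (made-up annual returns, a made-up withdrawal rule), just to show the mechanics: instead of random draws, you walk a portfolio through every historical starting year in turn:

```python
# Made-up annual returns standing in for a century of historical data
hist = [0.08, -0.12, 0.15, 0.04, -0.25, 0.30, 0.07, 0.11, -0.05, 0.09]

def ending_balance(start, years, balance=1_000_000.0, withdraw=40_000.0):
    """Withdraw each year, then earn that historical year's return."""
    for y in range(years):
        balance = (balance - withdraw) * (1.0 + hist[(start + y) % len(hist)])
    return balance

# One outcome per historical starting point; the spread shows sequence risk
outcomes = [ending_balance(s, years=5) for s in range(len(hist))]
print(min(outcomes), max(outcomes))
```

Because the worst historical sequences (with their genuinely fat tails) are replayed verbatim, tail risk is baked in rather than assumed away.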

Oh dear. Worse and worse.

Suppose I asked you to guess the historical quarterly return measured back from a random date in the last 50 years. You would answer some number. Now if I told you that the historical return for the quarter starting one day before the chosen date was -40%, would your answer change? This effect is called serial autocorrelation. Even if you believe that quarterly returns from non-overlapping periods are drawn independently from the same normal distribution, returns from overlapping periods are obviously strongly correlated.

Now, treat these rolling returns as sample draws from a distribution and measure the sample mean and variance. If you take independent draws from a normal distribution with this mean and variance, will the sample distribution converge to the historical distribution? No it will not – not even if the real historical distribution over non-overlapping periods was a stationary normal distribution.

It is a bit of work to set up a monte carlo that correctly simulates overlapping returns given the assumption that non-overlapping returns are stationary and normal. Has Welton done this work? There is no way of telling from the sales pitch. But given that they have a financial incentive not to do so, I am skeptical.

Here is a question. What is the best model of stock markets out there? The data presented here is only for 50 years and for one country.

They could have simply said “As everyone knows, stock distributions have fatter tails than the normal distribution”. But that would have made for a very short article, no alarming claims (since it is hardly news), and not much of a basis for marketing their services.

We agree the most important thing with Monte Carlo simulations is actually the relevance of the stochastic model and the assumptions it is made of.

By the way, I developed a new risk analysis tool called Statscorer which allows you to do Monte Carlo simulations within Excel and in-depth stochastic modeling, while remaining very simple.

I will be interested to know your opinion since it’s a bit different from its competitors: you can correlate input variables directly with formal expressions (e.g.: X3=Exp(X1)+ln(X2^2+1), …), export raw data to a text file, and other good stuff.

You can download a 15-day evaluation version freely (no personal information required). Also I will give a free 3-month subscription to the readers of this blog (this offer runs until march 2011 ;-). Just send me an email to support@statscorer.com mentioning you are a reader of Reuter’s blog.

You can visit http://www.statscorer.com to get many detailed examples of how to create stochastic models in Excel with Statscorer. Finally, I will be glad to help you to define your finest financial model.