## The Flaw of Averages

I’m not generally a fan of management books, maybe because I’m not a manager. So it’s probably just as well that I didn’t realize that *The Flaw of Averages*, by Sam Savage, was a management book before I started reading it. The highest praise I can give it is that I *finished* reading it — all the way through — which is something I don’t think I’ve ever done with a management book. Savage is a clear and gifted writer, which helps, and I’m interested in the subject matter, which also helps.

But there was something else which kept me reading: I was waiting for the other shoe to drop, and it never did. The basic thesis of *The Flaw of Averages* is not only true but mathematically provable: when you’re dealing with probability distributions rather than certainties, you can find yourself making all manner of horrible and costly errors if you try to boil those probability distributions down to a single number like an average. Instead, contemporary software, much of it based on Savage’s own research and development, allows you to create and manipulate those distributions directly, with much more useful results.
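The flaw itself is easy to reproduce. Here's a minimal sketch (my own invented numbers, not an example from the book): sales depend nonlinearly on uncertain demand, so a plan built on average demand systematically overstates average sales.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented numbers: demand averages 1000 units but is uncertain;
# we stock 1000 units, so units sold = min(demand, stock).
stock = 1000
demand = rng.normal(1000, 300, size=100_000)

# Plugging the *average* demand into the model says we sell 1000 units.
plan_on_average = min(demand.mean(), stock)

# Averaging over the *distribution* says we sell noticeably fewer,
# because shortfalls cost us sales while surpluses can't add any.
average_of_plan = np.minimum(demand, stock).mean()

print(plan_on_average)   # ~1000
print(average_of_plan)   # ~880: min(E[demand], stock) != E[min(demand, stock)]
```

The single number isn't just imprecise; it's biased, and always in the same direction.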

Savage advocates that companies create a new position, the Chief Probability Officer, charged with coordinating the institutional knowledge about probability distributions. He writes, in what might be the nub of the whole book:

> Managers at many levels are just bursting with probabilistic knowledge, which if properly channeled can be put to good use.


Of course, the question of how to put the knowledge of lower-level managers to good use is not a question confined to probability distributions: really, it’s the central question of all management theory. But substantially all of this book deals with the question of how best to deal with probability distributions; there’s nothing at all on how to smell them to see if they make any sense, or how to judge how accurate they are.

Most startlingly of all, there’s no discussion of what probability *is*. One of my favorite parts of Riccardo Rebonato’s magnificent book *Plight of the Fortune Tellers* is chapter 3, “Thinking about Probabilities”. He makes the hugely important distinction: on the one hand there’s *frequentist* probability, where you can run the same experiment thousands of times and see what different results occur. On the other hand there’s *subjective* probability: if I ask what the probability is that oil will hit $100 per barrel in the next five years, there’s no experiment you can repeat thousands of times.

Many people, Savage included, love to run Monte Carlo simulations in order to try to reduce subjective probability to frequentist probability, but there’s a category error going on whenever that happens, which is one reason that financial instruments designed by running Monte Carlo simulations blew up so spectacularly during this financial crisis. Monte Carlo simulations are very bad at showing the risk of something unprecedented happening, but as Nassim Taleb loves to point out, it’s the unprecedented events — the black swans — which tend to be crucially important.
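The point about unprecedented events is mechanical, not philosophical. A sketch (invented data; bootstrap resampling from history is just one common flavor of Monte Carlo): a simulation that draws from observed history can never, by construction, produce a day worse than history's worst.

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented "history": ten years of calm daily returns in which
# the worst day ever observed was a 3% loss.
history = np.clip(rng.normal(0.0005, 0.01, size=2500), -0.03, None)

# A bootstrap Monte Carlo resamples that history a million times...
simulated = rng.choice(history, size=1_000_000)

# ...and by construction never shows anything worse than the past.
print(simulated.min() >= history.min())  # True: the swan stays invisible
```

A million simulated days, and not one of them steps outside the envelope of the 2,500 observed ones.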

On page 291 of his book, Savage prints an admittedly hypothetical distribution of future sea levels. He then goes on to explain why the distribution is oversimplified, and why we can’t trust it in its initial oversimplified form. But the fact is that his base-case scenario, the place where he starts his analysis, is a thin-tailed normal distribution, with the chance of sea levels rising in future being exactly the same as the chance that they will fall.

I just can’t believe that that kind of normal distribution is ever a useful place to start when thinking about something like climate change — the subject of the chapter at hand. The chances of sea levels falling from their current level are tiny — *much* lower than 50%. And the histogram going out is very bumpy indeed: to a first approximation, either Greenland and the west Antarctic ice sheet melt into the sea, or they don’t. Tails don’t *get* much fatter than this one.
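To see why the starting point matters, compare two distributions with the same mean (all numbers invented purely for illustration): a Savage-style thin-tailed normal versus the lumpy either-the-ice-sheets-go-or-they-don't alternative described above.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1_000_000

# Thin-tailed base case: sea-level change symmetric around +0.5 m.
normal_case = rng.normal(0.5, 0.3, size=n)

# Lumpy alternative with the *same* 0.5 m mean: 90% chance of
# roughly no change, 10% chance the ice sheets melt and it's ~5 m.
melt = rng.random(n) < 0.10
bimodal_case = np.where(melt,
                        rng.normal(5.0, 0.5, size=n),
                        rng.normal(0.0, 0.2, size=n))

print(normal_case.mean(), bimodal_case.mean())  # both ~0.5
print((normal_case > 3).mean())   # ~0: a 3 m rise is "impossible"
print((bimodal_case > 3).mean())  # ~0.10: a 3 m rise is the main risk
```

Averages identical; risk assessments not remotely comparable.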

But this whole book reads as though it was written in what Taleb calls “mediocristan” as opposed to the real world of “extremistan”. Tails are thin; Black and Scholes and Merton and Markowitz are heroes; probability distributions can be modeled and tamed and understood on a seat-of-the-pants level.

It’s true that the world of *The Flaw of Averages* is better than the world we’re just emerging from, where things like value-at-risk and correlation were disastrously boiled down to single numbers. But I’m still not sure I want to live in Savage’s world: it seems to me to be lacking a healthy dose of fear of the unknown. Quite the opposite, in fact: large chunks of the book are devoted to the riches that can be struck by identifying “real options” and buying them on the cheap from people who, looking only at averages, might overlook a lot of option value.

My fear is that if Savage’s souped-up Excel spreadsheets catch on, the corporate world will fall into the overconfidence trap which did for the financial world during the Great Moderation. Savage’s statistical distributions are extremely powerful tools, both in terms of identifying profitable opportunities and in terms of avoiding massive potential downside. But if companies become particularly adept at avoiding crashes, then that’s a recipe for yet another Minsky bubble. The fewer corporate disasters we see, the more risk and leverage that companies will feel comfortable taking on, and the more likely it is that another system-wide crash will occur.

Savage’s techniques are very good at discovering existing correlations which might not be immediately visible to senior management. But they’re useless at discovering correlations which were never significant in the past but which suddenly and terrifyingly go to 1 in the future when a Black Swan arrives.

If we all take Savage’s advice, we’ll weather most storms much better than we do right now. But I fear we’ll fare even worse in the event that a hurricane hits.

**Update**: Savage responds:

I believe in Black Swan thinking whole heartedly, but have been amazed to discover that most of my students (both university and executive) don’t even grasp the concept of a distribution in the first place. I also agree that any technology can lull people into a sense of security, but distributions of any kind are not as bad in this regard as single numbers. And my hope is that shaking the ladder in any manner will encourage people to stop fixating on the right answer, and start thinking about the right question, which is the proper defense against the black swan.

Actually one of my favorite interactive simulation demonstrations is black swanish, and sharply contrasts the Right Answer and Right Question schools. I didn’t think I could do it convincingly in the book because it is like writing about what it feels like to ride a bicycle, but I will try here. It involves picking a portfolio of petroleum prospects (like the Shell model), where one of the projects (site A) is a very attractive natural gas field, but is in a politically unstable part of the world, and there is a chance it could blow up politically.

“Right Answer” approach for dealing with the board of directors: Ladies and gentlemen, we need to estimate the probability of an overthrow at our favorite site A, so we can choose an optimal portfolio that protects us in this event.

A committee is formed and after months of discussion it arrives at a 15% probability. Yes, there is a reasonable chance the place will blow up (let’s call it a grey swan), but it is ridiculous to think you could estimate the probability with the accuracy implied by “15%”. This analysis would rightfully deserve the wrath of Taleb.

“Right Question” approach: We plug probabilities of overthrow ranging from 0 to 100% into an interactive simulation. As soon as a probability is plugged in, one thousand trials are run for each of 100 potential portfolios, nearly instantaneously. As we do this we observe the shape of the galaxy of portfolios in risk-return space being deformed by probabilistic forces. We also notice that for all probabilities between 3% and 97%, a few portfolios stay on the efficient frontier. These portfolios all contain both site A and a less attractive site B, which is an alternate supply to the same market. Thus if A blows up, the price of gas goes up and B becomes a gold mine. This leads to the right question for the board of directors.

Ladies and gentlemen, do we hedge site A with site B? We are having an up or down vote in five minutes.

Well, now you can see why I didn’t write about it, but if you ever have time for a WebEx, I find the demonstration dramatic, because I had no idea that the hedge of A with B would be optimal for such a huge range of probabilities.
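Savage's demonstration is easy enough to caricature in code. A sketch (payoffs invented; nothing here is from his actual model): sweep the overthrow probability and watch the A-plus-B hedge stay low-risk across nearly the whole range, while A alone swings wildly.

```python
import numpy as np

rng = np.random.default_rng(3)
trials = 1000

# Invented payoffs per dollar invested:
# site A: lucrative gas field, wiped out by an overthrow;
# site B: alternate supply to the same market, so it booms if A blows up.
for p_overthrow in (0.05, 0.25, 0.50, 0.75, 0.95):
    blown = rng.random(trials) < p_overthrow
    site_a = np.where(blown, -0.5, 2.0)
    site_b = np.where(blown, 2.5, 0.8)
    hedged = 0.5 * site_a + 0.5 * site_b   # half A, half B

    # A alone is a gamble; the hedge pays between 1.0 and 1.4 no matter what.
    print(p_overthrow, round(site_a.std(), 2), round(hedged.std(), 2))
```

The hedge's attractiveness barely depends on the overthrow probability, which is exactly why the "Right Question" approach doesn't need the committee's spuriously precise 15%.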

*The Halo Effect* by Rosenzweig is a great management book that’s not really a management book either. If you haven’t already read it, give it a go. If you like Taleb’s stuff and this book, you’ll probably like *The Halo Effect*.

http://en.wikipedia.org/wiki/Variance

“Whereas the mean is a way to describe the location of a distribution, the variance is a way to capture its scale or degree of being spread out.”

Savage would be the first to tell you that his tool is primarily for closed systems (dice, cards, etc.), not open systems like the market. It’s a very clever tool for closed systems. But it can be adapted for the fat tails most of us have known about for around half a century; Taleb seems to have just discovered them. Another feature of open systems, as many including Tavakoli pointed out in her credit derivatives book, is that none of the models capture risks that only management can deal with. Information asymmetry is just one example.

As far as I can tell, the mean has one useful property: given a set of measurements of some quantity, the mean of that set of measurements is an unbiased estimate of the quantity’s actual value. For anything else, caveat computor.

Yes, and yes. This post demonstrates the power of being able to think about mathematics rather than arithmetic.

I am not a mathematician, and do not fully understand the technicalities of the arguments. Does this make me more or less like the standard board member? Because of the cogent thinking and writing, I am able to understand and appreciate both Felix’s critique of Savage’s book and Savage’s well-done response.

Were I a board member faced with the decision outlined in Savage’s example, I would be more comfortable making the decision to hedge site A with site B, simply because I would feel far more informed about the actual risks presented, as well as infinitely more confident in the management team’s ability to manage those risks…

The average is a measure of just the central tendency. There are other measures which summarize a distribution. Variance, skewness, and kurtosis are measures of other characteristics… have you heard of them, Savage? Who looks at only the averages and takes decisions, btw?

I have not only heard of variance, skewness, and kurtosis, but refer to them as RED WORDS. I define these as terms left over from the steam era that cannot be uttered in a singles bar. They are often used by statisticians to blow smoke. Instead I recommend using the entire distribution, stored in something called a Distribution String or DIST. It consists of 1000 Monte Carlo trials stuffed, like a genie in a bottle, into the single cell of a spreadsheet. Every time you change something, all thousand trials rip through your model like bullets through a machine gun. Check out the demo models at ProbabilityManagement.org.
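For what it's worth, the DIST idea is easy to mimic outside a spreadsheet. A sketch in Python (my analogy, not Savage's actual DIST format, which packs the trials into a single spreadsheet cell): each uncertain quantity is a vector of 1000 trials, and ordinary arithmetic runs all thousand trials at once.

```python
import numpy as np

rng = np.random.default_rng(4)
TRIALS = 1000

# Each uncertain "cell" holds 1000 Monte Carlo trials instead of one number.
price = rng.lognormal(mean=4.0, sigma=0.3, size=TRIALS)   # uncertain price
volume = rng.normal(500, 100, size=TRIALS)                # uncertain volume
fixed_cost = 20_000

# One line of ordinary arithmetic, and all 1000 trials rip through it.
profit = price * volume - fixed_cost

# Now interrogate the whole distribution, not just its average:
print(profit.mean())          # the average...
print((profit < 0).mean())    # ...and the chance of a loss, which no
                              # single average could ever tell you
```

The model looks exactly like the single-number version; only the cells got wider.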

I find these types of discussions extremely frustrating because I am a manager. They are a sort of quant porn, if you ask me.

(P.S. Felix, I don’t read management books either.)

The probability (but not the value, and don’t get me started on expected value) of events that are the result of human behavior is impossible, in my mind, to pin down with mathematical precision, Freakonomics notwithstanding. I read this and think, “angels dancing on the head of a pin…” Which reminds me of one of my favorite jokes:

How do you make God laugh?

Tell him your plans.

As to angels on the head of a pin, the example I use is discussions of how to ride a bicycle. This is absurd. You don’t learn to ride a bike by talking about it but by riding one. That’s why we need models: to express things that are otherwise meaningless, and connect the seats of our intellects to the seats of our pants.

I think an area outside of business decisions that could benefit from a distribution analysis instead of a mean would be life expectancy. It’s often pointed out that the US ranks 50th in life expectancy, but the CIA factbook this number comes from doesn’t list a variance, and it’s accurate to a hundredth of a year (which is ~3 days). While the arithmetic behind this average is certainly calculated accurately, using such a precise number for so broad a group is ridiculous. Comparing the distributions (along with statements like “78% of the population will survive past 50”) would be much more educational.
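That suggestion is easy to sketch with made-up data (the shape below is invented, not real demographic data): the headline mean hides exactly the information that a couple of simple distribution statements would carry.

```python
import numpy as np

rng = np.random.default_rng(5)

# Invented lifespans: a small early-mortality lump plus a broad adult peak.
early = rng.exponential(2.0, size=3_000)
adult = rng.normal(80, 12, size=97_000)
lifespans = np.clip(np.concatenate([early, adult]), 0, 110)

# The headline number, quoted to a spuriously precise hundredth of a year...
print(round(lifespans.mean(), 2))

# ...versus distribution statements of the kind suggested above:
print((lifespans > 50).mean())                 # fraction surviving past 50
print(np.percentile(lifespans, [10, 50, 90]))  # spread, not just the center
```

Two populations could share that mean to the hundredth of a year and still have completely different survival curves.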

Well, isn’t the hedging of site A with site B just something a good oil executive could tell you? Since site A is so potentially unstable, just put a minimal amount of cash into it. If it goes well, you still benefit; if it goes badly, site B compensates.

Ross Perot keeps most of his money in safe investments; any risky investment he wants to get involved in, he funds with cash from the dividends of those safe investments.

Sam Savage’s book is great. I work with a bunch of very intelligent, highly educated people in a very quantitative field (investment management) – and yet, for all the sophisticated analysis we apply – the flaw of averages lurks everywhere and crops up in the most seemingly benign fashion and impacts decision-making all the time. Fixating on ‘black swans’ is all the rage these days, but let’s not forget to tie our shoe-laces.