## The Flaw of Averages


I’m not generally a fan of management books, maybe because I’m not a manager. So it’s probably just as well that I didn’t realize that *The Flaw of Averages*, by Sam Savage, was a management book before I started reading it. The highest praise I can give it is that I *finished* reading it — all the way through — which is something I don’t think I’ve ever done with a management book. Savage is a clear and gifted writer, which helps, and I’m interested in the subject matter, which also helps.

But there was something else which kept me reading: I was waiting for the other shoe to drop, and it never did. The basic thesis of *The Flaw of Averages* is not only true but mathematically provable: when you’re dealing with probability distributions rather than certainties, you can find yourself making all manner of horrible and costly errors if you try to boil those probability distributions down to a single number like an average. Instead, contemporary software, much of it based on Savage’s own research and development, allows you to create and manipulate those distributions directly, with much more useful results.
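The provable part of that thesis can be seen in a toy example (my own, not from the book): an inventory problem where plugging the *average* demand into the profit calculation overstates the *average profit*, because unmet demand above the stock level can never be captured while shortfalls below it always hurt. All the numbers here are invented for illustration.

```python
import random

random.seed(0)

# Hypothetical setup: monthly demand is uniform between 50 and 150 units;
# we stock 100 units, sell each for $10 at a unit cost of $5, and unsold
# stock is worthless.
def profit(demand, stock=100, price=10, unit_cost=5):
    sold = min(demand, stock)
    return sold * price - stock * unit_cost

avg_demand = 100
profit_at_average = profit(avg_demand)  # the "flaw of averages" number: $500

# Averaging over the whole distribution gives a lower figure, because
# profit is capped above but not below.
trials = [profit(random.uniform(50, 150)) for _ in range(100_000)]
average_profit = sum(trials) / len(trials)  # roughly $375
```

The gap between the two numbers is exactly the error Savage is warning about: the average of the outcomes is not the outcome of the average.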

Savage advocates that companies create a new position, the Chief Probability Officer, charged with coordinating the institutional knowledge about probability distributions. He writes, in what might be the nub of the whole book:

> Managers at many levels are just bursting with probabilistic knowledge, which if properly channeled can be put to good use.

Of course, the question of how to put the knowledge of lower-level managers to good use is not a question confined to probability distributions: really, it’s the central question of all management theory. But substantially all of this book deals with the question of how best to deal with probability distributions; there’s nothing at all on how to smell them to see if they make any sense, or how to judge how accurate they are.

Most startlingly of all, there’s no discussion of what probability *is*. One of my favorite parts of Riccardo Rebonato’s magnificent book *Plight of the Fortune Tellers* is chapter 3, “thinking about probabilities”. He makes the hugely important distinction: on the one hand there’s *frequentist* probability, where you can run the same experiment thousands of times to see what different results occur. On the other hand there’s *subjective* probability: if I ask what the probability is that oil will hit $100 per barrel in the next five years, there’s no experiment you can rerun thousands of times to find out.

Many people, Savage included, love to run Monte Carlo simulations in order to try to reduce subjective probability to frequentist probability, but there’s a category error going on whenever that happens, which is one reason that financial instruments designed by running Monte Carlo simulations blew up so spectacularly during this financial crisis. Monte Carlo simulations are very bad at showing the risk of something unprecedented happening, but as Nassim Taleb loves to point out, it’s the unprecedented events — the black swans — which tend to be crucially important.
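A minimal sketch of why that category error bites (with invented numbers): a bootstrap-style Monte Carlo that resamples historical returns can never produce an outcome worse than anything already in its sample, so the unprecedented event has probability exactly zero by construction.

```python
import random

random.seed(1)

# Invented history: 1,000 quiet trading days, no crash anywhere in the sample
history = [random.gauss(0.0005, 0.01) for _ in range(1000)]
worst_seen = min(history)

# Resampling-style Monte Carlo: every simulated day is drawn from history,
# so no simulated path can contain a day worse than worst_seen
simulated = [random.choice(history) for _ in range(100_000)]
worst_simulated = min(simulated)  # can never be worse than worst_seen
```

However many paths you run, the simulation's worst day is bounded by the sample's worst day: the black swan is structurally absent from the model, not merely improbable.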

On page 291 of his book, Savage prints an admittedly hypothetical distribution of future sea levels. He then goes on to explain why the distribution is oversimplified, and why we can’t trust it in its initial oversimplified form. But the fact is that his base-case scenario, the place where he starts his analysis, is a thin-tailed normal distribution, with the chance of sea levels rising in future being exactly the same as the chance that they will fall.

I just can’t believe that that kind of normal distribution is ever a useful place to start when thinking about something like climate change — the subject of the chapter at hand. The chances of sea levels falling from their current level are tiny — *much* lower than 50%. And the histogram going out is very bumpy indeed: to a first approximation, either Greenland and the west Antarctic ice sheet melt into the sea, or they don’t. Tails don’t *get* much fatter than this one.
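To make that concrete, here's a toy comparison (my numbers, invented purely for illustration, not Savage's): a thin-tailed normal base case puts roughly a 50% chance on sea levels falling, while a crude two-regime "melt or don't" distribution puts almost no weight on a fall and real weight on an extreme rise.

```python
import random

random.seed(3)

# Thin-tailed "base case": symmetric around zero change (metres)
normal_view = [random.gauss(0.0, 0.5) for _ in range(100_000)]

# Two-regime view: ice sheets mostly hold (modest rise), but with a 10%
# chance of major melt adding several metres
def two_regime():
    if random.random() < 0.9:
        return random.gauss(0.3, 0.2)  # ice sheets hold: small rise
    return random.gauss(5.0, 1.0)      # major melt: catastrophic rise

bimodal_view = [two_regime() for _ in range(100_000)]

p_fall_normal = sum(x < 0 for x in normal_view) / len(normal_view)    # ~50%
p_fall_bimodal = sum(x < 0 for x in bimodal_view) / len(bimodal_view) # tiny
p_extreme_rise = sum(x > 3 for x in bimodal_view) / len(bimodal_view) # real
```

Under the normal base case a rise of more than three metres is essentially impossible, while the two-regime view assigns it close to a 10% chance; starting the analysis from the symmetric bell curve buries exactly the tail that matters.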

But this whole book reads as though it was written in what Taleb calls “mediocristan” as opposed to the real world of “extremistan”. Tails are thin; Black and Scholes and Merton and Markowitz are heroes; probability distributions can be modeled and tamed and understood on a seat-of-the-pants level.

It’s true that the world of *The Flaw of Averages* is better than the world we’re just emerging from, where things like value-at-risk and correlation were disastrously boiled down to single numbers. But I’m still not sure I want to live in Savage’s world: it seems to me to be lacking a healthy dose of fear of the unknown. Quite the opposite, in fact: large chunks of the book are devoted to the riches that can be struck by identifying “real options” and buying them on the cheap from people who, looking only at averages, might overlook a lot of option value.

My fear is that if Savage’s souped-up Excel spreadsheets catch on, the corporate world will fall into the overconfidence trap which did for the financial world during the Great Moderation. Savage’s statistical distributions are extremely powerful tools, both in terms of identifying profitable opportunities and in terms of avoiding massive potential downside. But if companies become particularly adept at avoiding crashes, then that’s a recipe for yet another Minsky bubble. The fewer corporate disasters we see, the more risk and leverage that companies will feel comfortable taking on, and the more likely it is that another system-wide crash will occur.

Savage’s techniques are very good at discovering existing correlations which might not be immediately visible to senior management. But they’re useless at discovering correlations which were never significant in the past but which suddenly and terrifyingly go to 1 in the future when a Black Swan arrives.

If we all take Savage’s advice, we’ll weather most storms much better than we do right now. But I fear we’ll fare even worse in the event that a hurricane hits.

**Update**: Savage responds:

> I believe in Black Swan thinking wholeheartedly, but have been amazed to discover that most of my students (both university and executive) don’t even grasp the concept of a distribution in the first place. I also agree that any technology can lull people into a sense of security, but distributions of any kind are not as bad in this regard as single numbers. And my hope is that shaking the ladder in any manner will encourage people to stop fixating on the right answer, and start thinking about the right question, which is the proper defense against the black swan.

> Actually one of my favorite interactive simulation demonstrations is black-swanish, and sharply contrasts the Right Answer and Right Question schools. I didn’t think I could do it convincingly in the book because it is like writing about what it feels like to ride a bicycle, but I will try here. It involves picking a portfolio of petroleum prospects (like the Shell model), where one of the projects (site A) is a very attractive natural gas field, but is in a politically unstable part of the world, and there is a chance it could blow up politically.

> “Right Answer” approach for dealing with the board of directors: Ladies and gentlemen, we need to estimate the probability of an overthrow at our favorite site A, so we can choose an optimal portfolio that protects us in this event.

> A committee is formed and after months of discussion it arrives at a 15% probability. Yes, there is a reasonable chance the place will blow up (let’s call it a grey swan), but it is ridiculous to think you could estimate the probability with the accuracy implied by “15%”. This analysis would rightfully deserve the wrath of Taleb.

> “Right Question” approach: We plug probabilities of overthrow ranging from 0 to 100% into an interactive simulation. As soon as a probability is plugged in, one thousand trials are run for each of 100 potential portfolios, nearly instantaneously. As we do this we observe the shape of the galaxy of portfolios in risk-return space being deformed by probabilistic forces. We also notice that for all probabilities ranging between 3% and 97%, a few portfolios stay on the efficient frontier. These portfolios all contain both site A and a less attractive site B, which is an alternate supply to the same market. Thus if A blows up, the price of gas goes up and B becomes a gold mine. This leads to the right question for the board of directors.

> Ladies and gentlemen, do we hedge site A with site B? We are having an up-or-down vote in five minutes.

> Well, now you can see why I didn’t write about it, but if you ever have time for a webex, I find the demonstration dramatic, because I had no idea that the hedge of A with B would be optimal for such a huge range of probabilities.
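The demonstration Savage describes can be sketched crudely in code. The payoffs and probabilities below are invented for illustration (Savage's actual model surely differs): site A pays handsomely unless an overthrow wipes it out, and site B, serving the same market, pays modestly unless A is lost, when gas prices spike. The "right question" move is to sweep the overthrow probability rather than debate a point estimate.

```python
import random

random.seed(2)

# Invented payoffs: site A returns 300 unless an overthrow zeroes it out;
# site B returns 80 normally but 250 when A is lost and gas prices spike.
def simulate(p_overthrow, n=10_000):
    """Monte Carlo payoffs for three portfolios at a given overthrow probability."""
    portfolios = {"A": [], "B": [], "A+B": []}
    for _ in range(n):
        overthrow = random.random() < p_overthrow
        a = 0 if overthrow else 300
        b = 250 if overthrow else 80
        portfolios["A"].append(a)
        portfolios["B"].append(b)
        portfolios["A+B"].append((a + b) / 2)  # capital split across both sites
    return portfolios

# Sweep the probability instead of arguing over "15%"
worst_case = {}
for p in (0.05, 0.15, 0.50, 0.95):
    results = simulate(p)
    worst_case[p] = {name: min(payoffs) for name, payoffs in results.items()}
# For every p in the sweep, A alone can be wiped out to zero, while the
# hedged A+B portfolio never pays less than 125.
```

The point the sweep makes is the one Savage reports: the conclusion (hedge A with B) is robust across nearly the whole range of overthrow probabilities, so the board never needed the committee's point estimate at all.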