How economists get tripped up by statistics

By Felix Salmon
July 10, 2012

[Image: scatter.tiff (the scatter chart discussed below)]

Look at this scatter chart. There will be a quiz. Another dot is going to be added to this chart, in line with the distribution you see here. You get to choose the X value of that dot — and your aim is to get a Y value greater than zero. So here’s the question: at what value of X will you have a 95% chance of getting a dot above the X axis, i.e. a dot with a positive Y value?

Emre Soyer and Robin Hogarth of the Universitat Pompeu Fabra, in Barcelona, recently asked a group of economists that question — all of them faculty members in economics departments at leading universities worldwide. There’s a right answer: it’s 47. And there’s a spectacularly wrong answer: anything less than 10. The economists being asked the question are smart, highly educated people who are intimately familiar with regression analysis. And it turns out that as a group, they did very well: just 3% got the question very wrong.

But the economists also like being precise. They don’t like eyeballing answers: they like to be certain. And so, Soyer and Hogarth write:

Most of the participants, including some who made the most accurate predictions, protested in their comments about the insufficiency of information provided for the task. They claimed that, without the coefficient estimates, it was impossible to determine the answers and that all they did was to “guess” the outcomes approximately.

This is fair enough. So Soyer and Hogarth found some other economists — chosen randomly in exactly the same way. And they presented those economists with all the coefficients and data they could want, just as they would be presented in an academic paper. It looked like this:

[Image: prose.tiff (the regression results, presented as they would appear in an academic paper)]

Everything’s there — the formula for the random perturbation, the means and standard deviations for both variables, the OLS fit, the lot. With all this information to hand, economists can be much more accurate when asked to work out a value of X such that there’s a 95% chance that Y will be greater than zero.
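
For anyone wondering where a number like 47 comes from, the arithmetic is simple once the regression output is in front of you. If the fitted model is Y = a + bX plus normally distributed noise with standard deviation σ, then Y is positive with 95% probability once the fitted value a + bX clears roughly 1.645σ, the one-sided 95% margin of the noise. Here is a minimal sketch of that calculation; the coefficients below are placeholder values chosen only to be roughly consistent with the answer reported above, not the paper's actual estimates.

```python
from statistics import NormalDist

# Placeholder regression output (illustrative values, not the study's actual estimates):
# fitted model  Y = a + b*X + eps,  with eps ~ Normal(0, sigma^2)
a = 0.0       # intercept
b = 1.0       # slope
sigma = 29.0  # standard deviation of the residuals

# P(Y > 0 | X) >= 0.95 requires a + b*X >= z * sigma,
# where z is the 95th percentile of the standard normal distribution (about 1.645).
z = NormalDist().inv_cdf(0.95)
x_needed = (z * sigma - a) / b
print(f"z = {z:.3f}; X must be at least about {x_needed:.1f}")
```

With these placeholder values the answer lands near 47; an answer below 10 is roughly what you get if you look only at the fitted line and ignore the scatter around it.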

But here’s the thing: when the economists were shown both the graph and the detailed numbers, the number of economists getting the answer spectacularly wrong — the number giving an answer of less than 10 — soared. Just working with their eyeballs, 3% of economists got it wrong. Working with the numbers as well, that proportion rose to 61%! And when a third group was given the numbers and no chart at all, fully 72% of them — professional economists all — got the answer badly wrong.

What the authors conclude is that economists tend to over-interpret academic papers — they think the papers show much more than they in fact do. And the more academic papers economists read, the more misguided they’ll become:

By reading journals in economics they will necessarily acquire a false impression of what knowledge gained from economic research allows one to say. In short, they will believe that economic outputs are far more predictable than is in fact the case.

We make all of the above statements assuming that econometric models describe empirical phenomena appropriately. In reality, such models might suffer from a variety of problems associated with the omission of key variables, measurement error, multicollinearity, or estimating future values of predictors. It can only be shown that model assumptions are at best approximately satisfied (they are not “rejected” by the data)… There is also evidence that statistical significance is often wrongly associated with replicability.

I’m certainly guilty of this kind of thing: I see a paper demonstrating a statistically significant correlation between one variable and another, and I generally assume that if the experiment were repeated, we’d see the same thing again. But that’s not actually true.
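
To make that concrete, here is a toy simulation (my own illustration, not anything from Soyer and Hogarth's paper). It repeatedly runs a modestly powered two-group study, keeps only the runs that come out statistically significant, and then checks how often an exact replication of each of those runs is also significant. With small samples and a modest true effect, the answer is well under half the time.

```python
import random
from statistics import NormalDist, mean, stdev

random.seed(0)
effect, n = 0.3, 50                   # modest true effect, small samples (illustrative values)
z_crit = NormalDist().inv_cdf(0.975)  # two-sided 5% critical value (normal approximation)

def study_is_significant():
    """Simulate one two-group study and test whether the difference in means is 'significant'."""
    treated = [random.gauss(effect, 1) for _ in range(n)]
    control = [random.gauss(0, 1) for _ in range(n)]
    se = (stdev(treated) ** 2 / n + stdev(control) ** 2 / n) ** 0.5
    return abs(mean(treated) - mean(control)) / se > z_crit

originals = replications = 0
for _ in range(2000):
    if study_is_significant():        # the original study "finds" the effect
        originals += 1
        if study_is_significant():    # an exact replication with fresh data
            replications += 1

print(f"{originals} significant originals; "
      f"{replications / originals:.0%} of their replications were also significant")
```

The replication rate here is simply the statistical power of the design, and nothing about clearing p < 0.05 the first time pushes it anywhere near 95%.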

And so it’s easy to see, I think, how economists become convinced of things that the rest of us aren’t sure of at all — and how the economists often end up being wrong, while the rest of us were right to be dubious.

What’s more, if economists are bad at this kind of thing, just imagine what other social scientists are like, or even doctors. Next time you see a piece of pop-science talking about interesting findings from some paper or other, bear this in mind. A lot of papers are written; a few of them have interesting findings. Those are the papers which tend to get publicity. But there’s also a very good chance that they don’t actually show what the headlines say that they show.

(Via Dave Levine. And please, don’t get me started on all the meta-implications of this post; suffice to say I’m fully aware of them.)
