How good are economists at forecasting CPI?
Market economists are taking a pasting worldwide for not predicting the global financial crisis. But how good is the profession at more bread-and-butter tasks, such as forecasting economic data?
In Australia, Reuters surveys 15-25 economists ahead of each quarterly CPI figure. A check back over analyst forecasts for the past 17 years shows:
- the median forecast mostly gets the direction right, but tends to miss the highs and lows of the cycle
- the median forecast is pretty close about half the time
- but about a quarter of the time it’s well off the mark
- and in about 10 percent of cases it's not even close
Forecasts matter because financial markets closely watch surveys of analyst expectations for major data, and the consensus forecast is priced into the market well before official figures are released. So any big swings in the exchange rate or bill prices on the day usually reflect how the result compares with expectations, rather than the figure itself.
Comparing the median forecast with the actual outcome produces a table that looks like this.
* The spike in Q3 2000 reflects the introduction of a Goods and Services Tax. The quarterly CPI rose 3.7 percent, compared with a forecast 4.2 percent increase.
Excluding the tax-affected Q3 2000, the quarterly CPI outcome over the 17 years from Q1 1992 has ranged from -0.4 to 1.7 percent, while the median forecast has ranged from -0.4 to 1.6 percent. The quarterly CPI outcome over that period averaged 0.6 percent.
Running an eye over the table shows a few things:
* The market is more conservative than the data – it tends to underestimate the highs and overestimate the lows.
* The market mostly gets the direction right – but not always. Of the 69 observations, the market clearly missed the direction four times. It called the direction flat, when it wasn't, another six times.
The figures show the market got it exactly right just 11 times, and hasn't done so since Q4 2003. Being a little more generous and accepting a 0.1 percentage point variation around the actual outcome, the market has been in the ballpark 36 times, or some 52 percent. And it has been clearly wrong – 0.3 percentage points out or more – 18 times, or about 26 percent.
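The scoring above can be sketched in a few lines of Python. The forecast and outcome series below are invented for illustration – they are not the actual survey data – but the thresholds (0.1 and 0.3 percentage points) follow the definitions of "ballpark" and "clearly wrong" used here.

```python
# Hypothetical sketch: scoring median forecasts against CPI outcomes.
# The quarterly percentage changes below are made up for illustration.
forecasts = [0.6, 0.9, 0.4, 0.7, 0.2, 0.8]
outcomes  = [0.7, 1.2, 0.4, 0.5, 0.3, 0.8]

# Round the absolute error to one decimal place, matching the 0.1pp
# resolution of published CPI figures, to sidestep floating-point noise.
errors = [round(abs(f - a), 1) for f, a in zip(forecasts, outcomes)]

exact    = sum(1 for e in errors if e == 0.0)   # spot on
ballpark = sum(1 for e in errors if e <= 0.1)   # within 0.1 percentage points
wide     = sum(1 for e in errors if e >= 0.3)   # clearly wrong

print(f"exact: {exact}, within 0.1pp: {ballpark}, 0.3pp or more out: {wide}")
```

Running the same tallies over the real 69-observation series would reproduce the 11 / 36 / 18 counts quoted above.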
For the statistically minded, there are quite a few academic papers on forecasting accuracy. The most popular statistics seem to be the mean absolute error (the average of the forecast errors, ignoring arithmetic sign) and the root mean square error (found by squaring the errors, averaging them, then taking the square root), which gives greater weight to larger errors.
The mean absolute error is 0.18. The root mean square error is 0.24.
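Both statistics are straightforward to compute. A minimal sketch, using made-up forecast errors rather than the 69 survey observations:

```python
import math

# Hypothetical forecast errors (forecast minus outcome, in percentage
# points), invented for illustration.
errors = [0.1, -0.3, 0.0, 0.2, -0.1, 0.0]

# Mean absolute error: average of the errors without regard to sign.
mae = sum(abs(e) for e in errors) / len(errors)

# Root mean square error: square the errors, average, take the square root.
rmse = math.sqrt(sum(e * e for e in errors) / len(errors))

print(f"MAE: {mae:.2f}, RMSE: {rmse:.2f}")
```

Because the errors are squared before averaging, one 0.3-point miss moves the RMSE more than three 0.1-point misses do – which is why the RMSE (0.24) sits above the MAE (0.18) here.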
One other graph, this one showing how the outcome compared with the highest and lowest forecasts from the economists surveyed, going back to Q2 2003 – some 23 observations. The outcome fell inside the field's range 20 times, but on three occasions the result was outside it – about 13 percent. Given these forecasts are made the week before the data, it shows just how difficult it is to make long-term forecasts!
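Checking whether the outcome fell inside the surveyed high-low range works the same way. Again, the figures below are invented for illustration, not the survey's actual highs and lows:

```python
# Hypothetical (low, high) forecast ranges and outcomes, in quarterly
# percentage changes, invented for illustration.
ranges   = [(0.4, 0.9), (0.2, 0.6), (0.8, 1.4), (0.1, 0.5)]
outcomes = [0.7, 0.8, 1.2, 0.3]

# Count the quarters where the outcome landed inside the surveyed range.
inside = sum(1 for (lo, hi), a in zip(ranges, outcomes) if lo <= a <= hi)

print(f"{inside} of {len(outcomes)} outcomes inside the surveyed range")
```

Applied to the real 23 observations, this count gives the 20-in, 3-out split described above.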