Slices of Japanese business, politics and life
Financial journalists spend a lot of time surveying market economists ahead of macro-economic data releases to find out how they think the next CPI or GDP number is going to turn out. A poll of 20 or 30 economists gives a market median forecast, which will determine how traders react when the data comes out. If the figure beats expectations and points to a strong economy and likely rate rises, the currency will jump, and vice versa.
But how good are these forecasts? Why react if there's no track record for accuracy? Economists have a pretty good feel for how reliable forecasts are for different indicators, but wouldn't it be easier to have a single number that tells us how reliable forecasts are for data such as GDP, jobs figures or the CPI?
Forecast accuracy is a live topic in academic journals. There's the mean absolute error (MAE) and the mean squared error (MSE), the sMAPE and the MAD/Mean ratio, among others. Some measures depend on scale, so they can't be used to compare different series of data, such as GDP and the jobless rate. Using percentage error -- the MAPE -- can overcome this, but it gives wild results when outcomes are zero or near zero. One possible solution is to use the mean absolute scaled error -- or MASE -- suggested in 2006 by Professor Rob Hyndman at Australia's Monash University and his colleague Anne Koehler of Miami University, Ohio.
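To see why percentage error misbehaves near zero, here is a rough illustration (the numbers are invented purely for the example, not taken from any real release):

```python
# Made-up example of why MAPE breaks down when the outcome is near zero.
# Suppose quarterly GDP growth comes in at 0.1% and the consensus forecast
# was 0.4%: the miss is tiny in absolute terms, but enormous in percentage
# terms because we divide by the very small actual value.

actual = 0.1     # actual outcome, % quarter-on-quarter (hypothetical)
forecast = 0.4   # consensus forecast, % (hypothetical)

absolute_error = abs(actual - forecast)                          # 0.3 ppt
percentage_error = abs(actual - forecast) / abs(actual) * 100    # 300%

print(f"Absolute error: {absolute_error:.1f} ppt")
print(f"Percentage error: {percentage_error:.0f}%")  # blows up as actual -> 0
```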
The MASE measures how forecasters have performed against a so-called naïve forecast -- simply forecasting that next month’s result will be the same as last month’s. The lower the result, the better the forecast. So 0 is a perfect forecast, while a score above 1 means the forecast is worse than a naïve forecast.
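As a rough sketch of how the calculation works (a simplified, non-seasonal rendition, with made-up CPI numbers rather than real survey data), the MASE divides the forecasters' average absolute error by the average absolute error a naïve "same as last time" forecast would have made:

```python
# Minimal MASE sketch: scale the forecasters' mean absolute error by the
# mean absolute error of a naive "next value = previous value" forecast.
# All figures below are invented purely for illustration.

def mase(actuals, forecasts):
    """Mean absolute scaled error, simplified non-seasonal version."""
    if len(actuals) != len(forecasts) or len(actuals) < 2:
        raise ValueError("need matching series with at least two observations")

    # Average absolute error of the forecasts being judged
    forecast_mae = sum(abs(a - f) for a, f in zip(actuals, forecasts)) / len(actuals)

    # Average absolute error of the naive forecast
    naive_mae = sum(abs(actuals[i] - actuals[i - 1])
                    for i in range(1, len(actuals))) / (len(actuals) - 1)

    return forecast_mae / naive_mae


# Hypothetical monthly CPI prints (% year-on-year) and consensus forecasts
cpi_actual = [0.2, 0.3, 0.5, 0.4, 0.6, 0.5]
cpi_consensus = [0.2, 0.4, 0.4, 0.4, 0.5, 0.6]

print(f"MASE: {mase(cpi_actual, cpi_consensus):.2f}")
# Below 1: the consensus beat the naive forecast; above 1: it did worse.
```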