By John Kemp
Don’t shoot the statisticians
Economists and media commentators have lined up to question the accuracy of estimates published by Britain’s Office for National Statistics (ONS), showing gross domestic product shrank 0.2 percent in the first quarter, tipping the economy back into recession.
“The figure is disappointing and paints an unduly pessimistic picture of the state of the economy,” according to David Kern, chief economist at the British Chambers of Commerce. “Many commentators will question the accuracy of the data. Business surveys have shown a more positive picture, and we believe these give a more accurate indication of the underlying trends,” Kern said.
“We think it is likely that the preliminary estimate will be revised upwards when more information is available. For the time being, the main priority is to minimise any possible damage to business confidence. These figures are at odds with the experiences of many UK businesses, which continue to operate with guarded optimism.”
Kern’s comments are typical of criticism being leveled at the statistics authority. But much of it is unwarranted, and reveals a poor understanding of how official statistics should be interpreted.
More generally, ONS is being criticized not for the accuracy of its data, but because its gloomy numbers threaten to extinguish the optimism which bank economists and business lobbyists have been struggling to drum up in the hope it will unlock more investment, consumer spending and rising asset values, to pull the economy out of its long slump.
MODERN CULT OF STATISTICS
Modern societies have made a fetish of official statistics, particularly the national income and product accounts (NIPA) system developed by Nobel laureates Simon Kuznets and Richard Stone during the 1930s and 1940s.
NIPAs, especially the top-line figure for gross domestic product (GDP), as well as monthly employment data such as U.S. nonfarm payrolls, have become the arbiters of economic policy and the success and failure of politicians.
In a strange way, Britain’s ONS, and similar agencies like the Bureau of Labor Statistics (BLS) and Bureau of Economic Analysis (BEA) in the United States, hold the fate of politicians in their hands because they help write the political narrative.
It is only a slight exaggeration to say BEA is one of the most powerful agencies in the U.S. government. It may not have as many tanks as the Pentagon, but by measuring the success and failure of economic policies, it can make and break presidencies, as President George H.W. Bush could confirm and Barack Obama fears.
ALL STATISTICS ARE ESTIMATES
Yet most statistics users (businesses, economists, politicians, voters) neither know nor care how they are put together. If there is an image of how statistical agencies work, it is of armies of faceless bureaucrats carefully counting things, rather like an audit of widgets in a warehouse. The reality is more complicated.
If statistics agencies were to count every car, computer and hospital operation, they would have to be as large as the economy itself. Britain’s Statistical Authority employed the equivalent of just 2,995 full-time staff at the end of February 2012, plus 250 contractors, to produce a wide range of statistics, not just on the economy. In the United States, BEA has around 600 staff, of whom fewer than 200 work on the national economic accounts.
So all statistics agencies rely on surveys and sampling. The approach introduces both sampling and non-sampling errors.
Some degree of sampling error is unavoidable because the sample will not precisely match the characteristics of the whole population, but agencies take great care to ensure samples are as representative as possible to avoid systematic bias. Non-sampling errors include human error, mistakes in data entry and flaws in sample design.
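The nature of sampling error can be shown with a toy simulation (the population size and the spread of firms' output changes below are invented for illustration; only the 8,000-firm sample size echoes the construction survey mentioned later):

```python
import random

random.seed(42)

# Hypothetical "economy" of 100,000 firms whose quarterly output
# changes average -0.2 percent (the true figure we want to measure).
# The 5-point spread across firms is an assumption for illustration.
population = [random.gauss(-0.2, 5.0) for _ in range(100_000)]
true_mean = sum(population) / len(population)

# Survey a random, unbiased sample of 8,000 firms.
sample = random.sample(population, 8_000)
estimate = sum(sample) / len(sample)

print(f"true change:     {true_mean:+.2f}%")
print(f"survey estimate: {estimate:+.2f}%")
# The small gap between the two is sampling error. It shrinks as the
# sample grows, provided the sample is drawn without systematic bias.
```

The estimate lands close to, but not exactly on, the true figure even with a perfectly unbiased sample, which is why a first reading can never be treated as precise.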
In the United Kingdom, ONS GDP data are based on a very large sample of firms, including 8,000 in the construction sector, which is a far larger and more representative sample than the surveys conducted by the British Chambers of Commerce and other business lobby organisations.
In the pressure to produce timely information, all statistical agencies tend to produce “preliminary” or “advance” numbers based on a smaller sample, which are updated several times as more responses become available, and as agencies are able to cross-check their estimates with other numbers such as tax returns.
Few go as far as China’s National Bureau of Statistics (NBS), which used to release annual GDP estimates on the first day of the new year — to disbelief in the rest of the world and apparently among the country’s own leaders. But all agencies are under pressure to provide a timely “peek” at how the economy is performing, and revisions from the first estimate to the second and subsequent estimates are inevitable.
There is a trade-off between timeliness and accuracy. What is vital is that revisions are not systematically biased in one direction or the other. In the United Kingdom, first estimates of GDP (in this case -0.2 percent) are typically revised by +/- 0.2 percentage points, but the figure is as likely to be revised downwards as upwards.
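In concrete terms, a typical revision of +/- 0.2 percentage points means the first estimate is consistent with anything from a 0.4 percent contraction to a flat quarter (a back-of-the-envelope range, not an official confidence interval):

```python
first_estimate = -0.2    # preliminary quarterly GDP change, percent
typical_revision = 0.2   # typical size of later revisions, percentage points

low = first_estimate - typical_revision
high = first_estimate + typical_revision
print(f"plausible range after revisions: {low:+.1f}% to {high:+.1f}%")
```

On that reading, the headline "back into recession" rests on a number whose normal revision band straddles zero.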
SURVEYS AND FEEDBACK BIAS
Are business and consumer confidence surveys, such as those conducted by the U.S. Institute of Supply Management (ISM) and the University of Michigan, or the British Chambers, more accurate?
Some policymakers prefer them because the results are available more quickly, so they may be of more use to making decisions in real time. But surveys have their own problems, the biggest of which is that survey respondents may supply answers they think the questioner wants to hear, or based on what other people are saying.
There is clear evidence in consumer confidence surveys that respondents give answers based not just on their own experience of the economy but on what they have read in the newspapers and seen on television or talked about with friends, creating a feedback loop.
Evidence for the same effect in business surveys is less clear cut. But it is much easier to admit your own business is struggling when everyone else is saying the same thing. Kern’s real criticism of the ONS GDP numbers seems to have been not just that they were wrong (which is debatable) but that they might “damage business confidence” among firms that, in his words, continue to operate “with guarded optimism”.
Official statistics, private sector surveys, media commentators, businesses and the public all operate in a complex ecosystem in which reality, measurements of reality and perceptions interact.
Just because these GDP estimates were unwelcome does not mean ONS is making a systematic error. Critics have seized on the surprisingly large drop in construction activity (down 3.0 percent compared with the previous quarter). But construction accounts for just 8 percent of all output. Manufacturing output was also down 0.4 percent, and the service sector grew by just 0.1 percent.
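The arithmetic shows why construction alone cannot explain the fall. Weighting each sector's change by a rough share of output (only the construction share and the three sector changes come from the figures above; the other shares are stylised assumptions for illustration, not ONS weights):

```python
# Stylised sector shares and quarterly changes (percent).
sectors = {
    # name: (share of output, quarterly change in percent)
    "construction":  (0.08, -3.0),
    "manufacturing": (0.10, -0.4),  # share assumed
    "services":      (0.75,  0.1),  # share assumed
    "other":         (0.07,  0.0),  # share assumed
}

for name, (share, change) in sectors.items():
    print(f"{name:>13}: contributes {share * change:+.2f} points to GDP")

total = sum(share * change for share, change in sectors.values())
print(f"implied GDP change: {total:+.2f} percent")
# Construction's 3.0 percent fall knocks only about a quarter of a
# percentage point off GDP; the weakness is broader than one sector.
```

Even if the construction estimate were later revised up sharply, the contributions from manufacturing and near-stagnant services would still leave the economy roughly flat.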
It is not clear what ONS’ critics would have liked the agency to do. If the construction sample, which is presumably the same as used in previous quarters, showed output down 3.0 percent, the agency cannot simply over-write the figure with a more congenial one.
In any event, normal revisions to the data, or even an error in calculating the performance of the building industry, will not change the overall picture of an economy flat-lining, and likely to continue zig-zagging for the next few quarters, according to the governor of the Bank of England.
The most important lesson is to treat all economic statistics with appropriate skepticism. Statistics are always subject to some uncertainty, which is why ONS and BEA label them “preliminary estimates”. It is not possible to measure growth to one decimal place — which is why announcements that analysts at XYZ bank have cut their GDP forecast by 0.1 or even 0.2 percentage points should draw a wry smile.
It is time to have a more grown-up debate about what statistics actually mean and how they should be used, rather than criticize the statisticians who produce them. On balance they do a good job with few resources.