The curious predictability of the payrolls report

By Felix Salmon
January 4, 2013

I have very little time for conspiracy theories when it comes to the monthly payrolls report. But there is something odd going on right now, as Justin Wolfers noted this morning:

Here’s the chart. What I’m showing here is not the total number of people on US payrolls each month. And I’m not even showing the change in the total number of people on US payrolls each month. Instead, I’m showing the second derivative: the change in the change in the total number of people on US payrolls each month.
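
For concreteness, here is a minimal sketch of that calculation in Python, using made-up payroll levels rather than actual BLS figures: difference the series once to get the monthly job gains, then difference it again to get the change in the change that the chart plots.

    # Toy payroll levels in thousands of jobs -- illustrative numbers, not BLS data.
    payrolls = [134_021, 134_183, 134_327, 134_482]

    # First difference: jobs added each month (the headline number).
    monthly_gain = [b - a for a, b in zip(payrolls, payrolls[1:])]

    # Second difference: the change in the change, which is what the chart shows.
    change_in_change = [b - a for a, b in zip(monthly_gain, monthly_gain[1:])]

    print(monthly_gain)       # [162, 144, 155]
    print(change_in_change)   # [-18, 11]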

Every month, economists and traders look first at the headline jobs growth number: this month, it’s 155,000. Last month, it was 161,000. Which means the difference is just 6,000. That difference, from month to month, is what I’m charting here. Sometimes the difference is positive and sometimes it’s negative, but if you look just at its magnitude, then over the past two years the headline number has differed from the previous month’s by an average of 51,000, with a median difference of 30,000. And over the past five months, the changes have been much smaller still.
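
The 51,000 and 30,000 figures are simply the mean and median of those absolute month-to-month differences. A quick sketch of the calculation, with invented headline prints standing in for the actual two-year series:

    from statistics import mean, median

    # Invented headline job-gain prints; the real calculation uses the last 24 months.
    headline = [161_000, 155_000, 132_000, 171_000, 192_000, 148_000]

    # Absolute difference between each print and the previous one.
    diffs = [abs(b - a) for a, b in zip(headline, headline[1:])]

    print(mean(diffs), median(diffs))   # with these made-up numbers: 26600 23000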

It’s entirely reasonable to look at these numbers and conclude that the labor-market recovery is “steady-as-she-goes”. Each month, we get another 160,000 new jobs, month in, month out, with some very modest month-to-month variation in the number.

But here’s the problem. Let’s say that the US economy was adding exactly 160,000 new jobs every month, with no variation at all in that number. In that case, what would we expect the monthly payrolls report to show? It would not show an exact 160,000 print every month, because there are errors in the data, as the BLS technical note does a very good job of explaining.

The errors fall into two buckets: “sampling errors” and “nonsampling errors”. The sampling errors are basically just statistical noise: because the BLS doesn’t survey every employer in the country, it has to extrapolate from a representative sample. But no sample is perfectly representative. As a result, the BLS says, the payrolls number will be off by more than 100,000 employees 10% of the time — more than once a year, on average. (The BLS puts it in stats-speak: “the 90% confidence interval for the monthly change in total nonfarm employment from the establishment survey is on the order of plus or minus 100,000.”)
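
A quick simulation shows what that confidence interval implies for the thought experiment above. The sketch assumes, purely for illustration, that each month’s sampling error is normally distributed and independent of the others; a 90% interval of plus or minus 100,000 then corresponds to a standard deviation of roughly 100,000 divided by 1.645, or about 61,000.

    import random
    from statistics import NormalDist

    TRUE_GAIN = 160_000                            # assume the economy really adds exactly this many jobs
    SIGMA = 100_000 / NormalDist().inv_cdf(0.95)   # ~60,800: the SD implied by a 90% CI of +/-100,000

    random.seed(0)
    reported = [TRUE_GAIN + random.gauss(0, SIGMA) for _ in range(12_000)]   # a long run of simulated prints

    share_off = sum(abs(r - TRUE_GAIN) > 100_000 for r in reported) / len(reported)
    print(round(share_off, 2))   # about 0.10: off by more than 100,000 roughly one month in ten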

Then, on top of the sampling errors come the nonsampling errors. Some survey returns are incomplete, for instance; more importantly, the BLS doesn’t have a firm grip on how many new firms were created in any given month, and how many closed.

Finally, there are various adjustments that the BLS makes to the numbers before they’re published — adjustments designed to make the numbers better, but which in theory could make them worse. The most notorious is the birth-death adjustment, designed to model those new firms being created and old ones dying; Barry Ritholtz, for one, has been a consistent critic of this adjustment for many years now. And then there are the seasonal adjustments (everybody looks at the seasonally-adjusted figures in this series). Those adjustments, shrouded in mystery, change all the time:

For both the household and establishment surveys, a concurrent seasonal adjustment methodology is used in which new seasonal factors are calculated each month using all relevant data, up to and including the data for the current month. In the household survey, new seasonal factors are used to adjust only the current month’s data. In the establishment survey, however, new seasonal factors are used each month to adjust the three most recent monthly estimates. The prior 2 months are routinely revised to incorporate additional sample reports and recalculated seasonal adjustment factors.
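
The mechanics of concurrent adjustment are easier to see in a toy example. The sketch below is nothing like the BLS’s actual procedure; it exists only to show why the two prior months get revised: adding one new month of raw data changes the estimated seasonal factors, which in turn changes the adjusted values of months whose raw data never changed.

    from collections import defaultdict
    from statistics import mean

    def seasonal_factors(series):
        # series: list of (calendar_month, raw_level) pairs.
        # Crude factors: each calendar month's average ratio to the overall mean.
        # Purely illustrative -- the BLS uses a far more sophisticated procedure.
        overall = mean(level for _, level in series)
        ratios = defaultdict(list)
        for month, level in series:
            ratios[month].append(level / overall)
        return {month: mean(r) for month, r in ratios.items()}

    def adjust_latest_three(series):
        # "Concurrent" adjustment: re-estimate the factors from all data to date,
        # then re-adjust the three most recent months with the new factors.
        factors = seasonal_factors(series)
        return [round(level / factors[month]) for month, level in series[-3:]]

    # Each new month of raw data shifts the factors slightly, so the previous two
    # months' adjusted estimates move even though their raw values did not.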

Now it’s important to note that the seasonal adjustments are not designed to compensate for either sampling or non-sampling errors. But there is something suspicious about how consistent the data series is — it’s the same kind of suspicious consistency that first tipped Harry Markopolos off to Bernie Madoff. The sampling errors alone should make the payrolls data series significantly more volatile than we’ve been seeing of late, and when you layer on non-sampling errors, the volatility should be even bigger.
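
To put a rough number on that expected volatility: the same illustrative assumptions as before (true job growth constant at 160,000 a month, independent normal sampling errors with a standard deviation of about 61,000, which is itself a simplification) imply that the reported headline number should typically jump around far more than the chart shows.

    import random
    from statistics import NormalDist, mean

    SIGMA = 100_000 / NormalDist().inv_cdf(0.95)   # ~60,800, as above

    random.seed(1)
    reported = [160_000 + random.gauss(0, SIGMA) for _ in range(10_000)]
    month_to_month = [abs(b - a) for a, b in zip(reported, reported[1:])]

    print(round(mean(month_to_month)))   # roughly 69,000

Even before layering on any non-sampling error, that is well above the 51,000 average of the past two years, let alone the much smaller changes of the past five months.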

I’ve written about this before. In June 2008, for instance, I blogged:

I’m still not convinced there isn’t something very weird going on here. The series just doesn’t seem to behave like one where the margin of error is 104,000. Is there some kind of massaging going on at the BLS before the data is released? Is the margin of error being overestimated?

These are reasonably important questions, because volatility in the monthly payrolls report can cause enormous market swings, partly because the number is normally so predictable. Why is it so predictable, when all the rules of sampling and statistics say that it shouldn’t be? One possibility lies in those seasonal adjustments: maybe the BLS number-crunchers somehow end up adjusting not only for seasonal variations but also for non-seasonal sampling errors. Or maybe it’s something else entirely. But either way, I’d be happier, in a weird way, if the payrolls number were a lot more volatile than it is.
