The problems with measuring traffic congestion

By Felix Salmon
October 17, 2012

Back in July, I gave a cautious welcome to TomTom’s congestion indices. The amount of congestion in any given city at any given time does have a certain randomness to it, but more data, and more public data, is always a good thing.

Or so I thought. I never did end up having the conversation with TomTom that I expected back in July, but I did finally speak to TomTom’s Nick Cohn last week, after they released their data for the second quarter of 2012.

In the first quarter, Edmonton saw a surprisingly large drop in congestion; in the second quarter it was New York which saw a surprisingly large rise in congestion. During the evening peak, the New York congestion index was 41% in the first quarter; that rose to 54% in the second quarter, helping the overall New York index rise from 17% to 25%. (The percentages are meant to give an indication of how much longer a journey will take, compared to the same journey in free-flowing traffic.) As a result, New York is now in 8th place on the league table of the most congested North American cities; it was only in 15th place last quarter, out of 26 cities overall.
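
To make the arithmetic concrete, here’s a minimal sketch of how an index like this works; the code and function name are my own illustration, not TomTom’s methodology.

```python
def congestion_index(actual_minutes, free_flow_minutes):
    """Extra travel time, as a percentage of the free-flow travel time."""
    return 100 * (actual_minutes - free_flow_minutes) / free_flow_minutes

# An overall index of 25% means a trip that takes 20 minutes in
# free-flowing traffic takes about 25 minutes on average.
print(congestion_index(25, 20))  # 25.0
```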

So what’s going on here? A congestion index like this one serves two purposes. The first is to compare a city to itself, over time: is congestion getting better, or is it getting worse? The second is to compare cities to each other: is congestion worse in Washington than it is in Boston?

And it turns out that this congestion index, at least, is pretty useless on both fronts. First of all, there are measurement issues, of course. Cohn explained that when putting together the index, TomTom only looks at road segments where they have a large sample size of traffic speeds — big enough to give “statistically sound results”. And later on a spokeswoman explained that TomTom’s speed measurements validate quite nicely against other speed measures, such as those from induction-loop systems.

But measuring speed on individual road segments is only the first step in measuring congestion. The next step is weighting the different road segments, giving most weight to the most-travelled bits of road. And that’s where TomTom’s data is much less reliable. After all, on any given stretch of road, cars generally travel at pretty much the same speed, so you can take a relatively small sample of all cars and get a very accurate number for speeds in that place. But if you want to work out where a city’s drivers drive the most and the least, then you need a much larger and much more representative sample.

And this is where TomTom faces its first problem: its sample is far from representative. Most of it comes from people who have installed TomTom navigation devices in their cars, and there’s no reason to believe those people drive in the same way that a city’s drivers as a whole do. Worse, most of the time TomTom only gets data when the devices are turned on and being used. Which means that if you have a standard school run, say, and occasionally have to make a long journey to the other side of town, then there’s a good chance that TomTom will ignore all your school runs and think that most of your driving is those long journeys. (TomTom is trying to encourage people to have their devices on all the time they drive, but I don’t think it’s had much success on that front.)
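
To see how much that weighting matters, consider a stylized example: even if every segment’s speed is measured perfectly, the city-wide number swings around depending on whose driving you think you’re sampling. The segments and figures below are made up for illustration.

```python
# Two road segments with perfectly measured congestion levels.
segment_congestion = {"highway": 0.40, "school_run_street": 0.10}

def citywide_index(weights):
    """Weighted average of per-segment congestion (weights sum to 1)."""
    return sum(w * segment_congestion[s] for s, w in weights.items())

# A sample dominated by long, unfamiliar trips (mostly highway miles)...
print(citywide_index({"highway": 0.8, "school_run_street": 0.2}))  # ~0.34

# ...versus the true mix of driving, dominated by short local trips.
print(citywide_index({"highway": 0.4, "school_run_street": 0.6}))  # ~0.22
```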

In general, TomTom is always going to get data weighted heavily towards people who don’t know where they’re going — out-of-towners, or drivers headed to unfamiliar destinations. That’s in stark contrast to the majority of city traffic, which is people who know exactly where they’re going, and what the best ways of getting there are. There might in theory be better routes for those people, and TomTom might even be able to identify those routes. But for the time being, I don’t think we can really trust TomTom to know where a city as a whole is driving the most.

I asked Cohn about the kind of large intra-city moves that we’ve seen in cities like Edmonton and New York. Did they reflect genuine changes in congestion, I asked, or were they just the natural variation that one sees in many datasets? Specifically, when TomTom comes out with a specific-sounding number like 25% for New York’s congestion rate, how accurate is that number? What are the error bars on it?

Cohn promised me that he’d get back to me on that, and today I got an email, saying that “unfortunately, we cannot provide you with a specific number”:

The Congestion Index is calculated at the road segment level, using the TomTom GPS speed measurements available for each road segment within each given time frame. As the sample size varies by road segment, time period and geography, it would be impossible to calculate overarching confidence levels for the Congestion Index as a whole.

It seems to me that if you don’t know what your confidence levels are, your index is pretty much useless. All of the cities on the list are in a pretty narrow range: the worst congestion is in Los Angeles, on 34%, while the least is in Phoenix, on 12%. If the error bars on those numbers were, say, plus-or-minus 10 percentage points, then the whole list becomes largely meaningless.
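
Here’s a quick back-of-the-envelope simulation of what error bars that size would do to the league table. The Los Angeles, New York and Phoenix figures come from the published table; the other two cities are placeholders, and the noise model is just an assumption.

```python
import random

# "True" congestion indices: LA, New York and Phoenix are published
# figures; City A and City B are hypothetical fillers.
true_index = {"Los Angeles": 34, "City A": 29, "New York": 25,
              "City B": 19, "Phoenix": 12}

def league_table(error_sd):
    """Re-rank the cities after adding Gaussian measurement noise."""
    noisy = {city: score + random.gauss(0, error_sd)
             for city, score in true_index.items()}
    return sorted(noisy, key=noisy.get, reverse=True)

random.seed(0)
# With error bars of roughly plus-or-minus 10 points (a standard
# deviation of about 5), the ordering reshuffles from draw to draw.
for _ in range(3):
    print(league_table(5))
```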

And trying to compare congestion between cities is even more pointless than trying to measure changes in congestion within a single city, over time. As JCortright noted in my comments in July, measuring congestion on a percentage basis tends to make smaller, denser cities seem worse than they actually are. If you have a 45-minute commute in Atlanta, for instance, as measured on a congestion-free basis, and you’re stuck in traffic for an extra half an hour, then that’s 67% congestion. Whereas if you’re stuck in traffic for 15 minutes on a drive that would take you 15 minutes without traffic, that’s 100% congestion.
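
Put those two commutes side by side and the distortion jumps out: the Atlanta driver loses twice as much time, yet scores lower on the index. A trivial sketch:

```python
# (free-flow minutes, minutes of delay) for the two commutes above
commutes = {"Atlanta": (45, 30), "dense city": (15, 15)}

for city, (free_flow, delay) in commutes.items():
    print(f"{city}: {delay} minutes lost, {100 * delay / free_flow:.0f}% congestion")

# Atlanta: 30 minutes lost, 67% congestion
# dense city: 15 minutes lost, 100% congestion
```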

Cohn told me that TomTom has no measure of average trip length, so he can’t adjust for that effect. And even he admitted that “comparing Istanbul to Stuttgart is a little strange”, even though that’s exactly what TomTom does, in its European league table. (Istanbul, apparently, has congestion of 57%, with an evening peak of 125%, while Stuttgart has congestion of 33%, with an evening peak of 70%.)

All of which says to me that the whole idea of congestion charging has a very big problem at its core. There’s no point in implementing a congestion charge unless you think it’s going to do some good — unless, that is, you think that it’s going to decrease congestion. But measuring congestion turns out to be incredibly difficult — and it’s far from clear that anybody can actually do it in a way that random natural fluctuations and errors won’t dwarf the real-world effects of a charge.

When London increases its congestion charge, then, or when New York pedestrianizes Broadway in Times Square, or when any city does anything with the stated aim of helping traffic flow, don’t be disappointed if the city can’t come out and say with specificity whether the plan worked or not. Congestion is a tough animal to pin down and measure, and while it’s possible to be reasonably accurate if you’re just looking at a single intersection or stretch of road, it’s basically impossible to be accurate — or even particularly useful — if you’re looking at a city as a whole.
