Opinion

Felix Salmon

When we can’t see the world for our phones

Felix Salmon
Dec 5, 2012 16:59 UTC

Mobile devices are wonderful things, just as cars are. But both can cost lives, as Robert Kolker explains in a great NY Mag article:

Cars still speed, drivers still drink, and jaywalkers still pay no attention, especially with smartphones to distract them. “I wonder if we’ve reached a critical mass where so many people are looking down and so many people are listening to headphones and so many drivers are texting that the probability of an inattentive walker and an inattentive driver is much greater,” says Sam Schwartz, a.k.a. Gridlock Sam, the transportation consultant and traffic guru.

Schwartz is speaking purely anecdotally, of course: there are no reliable statistics on such matters, and I doubt there ever will be. But it’s undeniable that when people are staring at a mobile phone, they become much more oblivious — and much more dangerous.

As smartphones make our streets increasingly dangerous, it’s incumbent upon local government to try to mitigate things as much as possible. A world where everybody’s looking at a screen cannot be a world where cars can careen around the city at 40mph without any risk of getting a speeding ticket. Fast cars are lethal things, and have no place anyway in a dense city where the interplay of automobiles, bicycles, and pedestrians constitutes a highly-complex dance. So we should slow them down, using all manner of traffic-calming measures. (My favorite, which seems to be used far too sparingly, is to put louvers on traffic lights so that cars can’t see the color of the light at the end of the block. When they can, they often speed up to make it through.)

But the distraction of smartphones isn’t just physically dangerous: in reducing our awareness of our surroundings, it has broader corrosive effects. Alastair Bland has a good post on the way that GPS devices mean that we stop needing to pay attention or to learn the kind of things that maps can teach us:

A study conducted in Tokyo found that pedestrians exploring a city with the help of a GPS device took longer to get places, made more errors, stopped more frequently and walked farther than those relying on paper maps. And in England, map sales dropped by 25 percent for at least one major printer between 2005 and 2011. Correlation doesn’t prove causation—but it’s interesting to note that the number of wilderness rescues increased by more than 50 percent over the same time period. This could be partly because paper maps offer those who use them a grasp of geography and an understanding of their environment that most electronic devices don’t. In 2008, the president of the British Cartographic Society, Mary Spence, warned that travelers—especially drivers—reliant on electronic navigation gadgets were focusing mainly on reaching a destination without understanding quite how they got there.

I love maps, be they old or new, paper-based or digital. And if I’m on a hike, or even just giving directions in a car, I feel lost without one. (The thing I hate most about Apple’s iOS maps, compared to Google’s, is that if you use them for directions, they won’t let you zoom out, examine your route, look at where you’re going next, and so on. Apple’s maps infantilize: they basically say “we know where you are, we’ll tell you what to do, don’t worry your pretty little head about it.” Google’s, by contrast, encourage you to explore your surroundings, look at alternative routes, and generally stay aware of where you are.)

Sadly, I’m in a tiny minority here. Most people have no love for maps at all, and positively dislike map-reading, avoiding it as much as possible and finding it quite difficult. For decades they were a regrettable necessity — but now that we don’t need them any more, we’re jettisoning them as fast as we can, and relying instead on passively receiving turn-by-turn directions from some algorithm. Those algorithms are undoubtedly helpful, but insofar as they allow us to stop paying attention to our surroundings, they also hasten our evolution into people whose entire experience of the world is intermediated by some kind of digital device. Which cannot possibly be healthy.

I’m a huge fan of Google’s driverless car, and I can’t wait for its broad adoption: it promises to reduce accidents and fatalities enormously. But it also promises to reduce our real-world horizons even further. I’m reminded of this anecdote from Chrystia Freeland:

The wife of one of America’s most successful hedge-fund managers offered me the small but telling observation that her husband is better able to navigate the streets of Davos than those of his native Manhattan. When he’s at home, she explained, he is ferried around town by a car and driver; the snowy Swiss hamlet, which is too small and awkward for limos, is the only place where he actually walks.

Our phones are cutting us off from the world: they’re turning us into mini versions of Eric Packer, the cloistered billionaire in Don DeLillo’s Cosmopolis (and played by Robert Pattinson in the movie). It’s unhealthy, and yet it’s also inevitable, and of course it carries much more upside than downside. I do wonder, however, what the logical conclusion is, and how the world will change as a result.

COMMENT

I think he’s complaining that GPS might get you where you are going, but you won’t have any sense of where you are and what else is in the area. For some people, this is not a problem. If you are destination oriented and don’t plan on exploring the area, GPS directions are great, but it is like using a calculator. We’ve been tutoring high school kids in math, and they could barely add, let alone multiply. That made things like factoring, solving equations and recognizing solutions hard for them. We tell them to put down their calculators. At first they struggle, but presently they start learning something about numbers.

My own problem with GPS directions, except for walking and mass transit, is that the directions are usually so crappy. Google almost always goes around the long way, but sometimes takes shortcuts through people’s houses and suggests using hiking trails, some heavily overgrown, as roads. Worse, they don’t give me directions early enough to start changing lanes or watching for landmarks. Suddenly saying “take the next right” isn’t always useful or even safe.

Posted by Kaleberg | Report as abusive

How not to rank US cities

Felix Salmon
Nov 29, 2012 20:29 UTC

More than half the world now lives in cities, and nearly all the growth and value creation in the world comes from what Richard Florida calls “megaregions” centered on large conurbations. So it’s really useful to see which cities are doing well, and which are not.

Sadly, that’s not what we’re getting. Instead, we get things like “The 20 Richest Metros in America”, from The Atlantic, based on the metric of personal income per capita, or — and this is much worse — CNBC’s First Annual Recovery Road Trip, which spent much of this week counting down the top three cities in America, using a highly convoluted and opaque methodology based loosely on stock-price appreciation.

The results of these exercises are always useless. Here’s a list of random cities! (This is the Atlantic’s list.)

  1. Bridgeport, CT
  2. Midland, TX
  3. San Francisco, CA

Here’s another list of random cities! (This is CNBC’s list.)

  1. Atlanta, GA
  2. Charlotte, NC
  3. Philadelphia, PA

Here, by contrast, is the actual league table for US cities, ranked by the size of their metropolitan economies:

  1. New York
  2. Los Angeles
  3. Chicago
  4. Washington
  5. Houston
  6. Dallas
  7. Philadelphia
  8. San Francisco
  9. Boston
  10. Atlanta
  11. Miami
  12. Seattle

As you’d expect, this league table changes only very slowly, but it does change: Philadelphia fell behind San Francisco, for instance, in 2008, but then regained its seventh-place position in 2009. And lower down the table there’s more movement: Pittsburgh has risen from 25th place to 22nd since 2007, for instance.

The problem is that because this league table doesn’t change very much, there’s not much news value in reporting it, and so reporters are drawn to gimmicks instead. Similarly, if you wanted to use public-company stock-market capitalization as a metric, then the obvious thing to do would simply be to list the cities with the largest market caps. My guess is that it would look very similar to the metropolitan GDP league table, and therefore, again, not be all that newsworthy. So instead, CNBC decided to go with the amount that stock prices have fluctuated, a methodology akin to simply throwing darts at a list of cities.
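For what it’s worth, that “obvious thing” is easy to compute: sum public-company market caps by headquarters metro and rank the totals. The sketch below is purely illustrative; the column names and numbers are invented, not CNBC’s or anyone else’s data.

```python
# Hypothetical sketch: rank cities by total public-company market cap,
# grouping companies by headquarters metro. All figures below are invented.
import pandas as pd

companies = pd.DataFrame({
    "hq_metro": ["New York", "San Francisco", "New York", "Houston", "Chicago"],
    "market_cap_bn": [210.0, 180.0, 95.0, 320.0, 60.0],
})

ranking = (companies.groupby("hq_metro")["market_cap_bn"]
           .sum()
           .sort_values(ascending=False))
print(ranking)
```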

If you want to do valuable reporting on the cities which are getting things right, then the thing to do is to look at the growth rates of America’s biggest metropolitan areas, see which areas are consistently outperforming, and ask why. It’s much less gimmicky, and much more useful.

COMMENT

“If you want to do valuable reporting on the cities which are getting things right, then the thing to do is to look at the growth rates of America’s biggest metropolitan areas, see which areas are consistently outperforming, and ask why.”

How is this more valuable, Mr. Salmon? Growth rates by what measure? Population growth? Jobs? Income? Immigrants from Asia? ALSO, valuable for whom? Immigrants? Middle-class conservatives? Upper-class liberals? Lower-income midwesterners? Conservative southerners? Humor me.

Posted by AJMango | Report as abusive

Why London is doomed to remain a financial capital

Felix Salmon
Nov 12, 2012 20:50 UTC

It’s amazing how much coverage a thinly-sourced press release can elicit:

[Image: capital.tiff]

If you look at the PDF with the numbers in it, there’s no indication at all of where the numbers being cited come from, or what exactly they’re measuring. The idea, here, is that we’re trying to measure “jobs in the wholesale financial service sector”, which will include some but not most lawyers and accountants, if that helps.

In any case, Ben Walsh helpfully turned the press release into a chart:

[Chart: center.png — Ben Walsh’s chart of the press-release jobs numbers]

This of course says basically nothing about which city if any is the financial capital of the world. If there were more wholesale finance jobs in Tampa than there are in London, that wouldn’t make Tampa an international financial capital.

What’s more, we can basically ignore the forecasts and extrapolations — everything in light grey. You think Hong Kong is going to add 70,000 new wholesale finance jobs in the next five years? Well, it might, I suppose, depending on a multitude of factors including what happens to money and banking regulations in mainland China.

The one thing that’s clear from this chart is that the 2008 financial crisis hit the US hardest, in terms of financial job cuts, while the European crisis which began a couple of years later was the point at which European jobs started getting shed. To be honest I’m a little surprised at how few jobs were lost in London during the 2008-9 crisis, but that might be a function of the way in which the Europeans tended to support their banks rather than encouraging them to consolidate, merge, and generally shrink.

In any case, London will be the capital of international finance for many, many years to come, regardless of how many jobs are shown on this chart. This is not necessarily a good thing. The press release features dark language on the subject of “onerous regulation and taxation in the UK”, as though we still live in a world where jurisdictions are well advised to compete on how laissez-faire their economic and regulatory policies are. (We know, now, how that works out.) If London remains a financial center — and it will — then that just means it will be much riskier than is appropriate for the capital city of a decent-sized European nation, and that Londoners will continue to suffer from having to live in an obscenely expensive city.

In a way, I wish that the scaremongering in this report were true: it’s about time that London’s center of gravity moved away from the ultra-rich. In order to do that, however, the city (and the City) would have to be much less of a magnet for international capital. And I don’t see that happening. Such magnets are cultural, and based in longstanding institutions; once the plutocrats have decided to put their roots down in London, it’s going to take a lot before they leave.

And really it’s the plutocrats who matter, along with a handful of large money-center banks. London could be an international financial capital even if it had only 50,000 people working in wholesale finance, rather than 250,000. When you’re dealing with trillions of dollars of capital flows, it’s never headcount that matters. Follow the actual money, not the paychecks.

COMMENT

“Europeans tended to support their banks…”

It is kind of silly to both complain about the ultra-rich and then boast about how you keep bailing them out whenever they have a bit of financial trouble.

Posted by NickDanger3 | Report as abusive

Should gas prices be soaring?

Felix Salmon
Nov 1, 2012 17:54 UTC

Traffic is flowing in New York again this morning, for four reasons. First is the ban on private cars entering the island if they’re carrying fewer than three people. Second is the subways, which have started working again, in a limited manner. Third is a noticeable increase in bicyclists, even between yesterday and today: I have real hope that Sandy might persuade a whole swath of new people that bike commuting is incredibly fast and easy. And then there’s a much more mundane fourth reason: people are running out of gas.

With much critical infrastructure still out, many gas stations don’t have electricity to pump gas, and most of the rest — at least in New York and New Jersey — have sold out, as people filled up not only their cars but any other vessels they could find. Gasoline is precious right now: it powers generators which pump out flooded buildings, as well as powering the one form of transportation which is capable of getting stuff from Brooklyn or New Jersey into Manhattan. As for when new supplies might arrive, that’s extremely vague, but the consensus seems to be Saturday.

There’s something self-fulfilling about gas shortages: they’re the crisis equivalent of a bank run. So long as everybody just goes about their day in a normal manner, refilling their tank only when they get low, everything goes smoothly. But when people start thinking that there might not be enough to go around, everybody panics and rushes to the stations: while shortages in New Jersey have real Sandy-related causes, shortages in places like Westchester are essentially the product of self-fulfilling fears.

The standard econowonk response to such things (see e.g. Yglesias, or for that matter Uber, which has reverted to “surge pricing”) is that if the market were just left to its own devices, none of this would happen. Prices fluctuate in response to changes in supply and demand, so when supply goes down and demand goes up, it’s only natural that they should rise to the point at which demand starts falling off and the two finally meet again. At that point, the gas stations will still sell most of their gas, but there won’t be massive lines at the pump, and there will always be gas — at a price — for those who really need it.
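To make that argument concrete, here is a toy version of the mechanism it relies on: with a downward-sloping demand curve and a fixed post-storm supply, there is always some price at which the two meet. Every number below is invented; this is the textbook mechanism, not a claim about actual gas markets.

```python
# Toy illustration of a market-clearing price (all numbers invented).
def demand(price):
    """Gallons demanded per day at a given price (linear, downward-sloping)."""
    return max(0.0, 100_000 - 8_000 * price)

def clearing_price(supply, lo=0.0, hi=50.0):
    """Bisect for the price at which demand equals the fixed daily supply."""
    for _ in range(60):
        mid = (lo + hi) / 2
        if demand(mid) > supply:
            lo = mid   # price too low: demand still exceeds supply
        else:
            hi = mid
    return (lo + hi) / 2

print(clearing_price(68_000))  # ~$4/gallon with normal supply
print(clearing_price(20_000))  # ~$10/gallon after the storm cuts supply
```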

It doesn’t help matters that such arguments tend to come from the kind of people who can afford to pay extra for their essentials. Crises always hit the poor worse than the rich, even when prices don’t rise: in my own NYC neighborhood, for instance, there is tragic human suffering right now, even as I have a warm and comfy office to go to, a group of rich friends with spare rooms, and enough money to pay for a hotel or Airbnb room should I need it. If the Lower East Side’s handful of open bodegas started price-gouging just because they had a captive audience, that would only exacerbate the situation — quite aside from the fact that it would also open them up to the very real risk of grievous bodily harm.

At times like this, the charitable impulse, of helping out where and when you can, is very strong: hence the articles with headlines like “Donate to Hurricane Sandy Victims with a Simple Text Message”. Even as gas stations are running dry, individuals across the northeast are siphoning out gas from their cars’ tanks to provide fuel for people who need it more than they do. They’re not charging a market-clearing price for doing so: in fact, they’re not charging any money at all. Multiplied thousands of times, such small individual acts of charity make a huge difference.

Meanwhile, during a crisis, the opportunity costs involved in sitting in a long line for gas actually fall substantially. Yglesias is right that price-gouging would reduce those costs, but they are the least of the damage that a storm like this causes. Much more important is the feeling that your neighbors are rallying together at a very hard time. If we all run out of gas, we’ll all run out of gas. But we won’t try to profiteer from that, and we’ll try to use what gas there is as effectively as possible, rather than just letting it be acquired by whoever happens to be most price-insensitive.

That’s why there’s something a bit distasteful about Uber’s insistence that the only way they can provide a good service these days is by charging more money. The cost of gas has not gone up: either their drivers can find the gas to drive people around or they can’t. The drivers, for their part, already make good money from Uber, whose prices are high; doubling those prices seems excessive, especially when there’s a strong impulse at times like these to help people out without charging any money at all.

I’m not taking a position here on price-gouging laws: while I don’t like the practice, I’m also not convinced that it should be made illegal. But the fact is that the benefits of price-gouging tend to accrue to a handful of merchants and the price-insensitive rich, while the costs are borne by those who can least afford it. ’Twas always thus, of course, but during a crisis, especially, it’s a good idea to try to minimize such mechanisms, rather than trying to encourage them.

COMMENT

@ceanf has a good point. While paying a couple extra dollars per gallon for a couple weeks won’t kill anybody’s budget (might cost them $50 or $100?), it would be a HUGE incentive to ship in supplies. You would have gasoline trucks lining up on the highways for the chance to dump their contents at that kind of a markup (in what is normally a slim-margin distribution business).

Both supply and demand are potentially elastic, and in the absence of a real shortage a modest shift in price ought to rapidly cure any imbalance.

Posted by TFF | Report as abusive

How resilient is New York City?

Felix Salmon
Oct 31, 2012 21:52 UTC

What a difference a day makes: yesterday, the streets of hurricane-devastated New York were largely empty; today, the electrified parts of the city are in a massive state of gridlock. It’s just as well the threatened Obama visit isn’t happening: traffic in Manhattan and most of Brooklyn is bad enough without it.

New York began as a small town based at the Battery, and slowly expanded northwards towards Wall Street (where the city wall was originally built) and beyond up to City Hall. It then expanded from there to the megalopolis it is today — but the heart of the city has always been, and will always be, downtown, where the East River and the Hudson River meet New York Harbor.

And right now, that heart — downtown New York — is a black hole. No electricity, no cell service, no heat in most buildings, no subway service, not even after it is restarted on a limited basis tomorrow. The temporary subway map is stark: everything just comes to a sudden halt at 34th Street, and at Borough Hall/MetroTech in Brooklyn. No electricity means no subways, and also no traffic lights.

In many ways, one of the most heartening lessons of Sandy was the utter lack of chaos in downtown Manhattan after the power went out. When the sun rose on Tuesday morning, cars, bikes, pedestrians, and emergency vehicles navigated the grid of streets efficiently and without rancor, and the amount of time it took to drive across town was if anything lower, with all the businesses shut, than it would normally be with all the traffic lights working. Public-spirited acts were everywhere: volunteers taking it upon themselves to direct traffic at major intersections; people bringing down power strips to the few outlets with generator-powered electricity; restaurants serving up their food for free to the local population; coffee shops jerry-rigging propane-based systems to give the people what they really need.

It turns out, as students of the Dutch woonerf system could tell you, that when drivers are forced to self-govern, rather than simply following the orders of speed limits and traffic lights, the system generally works extremely well, at least until traffic reaches a certain density. Similarly, the speed-limits-and-traffic-lights system also tends to work pretty well, until it doesn’t.

The idea is to maximize the number of person-miles per hour, especially during the morning and evening peaks. And there’s a real science to this: up until a certain point, if you add an extra car to the system, that increases person-miles per hour, just because that car contains people who are traveling a certain number of miles. But past that point, adding extra cars doesn’t help; instead, it hurts. You know how on a freeway traffic can be flowing smoothly and then suddenly grind to a halt for no particular reason? At that point, clearly, the capacity of the freeway to generate person-miles per hour plunges. And city traffic works the same way. In extremis, once you reach gridlock, no one is moving anywhere. And a world where people take half an hour to travel five blocks is clearly a world where they all would have been much better off just walking.
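That density effect is easy to see in a toy model. The sketch below uses the classic Greenshields assumption that speed falls linearly from free-flow speed to zero as the road fills up; the constants are made up, but the shape of the resulting curve is the whole point: throughput peaks at moderate density and collapses toward zero at gridlock.

```python
# Toy traffic-flow model (Greenshields-style; all constants are assumptions).
FREE_FLOW_SPEED = 30.0   # mph on an empty street
JAM_DENSITY = 200.0      # vehicles per lane-mile at total gridlock
PEOPLE_PER_CAR = 1.5     # average occupancy

def speed(density):
    """Speed falls linearly from free flow to zero at jam density."""
    return max(0.0, FREE_FLOW_SPEED * (1.0 - density / JAM_DENSITY))

def person_miles_per_hour(density):
    """Throughput = density * speed * occupancy (per lane-mile)."""
    return density * speed(density) * PEOPLE_PER_CAR

for d in range(0, 201, 25):
    print(f"{d:3d} veh/mile -> {person_miles_per_hour(d):6.0f} person-miles/hour")
# Throughput peaks at half the jam density, then falls to zero at gridlock:
# past a certain point, adding cars reduces the number of person-miles moved.
```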

This morning, and this evening too, New York — electrified New York, that is, above 39th Street — suffered some of the worst gridlock it has ever seen; in a press conference, mayor Michael Bloomberg said that “the streets just cannot handle the number of cars that are trying to come in”. This is the context in which cab company Uber is boasting that it is “doing our best to figure out ways to get more cars on the road.” They’re even losing money by doing so — and exacerbating congestion at the same time. Under the Uber model, a lone driver will drive an often-substantial distance to pick up what is usually a lone passenger, will then drop off that passenger, and repeat the procedure over the course of the day. The car rarely has more than two people in it, often only has one, but is driving around the city and contributing to congestion on a continuous basis. (In contrast to private cars, which at least have the decency to park themselves out of the way when their job is done.)

The city of New York has a much better idea when it comes to cabs. Yellow taxis are being encouraged to get passengers to share rides, while black cars are allowed to pick up street hails (and can also pick up additional passengers). On top of that, the mayor announced today, there is going to be a new rule in effect tomorrow: if you’re driving into Manhattan after 6am using any of the major bridges or tunnels, you’re going to need to have at least three people in your car. Otherwise, you won’t be allowed through.

There’s a problem with this policy, which is that people are going to find it relatively easy to “slug” their way into Manhattan, getting rides with drivers who otherwise wouldn’t be allowed in, only to be stuck when those same drivers happily leave the island without them. But the principle is a good one: there’s no way that private cars can possibly transport everybody who needs to come into Manhattan, and as a matter of public policy, everybody gains if the number of private cars in the city is reduced. This is not the first-best policy, but it is the easiest to implement immediately, and in a situation like the one we have right now, you can’t let the perfect be the enemy of the good.

If the East River subway tunnels remain closed for more than a few days, however, this stopgap approach is not going to work. Millions of people commute from Brooklyn to Manhattan every day, and the only way they can be accommodated is with those subway tunnels. Every day those subway tunnels are out of operation is a day that New York City is essentially not functioning. And by one estimate, it could take weeks or even months for those saltwater-flooded subway tunnels to reopen.

Which means that the bigger-picture lesson of Sandy is the importance of investment in infrastructure. Our electrical utility, unable to find $250 million to spend on things like submersible switches and moving transformers above ground, is making adjustments only gradually — with the results we saw on Monday night. And $250 million is small beer compared to the kind of money it would take to protect New York Harbor from hurricanes, and to protect those subway tunnels from Sandy-level storm surges. Still, $10 billion or even $50 billion spent up-front would not only be a large economic stimulus for New York, but would more than pay for itself if and when global warming means that more hurricanes hit this area.

Where would that sort of money come from? Cate Long has one suggestion:

The most obvious source of funding for these projects would be for the Federal Reserve to purchase public infrastructure bonds instead of the $40 billion a month of mortgage-backed securities it has been buying. The housing market is important, and keeping mortgage rates low is useful, but investing in public infrastructure is much more important for the nation now.

It’s not a bad idea: the Fed would do more long-term good for the country by buying infrastructure bonds than it would buying mortgage-backed securities. But there are problems with it, too: once the Fed stepped in, the chances are that no one else would lend, and private financing of public infrastructure would actually go down rather than up.* Maybe some kind of Treasury guarantee would be better, especially since these projects are fundamentally fiscal, rather than monetary, in nature.

In any case, there’s a clear public interest when it comes to investing in and coordinating our urban infrastructure. America’s cities, including New York, have been suffering from underinvestment for decades. Hurricanes happen sometimes; traffic jams happen every day. And some smart public expenditure could help minimize the damage that both of them cause.

*Update: Some confusion about what I was saying here. The point is that if the Fed started buying infrastructure bonds, that would in no way change the creditworthiness of those bonds. The price would rise and the yield would fall, as always happens when a large new buyer enters the market — but at that point the yield would be lower than the yield required by the market to make up for the credit risk in the bonds. So private-sector buy-and-hold investors would no longer buy them, and the Fed would, to a first approximation, be the only real-money buyer. As a result, the flow of private money into the infrastructure sector would go down.
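A stylized example of the footnote’s argument, with invented numbers: as a large new buyer pushes a bond’s price up, its yield falls below whatever level the credit risk alone would justify, and private buy-and-hold money loses its reason to own the bond.

```python
# Stylized yield-to-maturity illustration (all numbers invented).
def ytm(price, coupon=4.0, face=100.0, years=10, lo=0.0, hi=0.20):
    """Bisect for the yield that equates discounted cash flows to the price."""
    def pv(y):
        return (sum(coupon / (1 + y) ** t for t in range(1, years + 1))
                + face / (1 + y) ** years)
    for _ in range(100):
        mid = (lo + hi) / 2
        if pv(mid) > price:
            lo = mid   # present value too high: yield guess is too low
        else:
            hi = mid
    return (lo + hi) / 2

required_yield = 0.045       # say this is what the credit risk alone justifies
print(ytm(100.0))            # ~4.0% with the bond at par
print(ytm(108.0))            # ~3.1% after heavy buying lifts the price
# Once the market yield sits below required_yield, private real-money buyers
# have no reason to hold the bond, and the Fed becomes the marginal buyer.
```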

COMMENT

In theory, Fed purchase of bonds should reduce private-sector interest in the bonds. But that reduction is by less than the amount the Fed is adding to the markets.

In practice, I wonder if this still applies? There is a ridiculous amount of “dumb money” in the markets. Felix is proud that his money follows the market averages. Most large institutional money managers do the same, whether or not they admit to it. My guess is that most of the money in the system is attempting to track the index rather than attempting to make wise choices.

In such an environment, a new purchaser buying $1B of a particular issue might lead the “me too” purchasers to purchase an additional $2B or more of that issue, simply in an attempt to track the index?

It isn’t that simple, of course. It is never that simple. But is there another theory that explains the observed persistent irrationality in the markets? As best I can tell, people are buying bonds these days simply because they are there, not because they believe the risk/reward balance is favorable.

Posted by TFF | Report as abusive

The problems with measuring traffic congestion

Felix Salmon
Oct 17, 2012 18:25 UTC

Back in July, I gave a cautious welcome to TomTom’s congestion indices. The amount of congestion in any given city at any given time does have a certain randomness to it, but more data, and more public data, is always a good thing.

Or so I thought. I never did end up having the conversation with TomTom that I expected back in July, but I did finally speak to TomTom’s Nick Cohn last week, after they released their data for the second quarter of 2012.

In the first quarter, Edmonton saw a surprisingly large drop in congestion; in the second quarter it was New York which saw a surprisingly large rise in congestion. During the evening peak, the New York congestion index was 41% in the first quarter; that rose to 54% in the second quarter, helping the overall New York index rise from 17% to 25%. (The percentages are meant to give an indication of how much longer a journey will take, compared to the same journey in free-flowing traffic.) As a result, New York is now in 8th place on the league table of the most congested North American cities; it was only in 15th place last quarter, out of 26 cities overall.
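To be concrete about what those percentages mean: the index is just the extra travel time relative to free-flowing traffic, expressed as a percentage. The travel times below are hypothetical, chosen only to reproduce the quoted figures.

```python
# Hypothetical illustration of how the congestion index reads.
def congestion_index(actual_minutes, free_flow_minutes):
    """Percentage of extra travel time relative to free-flowing traffic."""
    return (actual_minutes / free_flow_minutes - 1.0) * 100.0

# A trip that takes 20 minutes in free-flowing traffic:
print(congestion_index(28.2, 20.0))  # ~41, like New York's Q1 evening peak
print(congestion_index(30.8, 20.0))  # ~54, like the Q2 evening peak
print(congestion_index(25.0, 20.0))  # 25, like the Q2 overall index
```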

So what’s going on here? A congestion index like this one serves two purposes. The first is to compare a city to itself, over time: is congestion getting better, or is it getting worse? The second is to compare cities to each other: is congestion worse in Washington than it is in Boston?

And it turns out that this congestion index, at least, is pretty useless on both fronts. First of all, there are measurement issues, of course. Cohn explained that when putting together the index, TomTom only looks at road segments where they have a large sample size of traffic speeds — big enough to give “statistically sound results”. And later on a spokeswoman explained that TomTom’s speed measurements turn out to validate quite nicely against other speed measures, from things like induction-loop systems.

But measuring speed on individual road segments is only the first step in measuring congestion. The next step is weighting the different road segments, giving most weight to the most-travelled bits of road. And that’s where TomTom data is much less reliable. After all, on any given stretch of road, cars generally travel at pretty much the same speed. You can take a relatively small sample of all cars, and get a very accurate number for what speeds are in that place. But if you want to work out where a city’s drivers drive the most and drive the least, then you need a much larger and much more representative sample.
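A toy example of why the weighting matters so much (my own numbers, not TomTom’s method): the same two road segments produce very different city-wide figures depending on whether you weight them by how much they are actually driven.

```python
# Toy illustration: a city-wide index as a weighted average of segment indices.
segments = [
    # (vehicle-miles travelled on the segment, segment congestion index in %)
    (50_000, 10),   # lightly congested road that carries most of the traffic
    (5_000,  80),   # badly congested road that few people actually use
]

def citywide_index(segs):
    total_vmt = sum(vmt for vmt, _ in segs)
    return sum(vmt * idx for vmt, idx in segs) / total_vmt

print(citywide_index(segments))             # ~16.4% with realistic weights
print(citywide_index([(1, 10), (1, 80)]))   # 45.0% if both are weighted equally
```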

And this is where TomTom faces its first problem: its sample is far from representative. Most of it comes from people who have installed TomTom navigation devices in their cars, and there’s no reason to believe those people drive in the same way that a city’s drivers as a whole do. Worse, most of the time TomTom only gets data when the devices are turned on and being used. Which means that if you have a standard school run, say, and occasionally have to make a long journey to the other side of town, then there’s a good chance that TomTom will ignore all your school runs and think that most of your driving is those long journeys. (TomTom is trying to encourage people to have their devices on all the time they drive, but I don’t think it’s had much success on that front.)

In general, TomTom is always going to get data weighted heavily towards people who don’t know where they’re going — out-of-towners, or drivers headed to unfamiliar destinations. That’s in stark contrast to the majority of city traffic, which is people who know exactly where they’re going, and what the best ways of getting there are. There might in theory be better routes for those people, and TomTom might even be able to identify those routes. But for the time being, I don’t think we can really trust TomTom to know where a city as a whole is driving the most.

I asked Cohn about the kind of large intra-city moves that we’ve seen in cities like Edmonton and New York. Did they reflect genuine changes in congestion, I asked, or were they just the natural variation that one sees in many datasets? Specifically, when TomTom comes out with a specific-sounding number like 25% for New York’s congestion rate, how accurate is that number? What are the error bars on it?

Cohn promised me that he’d get back to me on that, and today I got an email, saying that “unfortunately, we cannot provide you with a specific number”:

The Congestion Index is calculated at the road segment level, using the TomTom GPS speed measurements available for each road segment within each given time frame. As the sample size varies by road segment, time period and geography, it would be impossible to calculate overarching confidence levels for the Congestion Index as a whole.

It seems to me that if you don’t know what your confidence levels are, your index is pretty much useless. All of the cities on the list are in a pretty narrow range: the worst congestion is in Los Angeles, on 34%, while the least is in Phoenix, on 12%. If the error bars on those numbers were, say, plus-or-minus 10 percentage points, then the whole list becomes largely meaningless.
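For what it’s worth, putting rough error bars on an index like this is not conceptually hard if you have per-trip figures: a bootstrap over trips would do it. The sketch below is my own suggestion, with made-up data, not anything TomTom actually does.

```python
# Bootstrap confidence interval for a city-wide congestion figure (made-up data).
import random

def bootstrap_ci(trip_indices, n_boot=10_000, alpha=0.05):
    """trip_indices: per-trip congestion figures (% extra travel time)."""
    means = []
    for _ in range(n_boot):
        sample = random.choices(trip_indices, k=len(trip_indices))
        means.append(sum(sample) / len(sample))
    means.sort()
    return means[int(alpha / 2 * n_boot)], means[int((1 - alpha / 2) * n_boot)]

trips = [random.gauss(25, 15) for _ in range(500)]  # fake per-trip data
print(bootstrap_ci(trips))  # e.g. roughly (23.7, 26.3): "25%, give or take"
```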

And trying to compare congestion between cities is even more pointless than trying to measure changes in congestion within a single city, over time. As JCortright noted in my comments in July, measuring congestion on a percentage basis tends to make smaller, denser cities seem worse than they actually are. If you have a 45-minute commute in Atlanta, for instance, as measured on a congestion-free basis, and you’re stuck in traffic for an extra half an hour, then that’s 67% congestion. Whereas if you’re stuck in traffic for 15 minutes on a drive that would take you 15 minutes without traffic, that’s 100% congestion.
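Spelled out as arithmetic, that point (which is really JCortright’s) looks like this:

```python
# Identical absolute delays read as very different congestion percentages,
# depending on how long the free-flow trip is.
def congestion_pct(delay_minutes, free_flow_minutes):
    return delay_minutes / free_flow_minutes * 100.0

print(congestion_pct(30, 45))  # 66.7% -- sprawling city, long commute
print(congestion_pct(15, 15))  # 100.0% -- dense city, short commute
```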

Cohn told me that TomTom has no measure of average trip length, so he can’t adjust for that effect. And even he admitted that “comparing Istanbul to Stuttgart is a little strange”, even though that’s exactly what TomTom does, in its European league table. (Istanbul, apparently, has congestion of 57%, with an evening peak of 125%, while Stuttgart has congestion of 33%, with an evening peak of 70%.)

All of which says to me that the whole idea of congestion charging has a very big problem at its core. There’s no point in implementing a congestion charge unless you think it’s going to do some good — unless, that is, you think that it’s going to decrease congestion. But measuring congestion turns out to be incredibly difficult — and it’s far from clear that anybody can measure it precisely enough that random natural fluctuations and errors won’t dwarf the real-world effects of a charge.

When London increases its congestion charge, then, or when New York pedestrianizes Broadway in Times Square, or when any city does anything with the stated aim of helping traffic flow, don’t be disappointed if the city can’t come out and say with specificity whether the plan worked or not. Congestion is a tough animal to pin down and measure, and while it’s possible to be reasonably accurate if you’re just looking at a single intersection or stretch of road, it’s basically impossible to be accurate — or even particularly useful — if you’re looking at a city as a whole.

COMMENT

Auros is right. Between counting cars going past specific points, and accurate point-to-point times, you can make some pretty good estimates of congestion, even if you don’t know the distribution of cars along each route.

Posted by AngryInCali | Report as abusive

How to protect New York from disaster

Felix Salmon
Sep 11, 2012 18:51 UTC

Today, September 11, is a day that all New Yorkers become hyper-aware of tail risk — of some monstrous and tragic disaster appearing out of nowhere to devastate our city. And so it’s interesting that the NYT has decided to splash across its front page today Mireya Navarro’s article about the risk of natural disaster — flooding — in New York.

Beyond the article’s publication date, Navarro doesn’t belabor the point. But in terms of the amount of death and destruction caused, a nasty storm hitting New York City could actually be significantly worse than 9/11. Ask anybody in the insurance industry: a hurricane hitting New York straight-on is the kind of thing which reinsurance nightmares are made of. And as sea levels rise in coming decades, the risks will become much worse: remember, it’s flooding from storm surges which causes the real devastation, rather than simply things blowing over in high winds.

So, what can or should be done? One option is to basically attempt to wall New York City off from the Atlantic Ocean:

A 2004 study by Mr. Hill and the Storm Surge Research Group at Stony Brook recommended installing movable barriers at the upper end of the East River, near the Throgs Neck Bridge; under the Verrazano-Narrows Bridge; and at the mouth of the Arthur Kill, between Staten Island and New Jersey. During hurricanes and northeasters, closing the barriers would block a huge tide from flooding Manhattan and parts of the Bronx, Brooklyn, Queens, Staten Island and New Jersey, they said.

Needless to say, this solution is insanely expensive: the stated price tag right now is $10 billion — well over $1,000 per New Yorker — and I’m sure that if such a project ever happened, the final cost would be much higher. And such barriers don’t last particularly long, either. London built the Thames Barrier in 1984, and there’s already talk about when and how it should be replaced. And building a single barrier across the Thames is conceptually and practically a great deal simpler than trying to hold back the many different ways in which the island of Manhattan is exposed to the water.

What’s more, there’s an environmental cost associated with barriers, as well as a financial cost. Contrast that with the kind of things New York has actually been doing. They’re smaller, and much less robust. But they improve the environment, rather than making it worse. And they’re relatively cheap. For instance: installing more green roofs to absorb rainwater. Expanding wetlands, which can dampen a surging tide, even in highly urban places like Brooklyn Bridge Park. Even “sidewalk bioswales”. (I’m a little bit unclear myself on exactly what those are, but they sound very green.)

Adam Freed, the outgoing deputy director of New York’s Office of Long-Term Planning and Sustainability, talks about making “a million small changes”, while always bearing in mind that “you can’t make a climate-proof city”. That’s a timely idea: we can’t make New York risk-free, and it’s not clear that it would make sense to do so even if we could. After all, as we all learned 11 years ago today, it’s impossible to protect against each and every source of possible devastation.

Other cities have similar ideas:

In Chicago, new bike lanes and parking spaces are made of permeable pavement that allows rainwater to filter through it. Charlotte, N.C., and Cedar Falls, Iowa, are restricting development in flood plains. Maryland is pressing shoreline property owners to plant marshland instead of building retaining walls.

Still, all of this green development does feel decidedly insufficient in comparison to the enormous risks that New York is facing. I like the idea of a “resilience strategy”, but there are still a lot of binary outcomes here, especially when it comes to tunnels. Either tunnels flood or they don’t — and if they do, the consequences can be really, really nasty. Imagine a big flood which took out all of the subway and road tunnels into Manhattan, or even just the subway tunnels across New York Bay as well as the Holland Tunnel. As such a flood becomes more likely, the cost of protecting against it with some big engineering work — insofar as such a thing is possible — becomes increasingly justifiable.

And this is just depressing:

Consolidated Edison, the utility that supplies electricity to most of the city, estimates that adaptations like installing submersible switches and moving high-voltage transformers above ground level would cost at least $250 million. Lacking the means, it is making gradual adjustments, with about $24 million spent in flood zones since 2007.

Lacking the means? What is that supposed to mean? New York City has a credit rating of Aa1 from Moody’s; ConEd has a credit rating of A3. Interest rates are at all-time lows. There has never been a better time to invest a modest $250 million in helping to ensure that New York can continue to have power in the event of a storm. Doing lots of small things is all well and good, and I’m not convinced that the huge things are necessarily worthwhile — or, in the case of moving people to higher ground, even possible. But the medium-sized things? Those should be a no-brainer right now.

COMMENT

When Irene came shooting up the Harbor, just such a scenario was possible.
Had the storm slowed down or altered course in such a way as to intensify or prolong the surge, far more damage would have occurred. “Missed it by THAT much.”
That was the shot across the bow. No one seems to have taken notice.
Having taken 6 inches of flooding in my apartment due to underground storm surge (that’s water surging through the ground from the nearby harbor), I am not likely to forget any time soon. At least we didn’t get sewage or 3-foot-deep flooding like some of our neighbors did.
If you really want to scare the hell out of yourself, look into earthquake scenarios. Thousands of unreinforced masonry buildings throughout the city. Brooklyn sitting on a “glacial moraine”, essentially a loose jangly pile of rocks left over from the last ice age. It only takes a shaker of about 5 to 6 on the Richter scale to trigger the worst disaster this country has ever seen: liquefaction, collapsed buildings, extreme catastrophe.
Luckily the frequency of such an event around here is every 300-600 years.
It could happen tomorrow or not for a few hundred years. No one knows, we won’t see it coming, and there’s no way to properly prepare for it.
Do you feel lucky?

Posted by bryanX | Report as abusive

Chart of the day, party neighborhood edition

Felix Salmon
Sep 6, 2012 13:45 UTC

[Chart: Uber_weekend_model.jpg — the “Weekend Component” of Uber demand]

This chart comes from Uber data geek (that is, a data geek who works for Uber) Bradley Voytek. You might recognize it from a blog post of Voytek’s from back in June, headlined “Building the Perfect Uber Party City”.

[Chart: Uberdata_PCAdemandcurve.jpg — the Weekend and Weekday components overlaid]

What Voytek managed to do, back then, was create two “stereotyped patterns” of Uber car usage, based on something called principal component analysis. The first pattern he called the “Weekend Component”, and it’s the chart you see at the top. The second pattern he called the “Weekday Component”, and it looks very different indeed. (You can see the two overlaid on top of each other in the smaller chart above.)

Just by looking at these two curves — the red and the blue — Voytek can account for 93% of the variance in how demand for Uber cars fluctuates over time. Some cities and neighborhoods are more Weekday; other cities and neighborhoods are more Weekend. (Most, it turns out, are more Weekend than Weekday, at least when it comes to demand for Ubers.) But just about everywhere comes very close to being a mix of the two, rather than something altogether different.
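For the curious, here is a rough sketch of the kind of decomposition Voytek describes: treat each neighborhood’s week of hourly demand as one row, run a principal component analysis, and read the leading components as the stereotyped weekly patterns. The data below is random, and none of this is Uber’s actual pipeline; it just shows the mechanics.

```python
# Sketch of a PCA decomposition of weekly demand curves (random fake data).
import numpy as np

rng = np.random.default_rng(0)
n_neighborhoods, hours_per_week = 200, 24 * 7
demand = rng.poisson(5, size=(n_neighborhoods, hours_per_week)).astype(float)

# Center each hour-of-week column, then take principal components via SVD.
centered = demand - demand.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)

explained = (S ** 2) / (S ** 2).sum()
print("variance explained by the first two components:", explained[:2].sum())

weekend_component = Vt[0]        # a 168-hour "stereotyped pattern"
weekday_component = Vt[1]
scores = centered @ Vt[:2].T     # each neighborhood's mix of the two patterns
# A "Weekend Index" would then be some rescaling of scores[:, 0].
```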

And there are some neighborhoods which correlate very strongly with the weekend curve in particular: Voytek calls these the “party neighborhoods”. In his post, he picked out the most “weekendish” neighborhoods in each of Uber’s cities: North Beach in San Francisco, Soho in New York, and so on. But I was interested in the league table. So, via Voytek, here’s the top 50:

City | Neighborhood | Weekend Index
Chicago | Near North Side | 89.51
San Francisco | North Beach | 88.75
Boston | South Boston | 87.59
Boston | Back Bay-Beacon Hill | 86.37
NYC | Soho | 86.03
DC | Dupont Circle | 85.80
San Francisco | South Of Market | 85.67
San Francisco | Potrero Hill | 85.67
Chicago | Near West Side | 85.62
DC | Au-Tenleytown | 85.44
DC | Downtown | 85.08
DC | Georgetown | 84.90
NYC | Greenwich Village | 84.81
NYC | Tribeca | 84.71
DC | South West | 84.63
NYC | Financial District | 84.53
DC | Foggy Bottom | 84.48
Los Angeles | Santa Monica | 84.38
DC | Capitol Hill | 84.30
NYC | Clinton | 84.27
NYC | Chelsea | 83.59
Boston | East Cambridge | 83.29
NYC | Gramercy | 82.85
Los Angeles | Sawtelle | 82.84
San Francisco | Glen Park | 82.62
Los Angeles | Beverly Hills | 82.58
Boston | Central | 82.25
Boston | South End | 82.09
San Francisco | Chinatown | 81.98
Seattle | First Hill | 81.51
San Francisco | Financial District | 81.29
Seattle | Pioneer Square | 81.27
NYC | Midtown | 81.12
DC | Logan Circle | 81.06
San Francisco | Mission | 80.96
Los Angeles | Westwood | 80.89
NYC | Murray Hill | 80.88
DC | Brentwood | 80.83
San Francisco | Russian Hill | 80.62
San Francisco | Inner Sunset | 80.48
DC | Woodley Park | 80.38
NYC | Little Italy | 80.34
Seattle | Downtown | 80.32
Chicago | Lincoln Park | 80.24
Seattle | Capitol Hill | 80.12
Los Angeles | West Los Angeles | 80.03
Los Angeles | Mid City West | 80.02
NYC | Williamsburg | 80.01
Los Angeles | Mid Wilshire | 79.73
Boston | Fenway-Kenmore | 79.62

The Weekend Index, here, is the degree to which Uber usage in the neighborhood in question resembles the red line in Voytek’s chart. Obviously, it’s not all nights and weekends, but it’s skewed that way. Sunday nights are very slow, and then each successive night picks up a bit, and goes on a little bit later, until you get big peaks on Friday and Saturday nights. And across the board, nighttime usage is much heavier than daytime usage.

Voytek also sent me a list of the least “weekendish” neighborhoods that Uber covers. They’re pretty dull, as you might expect. What you might not expect is that the top six are all on the west coast. At the top of the list is Outer Richmond, in San Francisco, followed by Roosevelt and Madrona in Seattle, Visitacion Valley in San Francisco, Greenwood in Seattle, and Leschi in Seattle. Nowhere in New York or Boston or DC even makes the top ten.

The big league table, however, of the most weekendish neighborhoods, is fascinating — just because those tend to be particularly (to use a word that Thomas Frank hates) vibrant. These are the neighborhoods that other cities aspire to; they’re the areas that cause people to want to move to a city, and make them willing to pay high rents to live there.

And if you ever wondered what were the best and worst nights to go out, this Uber chart should answer your question very simply: the later you get in the week, the more crowded any given place is likely to become. That’s pretty intuitive, but it’s always good to see intuitions backed up with empirical data — and it’s easy to see why restaurants that close one or two days a week always choose Sundays or Mondays.

COMMENT

I can only speak for San Francisco, but the inclusion of some of those neighborhoods (for instance, Potrero Hill) speaks only to the lack of availability of cabs.

Posted by absinthe | Report as abusive

Traffic congestion datapoints of the day

Felix Salmon
Jul 10, 2012 16:12 UTC

TomTom has released its first congestion indices today, comparing 31 cities in Europe and 26 cities in the US and Canada. (They call that North America, which is a bit disappointing, because I’d dearly love to see how Mexico City compares to other North American cities, and it’s not on the list.) The rankings are interesting, but even more interesting, to me, is the way that the rankings have changed over the past year.

Consider Edmonton, for instance: a town in the midst of a massive oil boom, where road construction can’t even begin to keep up with population growth. That was obvious back in September 2009, in the city’s transportation master plan:

As Edmonton evolves from a mid-size prairie city to a large metropolitan area, it is inevitable that congestion levels will increase, particularly during peak periods. Physical, financial and community constraints in many areas make it unfeasible or even undesirable to build or expand roads to alleviate congestion.

TomTom doesn’t give data as far back as 2009, but at least we can see what direction the city is moving in. Last year, Edmonton had a congestion index of 24%, which means that on average, travel times were 24% longer than they would take if traffic were flowing freely. That meant Edmonton was the 8th most congested city on TomTom’s list. This year, the Edmonton congestion index has plunged to just 13%, placing Edmonton 23rd out of the 26 cities, with an enormous decrease particularly during the evening rush hour:

[Chart: edmonton.tiff — Edmonton congestion index by time of day]

I have no idea why traffic in Edmonton has improved so much over the past year; certainly I wouldn’t have been at all surprised if it had gotten worse rather than better. But the point here is that there’s an important stochastic element to congestion. Consider New York: in 2008, Mike Bloomberg proposed a congestion charge, which passed muster with city legislators but which was ultimately killed in Albany. Again, we don’t have data for what congestion was like in 2008. But between 2011 and 2012, congestion rates in New York overall fell from 23% to just 17%: a very impressive improvement. And today, New York is only the 15th most congested city on the list — behind metropolitan areas like Tampa, Ottawa, and San Diego.

What’s happened in New York to cause the drop in congestion? You can’t say higher gas prices, since those are a nationwide phenomenon, and don’t explain the drop in relative congestion. Plus, congestion in North America overall has stayed stable at 20% even as gas prices have risen. So if it’s not gas prices, what is it? Could it be all those bike lanes? Could it be that John Cassidy needs to eat some crow, and admit that bike lanes reduce congestion, rather than increasing it?

Perhaps: the jury’s still out. And maybe what we’re seeing here is more a function of random variation, and less a function of anything under the control of New York’s Department of Transportation.

What this report does tell me is that it’s going to be very difficult indeed to judge how effective any congestion-charging system is, just by looking at what happens to congestion after such a charge is introduced. I’m sure that if Edmonton had introduced a congestion charge at the beginning of 2011, the city would have claimed a huge amount of credit for the drop in congestion that resulted. But in fact, as we’ve seen, that drop in congestion would have happened anyway.

I’m planning to talk to the people at TomTom next week, and I’ll ask them whether they have any bright ideas when it comes to separating out causative factors for changes in congestion. In the meantime, we now at least have reasonably reliable league tables for the least pleasant cities to drive in. In North America, you want to avoid Los Angeles and Vancouver; in Europe, you want to avoid pretty much every major city. (Stockholm and London, with congestion charges, both have 27% congestion rates, putting them on a par with the very worst US cities.) But especially avoid driving in Warsaw, Rome, and Brussels. They’re even worse than LA.

Update: JCortright, in the comments, makes the excellent point that these numbers are much better at showing congestion changes within a city than they are at comparing congestion between cities. If you have a 45-minute commute in Atlanta, for instance, as measured on a congestion-free basis, and you’re stuck in traffic for an extra half an hour, then that’s 67% congestion. Whereas if you’re stuck in traffic for 15 minutes on a drive that would take you 15 minutes without traffic, that’s 100% congestion. So this methodology makes denser, smaller cities (like Europe’s) look worse.

COMMENT

I live in London. I’ve said it before, and I’ll say it again: scooters are the way to go. More practical for longer distances, and with filtering, excellent in cities. I effectively don’t experience any congestion at all in London. You have to live quite a long ways out, and positioned right beside stations on either end of your trip, for any rail-based public transport to be remotely competitive. Trips that take an hour+ owing to bus to and from tube station at one end normally take less than 30 minutes, and the primary thing slowing you down is red lights.

@JustinCormack: I can’t speak for Copenhagen, but central Amsterdam is very small and doesn’t really accommodate cars at all. Probably the traffic flow is structurally different – if you draw straight lines between start and finish, I’d bet fewer would cross in Amsterdam.

Posted by BarryKelly | Report as abusive