
A big red dog explains the fiscal cliff

Felix Salmon
Dec 12, 2012 20:11 UTC

The main problem with trying to explain the fiscal cliff, as I see it, is that people get far too caught up in the details — tax deductions, tax hikes, spending cuts, debt ceilings, and the like. Which are all important, but they’re not fundamentally what the austerity bomb is about. Rather, the reason that everybody’s worried about the effects of the fiscal cliff is simple Keynesian mathematics: if we cut spending and raise taxes, that means less economic activity — and a nasty recession, just when we can least afford it.

So this video is my attempt — with a big red dog, and Superman, and Batman — to get back to what really matters, and to try to underscore something quite interesting, which has been lost in the politics, which is that in terms of the deficit, both Obama and Boehner want something very similar. The deficit is big now — about $1.1 trillion — and they both want it to come down by roughly $200 billion, which is much less than what will happen automatically if they do nothing. In that case, the deficit would plunge by a disastrous $500 billion or so.
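The arithmetic here is worth making explicit. A quick sketch using the post’s round numbers (all figures in billions of dollars; nothing here is an official projection):

```python
# Back-of-the-envelope deficit arithmetic, using the post's rough numbers
# (billions of dollars).
current_deficit = 1100   # deficit now: about $1.1 trillion

negotiated_cut = 200     # roughly what both Obama and Boehner want
automatic_cut = 500      # roughly what the cliff imposes if they do nothing

negotiated_deficit = current_deficit - negotiated_cut   # ~$900 billion
cliff_deficit = current_deficit - automatic_cut         # ~$600 billion

# The gap between the two paths is the extra fiscal drag the cliff adds.
extra_drag = automatic_cut - negotiated_cut
print(negotiated_deficit, cliff_deficit, extra_drag)    # 900 600 300
```

Either way the deficit falls; the fight is only over whether it falls by $200 billion or by $500 billion in a single year.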

Deficits are a good thing, in terms of economic stimulus, and taking away a large deficit too quickly is a great way of causing a recession. I do understand that at some point deficits become a bad thing, especially if the bond markets decide that there’s a real question mark over whether all that borrowing can ever be repaid. But we’re not at that point yet. So it falls to Barack Obama and John Boehner to come together to prevent an entirely avoidable recession. They can do it, and they will do it. But we’ll have to suffer a lot of Sturm und Drang — not to mention gimmicky YouTube videos — before we get there.

COMMENT

So… The federal government isn’t part of the general economy? Those lenders will forever be willing to pump $800B a year into the economy?

Perhaps a better model would be one of a “liquidity overhang”. The town is flooding, so we are pumping water uphill to a massive reservoir sitting above the town. If we stop, our toes will get wet. So we keep the pumps going at $800B a year… We’ve read of dams breaking elsewhere, of towns getting washed away in the flood, but we know that THIS dam is different and will never break. So we don’t even bother checking for cracks and keep pumping away.

How does that story end? One possibility is for Clifford to swoop in and drop a grenade on the pumps. Yes, our feet will get wet. The town will flood a bit. But we’ve survived that before and can survive it again.

Another possibility is to hope that the waters threatening the town will eventually recede, allowing us to begin draining the reservoir without flooding. But it is still raining…

We know the third possibility, we just don’t care to face it. Too scary, far worse than the first option.

Posted by TFF | Report as abusive

Should gas prices be soaring?

Felix Salmon
Nov 1, 2012 17:54 UTC

Traffic is flowing in New York again this morning, for four reasons. First is the ban on private cars entering the island if they’re carrying fewer than three people. Second is the subways, which have started working again, in a limited manner. Third is a noticeable increase in bicyclists, even between yesterday and today: I have real hope that Sandy might persuade a whole swath of new people that bike commuting is incredibly fast and easy. And then there’s a much more mundane fourth reason: people are running out of gas.

With much critical infrastructure still out, many gas stations don’t have electricity to pump gas, and most of the rest — at least in New York and New Jersey — have sold out, as people filled up not only their cars but any other vessels they could find. Gasoline is precious right now: it powers generators which pump out flooded buildings, as well as powering the one form of transportation which is capable of getting stuff from Brooklyn or New Jersey into Manhattan. As for when new supplies might arrive, that’s extremely vague, but the consensus seems to be Saturday.

There’s something self-fulfilling about gas shortages: they’re the crisis equivalent of a bank run. So long as everybody just goes about their day in a normal manner, refilling their tank only when they get low, everything goes smoothly. But when people start thinking that there might not be enough to go around, everybody panics and rushes to the stations: while shortages in New Jersey have real Sandy-related causes, shortages in places like Westchester are essentially the product of self-fulfilling fears.

The standard econowonk response to such things (see e.g. Yglesias, or for that matter Uber, which has reverted to “surge pricing”) is that if the market were just left to its own devices, none of this would happen. Prices fluctuate in response to changes in supply and demand, so when supply goes down and demand goes up, it’s only natural that they should rise to the point at which demand starts falling off and the two finally meet again. At that point, the gas stations will still sell most of their gas, but there won’t be massive lines at the pump, and there will always be gas — at a price — for those who really need it.
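The econowonk mechanism can be sketched with a toy model (every number below is invented for illustration; real gasoline demand curves look nothing this clean):

```python
# Toy market-clearing sketch: a supply shock raises the price until
# quantity demanded falls to match the reduced supply, so the market
# clears without lines at the pump. All figures are hypothetical.
def demand(price):
    # Invented linear demand: fewer gallons wanted as price rises.
    return max(0.0, 1000 - 100 * price)

normal_supply = 600   # gallons available on an ordinary day
storm_supply = 300    # gallons available after the storm

def clearing_price(supply):
    # Price at which demand(price) equals supply, from the linear form above.
    return (1000 - supply) / 100

print(clearing_price(normal_supply))  # 4.0
print(clearing_price(storm_supply))   # 7.0
```

In this stylized world, halving supply pushes the clearing price from $4 to $7, and anyone willing to pay $7 can always find gas. The rest of the post is about what that model leaves out.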

It doesn’t help matters that such arguments tend to come from the kind of people who can afford to pay extra for their essentials. Crises always hit the poor worse than the rich, even when prices don’t rise: in my own NYC neighborhood, for instance, there is tragic human suffering right now, even as I have a warm and comfy office to go to, a group of rich friends with spare rooms, and enough money to pay for a hotel or Airbnb room should I need it. If the Lower East Side’s handful of open bodegas started price-gouging just because they had a captive audience, that would only exacerbate the situation — quite aside from the fact that it would also open them up to the very real risk of grievous bodily harm.

At times like this, the charitable impulse, of helping out where and when you can, is very strong: hence the articles with headlines like “Donate to Hurricane Sandy Victims with a Simple Text Message”. Even as gas stations are running dry, individuals across the northeast are siphoning out gas from their cars’ tanks to provide fuel for people who need it more than they do. They’re not charging a market-clearing price for doing so: in fact, they’re not charging any money at all. Multiplied thousands of times, such small individual acts of charity make a huge difference.

Meanwhile, during a crisis, the opportunity costs involved in sitting in a long line for gas actually fall substantially. Yglesias is right that price-gouging would reduce those costs, but they are the least of the damage that a storm like this causes. Much more important is the feeling that your neighbors are rallying together at a very hard time. If we all run out of gas, we’ll all run out of gas. But we won’t try to profiteer from that, and we’ll try to use it in as effective a way as possible, rather than just letting it get acquired by whoever happens to be most price-insensitive.

That’s why there’s something a bit distasteful about Uber’s insistence that the only way they can provide a good service these days is by charging more money. The cost of gas has not gone up: either their drivers can find the gas to drive people around or they can’t. The drivers, for their part, already make good money from Uber, whose prices are high; doubling those prices seems excessive, especially when there’s a strong impulse at times like these to help people out without charging any money at all.

I’m not taking a position here on price-gouging laws: while I don’t like the practice, I’m also not convinced that it should be made illegal. But the fact is that the benefits of price-gouging tend to accrue to a handful of merchants and the price-insensitive rich, while the costs are borne by those who can least afford it. ‘Twas always thus, of course, but during a crisis, especially, it’s a good idea to try to minimize such mechanisms, rather than trying to encourage them.

COMMENT

@ceanf has a good point. While paying a couple extra dollars per gallon for a couple weeks won’t kill anybody’s budget (might cost them $50 or $100?), it would be a HUGE incentive to ship in supplies. You would have gasoline trucks lining up on the highways for the chance to dump their contents at that kind of a markup (in what is normally a slim-margin distribution business).

Both supply and demand are potentially elastic, and in the absence of a real shortage a modest shift in price ought to rapidly cure any imbalance.

Posted by TFF | Report as abusive

What would happen if investments in people succeeded?

Felix Salmon
Oct 22, 2012 20:54 UTC

Daniel Friedman has a question:

There are now a few companies — Upstart and Lumni to name two — trying to create a market where we can invest in people in exchange for a percentage of future income. If they succeed and reach scale, will student debt problems be alleviated? Will our future incomes be reliably predicted by newly developed algorithms? Will it encourage increased risk-taking?

The first thing to note here is that this is a market which does not exist. Lumni tried to gin up some interest in it and failed; at this point they won’t even return my calls or emails. Upstart has a bunch of high-profile venture backing (Google Ventures, Kleiner Perkins), and has even managed to launch a pilot with a few hand-picked students and investors. But it’s still very, very early days. Equity-in-people is an idea which comes around occasionally, but so far it has never really taken off, and there’s not much reason to believe that this time is different.

But still, it’s an interesting question: what would the consequences be if Upstart, and/or companies like it, became big and successful?

Firstly, Friedman asks, would student-debt problems be alleviated? The answer to this one, I think, is quite clearly no, for two reasons. The main reason is that tuition fees are induced, in much the same way that traffic is. If you build a nice big new road, traffic will appear to fill it up. And if you build a nice new source of funding for students, universities will raise their tuition fees so as to capture as much of that funding as possible. Yes, high student debt is a consequence of high tuition fees — but at the same time, high tuition fees are a consequence of the ready availability of student loans.

It’s worth pointing out here that Upstart’s model gives out funds at the end of a student’s four-year education, rather than at the beginning. If the student has lots of debt, and wants less debt, then she can use the funds to pay down some of that debt. But many students will choose to spend the money in other ways, simply layering the new equity funding on top of their old debt funding. Alternatively, if the expectation is that students will use their Upstart funds to pay down debt, then that only means, at the margin, that lenders will feel free to extend even more credit in the run-up to the early cash-out.

The other reason why student-debt problems won’t be alleviated is that Upstart funds are debt, if you look hard enough. The documentation is subject to change, but at heart would-be Upstarts are always going to find something along the lines of this in any Upstart contract:

YOU UNDERSTAND AND ACKNOWLEDGE THAT THE FUNDING AMOUNT YOU RECEIVE FROM US, IS A DEBT TO US.

The debt is not a typical loan, of course, and it can end up being repaid at less than face value, if the Upstart enters a poorly-remunerated career. But it’s still a debt. And if the student ends up defaulting on that debt, the penalties are generally enormous. When people take out loans, they nearly always have the best of intentions, and don’t think there’s any way they will end up defaulting — especially if the repayment amounts are tied to their income. But defaults happen. And when they do, anybody who has defaulted to Upstart is going to find out the hard way that Upstart loans are the most expensive of them all.

So at the margin, these kinds of schemes are likely to exacerbate the student-loan problem: they’re hurting, rather than helping. Remember that ultimately the money for these schemes is coming from investors, who are taking a substantial risk and being promised returns significantly greater than anything seen in the debt markets. Remember too that a significant portion of the students will end up repaying less than they’re originally given — because they go into low-paying occupations, perhaps, or maybe because they just decide they’d rather settle down and have a family instead of a career. As a result, those students who do go into decently-paying careers will end up repaying sometimes double or triple the amount of money they were originally given. That doesn’t seem like an alleviation of student debt problems to me — not when we’re talking about sums in the $30,000 range. It sounds like piling an extra $60,000 of liabilities on top of all the existing student loans you might have.
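To see how repaying “double or triple” can come about, here’s a sketch of the income-share arithmetic. The 5% share and 10-year term are my invented parameters for illustration, not Upstart’s actual contract terms:

```python
# Hypothetical income-share repayment arithmetic (share and term invented):
# a funded graduate pays a fixed share of income for a fixed number of
# years, so a well-paid career repays a multiple of the funding.
funding = 30_000          # up-front amount, per the post's ballpark
income_share = 0.05       # hypothetical 5% of annual income
term_years = 10           # hypothetical repayment term

def total_repaid(annual_income):
    # Total paid over the term, assuming flat income (a simplification).
    return income_share * annual_income * term_years

print(total_repaid(40_000))              # 20000.0: repays less than funded
print(total_repaid(120_000))             # 60000.0: repays double the funding
print(total_repaid(120_000) / funding)   # 2.0
```

The cross-subsidy is the whole point of the structure: the low earners’ shortfall, plus the investors’ required return, has to come out of the high earners’ paychecks.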

Friedman’s second question is whether companies like Upstart will start being able to predict students’ future incomes. That’s certainly something that Upstart is spending quite a lot of time on, but I think the effort is premature: first you need to find out whether students are happy repaying the funds at all. And I, for one, don’t think that Upstart’s algorithms are going to be particularly good at predicting incomes.

For one thing, the students with predictable incomes — the doctors and lawyers and the like — are never going to accept Upstart’s offer. For another, there will always be a certain number of students who are looking to game the system. All these schemes have an adverse-selection problem: Upstart funding is going to be much more attractive to students who are pretty sure they’re never going to make much money. If you aspire to being a part-time primary-school substitute teacher, for instance, while spending the rest of your time working on the Great American Novel, then Upstart might be great for you — and it’s unlikely that any algorithm would be able to capture that.

What’s more, the biggest risk, from Upstart’s point of view, is that they’ll end up funding a student who marries someone very successful, gets pregnant pretty early on, and then never returns to the workforce. This still happens frequently enough that an honest algorithm would charge higher interest rates to women than to men. As a result, the algorithm can’t actually be honest.

Finally, there’s the group of students who want to follow the Silicon Valley dream and become entrepreneurs straight out of college. That’s generally a great time to found a company, and Upstart (slogan: “The startup is you”) is positively encouraging such people. But equally, the chances that any given person will have success as an entrepreneur are pretty much random: such things come mostly down to luck, and can’t be reduced to some algorithm. Especially when the really successful entrepreneurs are going to be the ones who can afford lawyers to shield their income from Upstart.

There is no algorithm which can tell you who’s going to be a successful entrepreneur and who isn’t. Some college dropouts are college dropouts; other college dropouts are Steve Jobs, or Bill Gates, or Mark Zuckerberg. Being dyslexic is generally not a great thing when it comes to total lifetime income, and yet dyslexics are massively overrepresented among highly-successful CEOs. If you’re looking for outliers — and successful entrepreneurs, pretty much by definition, are all outliers — then you can’t really start generalizing.

Friedman finishes by asking if this model will encourage increased risk-taking. I think the answer there depends on what you mean by risk-taking. It certainly encourages people to accept low-paying jobs with payoffs which get put into corporate shell vehicles or which don’t end up cashing out for a decade. And at the other end of the spectrum, it encourages people to take low-paying jobs which pay non-financial dividends: ski instructor, say. But in general, I don’t think that a new form of private income tax is a great way to encourage innovation. Generally, if you start taxing something, you get less of it, not more of it. And I see no reason why this model should work out any differently.

COMMENT

Private Equity: the way for smart, rich lazy motherflippers to get even richer.

Public investment in people? No percentage in that. Communism by another name.

Remember: passive income is massive income!

Posted by Eericsonjr | Report as abusive

When Taleb met Davies

Felix Salmon
Oct 19, 2012 21:59 UTC

This morning, Nassim Taleb returned to Twitter, posting one of the technical appendices to his new book. And immediately he got into a wonderfully wonky twitterfight/conversation with Daniel Davies.

I don’t pretend to understand all the subtleties of the conversation between the two, but, for Tom Foster, here’s an attempt. Davies has promised a Crooked Timber post on other parts of the appendix; I’m really looking forward to that.


COMMENT

Actually a positive first derivative for the utility function (harm is bad, etc.) + stochasticity giving a nonzero probability for the state may be sufficient to reverse the allocation.

Posted by NassimNTaleb | Report as abusive

How to protect New York from disaster

Felix Salmon
Sep 11, 2012 18:51 UTC

Today, September 11, is a day that all New Yorkers become hyper-aware of tail risk — of some monstrous and tragic disaster appearing out of nowhere to devastate our city. And so it’s interesting that the NYT has decided to splash across its front page today Mireya Navarro’s article about the risk of natural disaster — flooding — in New York.

Beyond the article’s publication date, Navarro doesn’t belabor the point. But in terms of the amount of death and destruction caused, a nasty storm hitting New York City could actually be significantly worse than 9/11. Ask anybody in the insurance industry: a hurricane hitting New York straight-on is the kind of thing which reinsurance nightmares are made of. And as sea levels rise in coming decades, the risks will become much worse: remember, it’s flooding from storm surges which causes the real devastation, rather than simply things blowing over in high winds.

So, what can or should be done? One option is to basically attempt to wall New York City off from the Atlantic Ocean:

A 2004 study by Mr. Hill and the Storm Surge Research Group at Stony Brook recommended installing movable barriers at the upper end of the East River, near the Throgs Neck Bridge; under the Verrazano-Narrows Bridge; and at the mouth of the Arthur Kill, between Staten Island and New Jersey. During hurricanes and northeasters, closing the barriers would block a huge tide from flooding Manhattan and parts of the Bronx, Brooklyn, Queens, Staten Island and New Jersey, they said.

Needless to say, this solution is insanely expensive: the stated price tag right now is $10 billion — well over $1,000 per New Yorker — and I’m sure that if such a project ever happened, the final cost would be much higher. And such barriers don’t last particularly long, either. London built the Thames Barrier in 1984, and there’s already talk about when and how it should be replaced. And building a single barrier across the Thames is conceptually and practically a great deal simpler than trying to hold back the many different ways in which the island of Manhattan is exposed to the water.

What’s more, there’s an environmental cost associated with barriers, as well as a financial cost. That cuts against the kind of things New York has actually been doing, which are smaller and much less robust, but which are relatively cheap and improve the environment rather than making it worse. For instance: installing more green roofs to absorb rainwater. Expanding wetlands, which can dampen a surging tide, even in highly-urban places like Brooklyn Bridge Park. Even “sidewalk bioswales”. (I’m a little bit unclear myself on exactly what those are, but they sound very green.)

Adam Freed, the outgoing deputy director of New York’s Office of Long-Term Planning and Sustainability, talks about making “a million small changes”, while always bearing in mind that “you can’t make a climate-proof city”. That’s a timely idea: we can’t make New York risk-free, and it’s not clear that it would make sense to do so even if we could. After all, as we all learned 11 years ago today, it’s impossible to protect against each and every source of possible devastation.

Other cities have similar ideas:

In Chicago, new bike lanes and parking spaces are made of permeable pavement that allows rainwater to filter through it. Charlotte, N.C., and Cedar Falls, Iowa, are restricting development in flood plains. Maryland is pressing shoreline property owners to plant marshland instead of building retaining walls.

Still, all of this green development does feel decidedly insufficient in comparison to the enormous risks that New York is facing. I like the idea of a “resilience strategy”, but there are still a lot of binary outcomes here, especially when it comes to tunnels. Either tunnels flood or they don’t — and if they do, the consequences can be really, really nasty. Imagine a big flood which took out all of the subway and road tunnels into Manhattan, or even just the subway tunnels across New York Bay as well as the Holland Tunnel. As such a flood becomes more likely, the cost of protecting against it with some big engineering work — insofar as such a thing is possible — becomes increasingly justifiable.

And this is just depressing:

Consolidated Edison, the utility that supplies electricity to most of the city, estimates that adaptations like installing submersible switches and moving high-voltage transformers above ground level would cost at least $250 million. Lacking the means, it is making gradual adjustments, with about $24 million spent in flood zones since 2007.

Lacking the means? What is that supposed to mean? New York City has a credit rating of Aa1 from Moody’s; ConEd has a credit rating of A3. Interest rates are at all-time lows. There has never been a better time to invest a modest $250 million in helping to ensure that New York can continue to have power in the event of a storm. Doing lots of small things is all well and good, and I’m not convinced that the huge things are necessarily worthwhile — or even, in the case of moving people to higher ground, possible. But the medium-sized things? Those should be a no-brainer right now.

COMMENT

When Irene came shooting up the Harbor, just such a scenario was possible.
Had the storm slowed down or altered course in such a way to intensify/prolong the surge, far more damage would have occurred. “Missed it by THAT much.”
That was the shot across the bow. No one seems to have taken notice.
Having taken 6 inches of flooding in my apartment due to underground storm surge (that’s water surging through the ground from the nearby harbor), I am not likely to forget any time soon. At least we didn’t get sewage or 3-foot-deep flooding like some of our neighbors did.
If you really want to scare the hell out of yourself, look into earthquake scenarios. Thousands of unreinforced masonry buildings throughout the city. Brooklyn sitting on a “glacial moraine”, essentially a loose jangly pile of rocks left over from the last ice age. It only takes a shaker of about 5 to 6 on the Richter scale to trigger the worst disaster this country has ever seen: liquefaction, collapsed buildings, extreme catastrophe.
Luckily the frequency of such an event around here is every 300-600 years.
It could happen tomorrow or not for a few hundred years. No one knows, we won’t see it coming, and there’s no way to properly prepare for it.
Do you feel lucky?

Posted by bryanX | Report as abusive

Chart of the day, employment-status edition

Felix Salmon
Sep 7, 2012 14:59 UTC

[Chart: employment status]

There are two ways in which the national employment situation influences the election. The first is, simply, the effect of unemployment and underemployment on America’s animal spirits. People who are unemployed, or who are so discouraged that they’re not even looking for work any more, don’t tend to be very happy with their lot, and as a result are more likely to vote against the current president. It may or may not be fair, but the president does get blamed for current economic conditions, and arguments about first derivatives (“it’s bad, but it’s getting better”) or counterfactuals (“it’s bad, but it’s better than it would have been under the other guys”) tend to be pretty unpersuasive to voters.

On this level, today’s employment report is pretty gruesome. According to the establishment survey, employers added just 96,000 jobs this month — fewer than the number needed just to keep up with population growth. According to the household survey, the size of the civilian labor force shrank by 368,000 people last month. And the number of people not in the labor force grew by an absolutely massive 581,000.

Right now, the proportion of Americans with a job is lower than it has been in over 30 years. America’s getting older, and you’d expect the number to be falling — but it shouldn’t be falling nearly as fast as this. We’re well below trend, when it comes to the employment-to-population ratio, and that’s really bad for the economy as a whole: it means we have fewer productive workers, and as a result the country is creating much less wealth than it could be creating if more people had jobs. At the margin, of course, anything that depresses the amount of wealth in the country is bad for the incumbent president.

So anybody trying to use the jobs report to handicap the result of the election should probably see a tick down, this morning, in the chances of Obama’s re-election. My feeling is, however, that the size of the tick is likely to be very small. These things depend much more on levels than on deltas, and in any case the current electorate is more polarized than ever, with political convictions which are hard to shake.

Which brings me to the second way that employment affects electoral outcomes. The employment numbers are reported on the first Friday of every month, and political parties try to use the numbers to their best advantage. On this front, there’s really only one number that matters, and that’s the headline unemployment rate. Financial types care more about the payrolls number, because it’s more accurate and less fuzzy. The unemployment rate, by contrast, is harder to calculate, and is based on the idea that you’re only unemployed if you’re looking for work. But the fact is that from a rhetorical perspective, the unemployment rate is the thing which counts. And so in terms of the optics of today’s report, it’s good for Obama, just because the unemployment rate fell — to 8.1% this month from 8.3% last month and 9.1% a year ago.

That’s still well above the 7% at which the psephologists will tell you that it’s very hard for an incumbent to get reelected. And it still starts with an 8 — although there’s now a small chance that on the day we actually vote, the unemployment rate might start with a 7. But the Republicans can’t say that the unemployment rate is rising, and the Democrats can say that it is falling. Will that change votes? Again, not very many. But insofar as arguments have an effect on elections, this report — bad though it is — has failed to give the Republicans the kind of rhetorical ammunition they might have hoped for.

Underlying both of these dynamics is the way in which the story of discouraged workers — people falling out of the labor force entirely — has become increasingly important, to the point at which it makes the headline unemployment rate much less useful as an economic indicator. Once upon a time, if you didn’t have a job, you fell into one of two categories: either you didn’t want to work, or else you were looking for work. Nowadays, however, there’s a huge third category of discouraged workers who would love a job but don’t even see the point of looking any more.

The Bureau of Labor Statistics has an interesting-if-obscure data series called “labor force status flows”. Most of the people interviewed in the survey measuring the unemployment rate, it turns out, were also interviewed the previous month. So it’s possible to look at the number of people, on a month-to-month basis, who were unemployed last month and who were no longer in the labor force this month. Historically, that number has been somewhere between 1.5 million and 2 million per month, on a seasonally-adjusted basis. But when the recession hit, it spiked to more than 2.5 million, and even more than 3 million at the peak. And it’s still extremely high.

It’s natural for lots of unemployed people to move out of the labor force each month: the Boomers are retiring, after all. But a glance at this chart is all it takes to see that we’re well outside normal territory, and that we’re still seeing millions of people leave the labor force not because they want to but because they feel that there’s simply no point in looking for work any more. I don’t know when or whether this line will come back down to its historical levels. But so long as it’s as elevated as this, the Federal Reserve has its work cut out. Because it means that even if the unemployment rate comes down substantially, we still won’t have really reached full employment — not unless the size of the labor force increases substantially at the same time.

COMMENT

Who will the unemployed vote for?
A promise of jobs?
A promise of handouts?
Working is work; maybe a handout is easier.

Posted by whyknot | Report as abusive

Why you won’t find hyperinflation in democracies

Felix Salmon
Sep 4, 2012 03:29 UTC

There are those who believe that the length of a mathematical paper is inversely proportional to how interesting it is. Something similar can be said about the new paper — short and absolutely first-rate — from Steve Hanke and Nicholas Krus, entitled “World Hyperinflations”. It’s technically 19 pages long, but the first 12 are basically just throat-clearing, and the last two are references. The meat is the five pages in the middle: three pages of tables, and another two of footnotes, detailing every instance of hyperinflation that the world has ever seen.

Hyperinflation, here, has a clear quantitative definition: prices rising by at least 50% per month. (Remember that, the next time some scaremonger starts talking about how US monetary policy risks causing hyperinflation.) And after some three years’ work, Hanke and Krus have managed to come up with an exhaustive list of every hyperinflationary episode in history — 56 in all, or 57 if you include North Korea in early 2010, where the data aren’t solid enough to merit inclusion in the list.

Every entry gets its own footnote, and while there are a lot of relatively easy-to-obtain IMF publications in there, there’s also no shortage of much more obscure source material: Simeun Vilendecic’s Banking in Republika Srpska in the late XX and early XXI century, for instance, or Abram van Heyningen Hartendorp’s 1958 History of Industry and Trade of the Philippines.

The earliest hyperinflation on the list came in France, at the end of the 18th Century, when inflation hit a monthly rate of 304% in mid-August 1796. The famous Weimar hyperinflation in Germany is pegged as taking place between August 1922 and December 1923; it reached a monthly peak of 29,500% in October 1923, with prices increasing at 20.9% per day, and doubling every 3.7 days. And the longest period of hyperinflation comes in Greece, which saw hyperinflation for a whopping 55 months, from May 1941 to December 1945. There’s no particular reason, looking at this list, why Germany should have been particularly scarred by hyperinflation, to the point at which it fiercely attacks even the possibility of relatively modest inflation, while France and Greece (not to mention Hungary or China or Argentina) have been much less deeply affected.

There is, however, a very strong correlation between the length of time that a period of hyperinflation goes on, and the levels that it can reach at its height. If you look at the top six hyperinflations on the list — which include both Germany and that 55-month period in Greece — all but one lasted for longer than a year. Meanwhile, five of the bottom six hyperinflations took place in just a single month, with the sixth lasting just three months.

At their highest, the numbers start to beggar the imagination: in mid-November 2008, for instance, inflation in Zimbabwe reached a monthly rate of 79,600,000,000%. That’s 79 billion percent per month. At that rate, prices pretty much double every day. And Zimbabwe doesn’t even manage to grab the top spot: in July 1946, Hungary saw hyperinflation of 41,900,000,000,000,000%. That’s 42 quadrillion percent in one month, with prices doubling every 15 hours.
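The doubling times follow directly from the monthly rates. Here’s a quick sketch; note that it assumes a flat 30-day month, while Hanke and Krus use exact day counts, so the results are approximate:

```python
import math

def doubling_time_days(monthly_inflation_pct, days_per_month=30):
    """Days for prices to double, given a monthly inflation rate in percent."""
    monthly_factor = 1 + monthly_inflation_pct / 100   # e.g. 50% -> 1.5x per month
    daily_log_growth = math.log(monthly_factor) / days_per_month
    return math.log(2) / daily_log_growth

# Weimar Germany, October 1923: 29,500% per month
print(round(doubling_time_days(29_500), 1))            # ~3.7 days
# Zimbabwe, November 2008: 79.6 billion percent per month
print(round(doubling_time_days(79_600_000_000), 1))    # ~1.0 days
# Hungary, July 1946: 41.9 quadrillion percent per month
print(round(doubling_time_days(4.19e16) * 24, 1))      # ~14.8 hours
```

All three results line up with the figures quoted above, which is reassuring about the paper’s internal consistency.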

The real value of this paper is its exhaustive nature. By looking down the list you can see what isn’t there — and, strikingly, what you don’t see are any instances of central banks gone mad in otherwise-productive economies. As Cullen Roche says, hyperinflation is caused by many things, such as losing a war, or regime collapse, or a massive drop in domestic production. But one thing is clear: it’s not caused by technocrats going mad or bad.

For that matter, there are no hyperinflations at all in North America: the closest we’ve come, geographically speaking, was in Nicaragua, from 1986-91. In fact, if you put to one side the failed states of Zimbabwe and North Korea, there hasn’t been a hyperinflation anywhere in the world since February 1997, more than 15 years ago, despite the enormous number of heterodox central-bank actions in that time.

All of which is to say that hyperinflation, in and of itself, really isn’t anything to worry about. It’s pretty much impossible to predict — and if your country has hyperinflation, it almost certainly has even bigger other problems. In fact, I’d hesitate to categorize hyperinflation as a narrowly economic phenomenon at all, as opposed to simply being a symptom of much bigger failures at the geopolitical level. Those failures are exacerbated by hyperinflation, of course: there’s very much a vicious cycle in these episodes. But you only ever find hyperinflation under extreme conditions, and, with a single exception (Peru), I’m not even sure I can find any genuine democracies on this list.

Update: As many people have helpfully pointed out, Weimar can definitely be considered a genuine democracy, even if it was suffering extraordinary geopolitical burdens.

COMMENT

IASB defines hyperinflation as about 100% over a three year period.

While 50% per month can, without a doubt, be characterized as hyperinflation, I believe a far lower percentage qualifies.

Inflating our troubles away is the only solution sufficiently expedient for cowardly politicians (i.e. ours). It will be characterized as “growing out of our debt” but it will be what we all know it to be.

Posted by EvoShandor

Annals of dubious research, 401(k) loan-default edition

Felix Salmon
Aug 13, 2012 05:13 UTC

Bob Litan, formerly of the Kauffman Foundation and the Brookings Institution, has recently taken up a new job as director of research for Bloomberg Government, where he’s going to have to be transparent and impartial. But one of his last gigs before moving to Bloomberg — a paper on the subject of people borrowing money from their 401(k) accounts — was neither of those things.

To understand what’s going on here, first check out Jessica Toonkel’s article from Friday about Tod Ruble and his company, Custodia.

Tod Ruble is trying to sell retirement plan insurance that employers say they do not want and their employees may not need.

But the Dallas-based veteran commercial real estate investor is not letting that stop him. Since late 2010, he has started up a company, Custodia Financial, and spent more than $1 million pushing for legislation that would allow companies to automatically enroll employees who borrow from their 401(k) plans in insurance that could cost hundreds of dollars a year.

Once you’ve read that, go back and check out a spate of stories that hit a series of major news outlets in July. Alan Farnham of ABC News, for instance, ran a story under the headline “401(k) Loan Defaults Skyrocket”:

A new study estimates that such defaults might total $37 billion a year, a sharp increase from 2007, when defaults totaled only $665 million.

Similarly, check out Walter Hamilton, in the Chicago Tribune (and LA Times): the headline there is “Defaults on 401(k) loans reach $37 billion a year”. At Time, Dan Kadlec also ran with the $37 billion number, saying that “the default rate on these loans has skyrocketed since the recession”. Similar stories came from Blake Ellis at CNN Money (“Loan defaults drain $37 billion from 401(k)s each year”), Mitch Tuchman at MarketRiders (“401k Loan Default Time Bomb Is Ticking”), and many others.

The only hint of skepticism came from Barbara Whelehan at BankRate. She noted that the study cited Kevin Smart, CFO of Custodia Financial, as a source — and she also noted that “it would be a boon for the insurance industry to get the rules changed, and it is working behind the scenes to do just that. In April, Custodia Financial submitted a statement to the House Ways and Means Committee arguing for automatic enrollment into insurance coverage for 401(k) loans.”

Whelehan also smelled something fishy in the way the paper was paid for:

This paper by Navigant Economics, which made a big splash in the press, was financially supported by Americans for Retirement Protection. That organization has a website, ProtectMyRetirementBenefits.com, but no “about us” link. It does give you the opportunity to sign a petition demanding protection of retirement funds through insurance. Take a look at it, and see if you think the website was created by average Americans or by the insurance industry.

Whelehan was actually breaking news here: there’s no public linkage between Americans for Retirement Protection, the organization which paid for the paper, and the astroturf website. In fact, Americans for Retirement Protection seems to have no public existence at all, beyond a footnote in the paper, which was co-authored by Bob Litan and Hal Singer.

Enter Toonkel, writing her story about Custodia. In the course of her reporting, she discovered — and Custodia confirmed — that Americans for Retirement Protection, and ProtectMyRetirementBenefits.com, are basically alter egos of Custodia itself. Custodia would welcome other organizations joining in, but that’s unlikely to happen, because Custodia owns the patents on the big idea that the paper and the website are pushing — the idea that 401(k) loans should come bundled with opt-out insurance policies.

Once you’re armed with this information, it’s impossible not to look at the Litan-Singer paper in a very different way. Its abstract concludes: “We demonstrate that the social benefits of steering (but not compelling) plan participants towards insurance when they borrow are likely positive and economically significant.” And yet nowhere in the paper is there any indication that it was bought and paid for by the very company which has a patent on doing exactly that.

And what about that $37 billion number? Are defaults on 401(k) loans really as big a problem as the paper says that they are? After all, the smaller the problem, the less important it is to introduce an expensive fix for it.

The simple answer is no: 401(k) loan defaults are not $37 billion per year. But the fact is that nobody knows for sure exactly what they are, which makes it much easier to come up with exaggerated estimates. As the paper itself admits, “the sum total of 401(k) defaults ought to be an easily accessible statistic, but it is not”. And the $37 billion, far from being a good-faith estimate, in fact looks very much like an attempt to get the largest and scariest number possible.

So how did Litan and Singer arrive at their $37 billion figure? Let’s start with the only concrete numbers we have — the ones from the Department of Labor, whose most recent Private Pension Plan Bulletin gives a wealth of information about all private pension plans in the country. Every pension plan has to file something called a Form 5500, and the bulletin aggregates all the numbers from all the 5500s which are filed; the most recent bulletin gives data from 2009.

This bulletin has two datapoints which are germane to this discussion. First of all, there’s Table A3, on page 7 of the bulletin (page 11 of the PDF). That shows that loans from defined-contribution pension plans to their own participants totaled $51.7 billion in 2009. Secondly, there’s Table C9, the aggregated income statement for the year. If you look at page 32 of the bulletin (page 35 of the PDF), you’ll see a line item called “deemed distribution of participant loans”, which came to $670 million for the year. If you borrow money from your 401(k) and you don’t pay it back, then that money is deemed to have been distributed to you, and counts as a default. So we know that the official size of 401(k) defaults in 2009 was $670 million — a far cry from Litan and Singer’s $37 billion.

Now the $670 million figure does not account for all 401(k) defaults. Most importantly, in some situations, if you default on a 401(k) loan after having been fired from your job, then the money is counted as an “actual distribution” rather than as a “deemed distribution”.

The Litan-Singer paper goes into some detail about this. “According to a recent study by Smart (2012),” they write, “although Form 5500 reflects actual distributions, there is no way to determine the amount of actual defaults.” They then look in detail at Smart’s figures, footnoting him five consecutive times, and treating him as an undisputed authority on such matters. Their citation is merely “Kevin Smart, The Hidden Problem of Defined Contribution Loan Defaults, May 2012.”

Where might someone find this paper? Here, since you ask: it’s helpfully hosted at CustodiaFinancial.com. And on the front page of the paper, Kevin Smart is identified as the “Chief Financial Officer, Custodia Financial”.

There’s no indication whatsoever in the Litan-Singer paper that the “Smart” they cite so often is the CFO of Custodia Financial, the company which has the most to gain should their recommendation be accepted. And there’s certainly no indication that he’s essentially their employer: that Custodia paid them to write this paper. In fact, the name Custodia appears nowhere in the Litan-Singer paper at all.

It’s instructive to look at the Smart paper’s attempt to estimate the magnitude of the 401(k) default problem. I’ll simplify a little here, but to a first approximation, Smart assumes that 12% of people with 401(k) loans lose their jobs. He also assumes that if you lose your job when you have a 401(k) loan, there’s an 80% chance you’ll default on that loan. As a result, he comes up with a 9.6% default rate on 401(k) loans. He then multiplies that 9.6% default rate by total 401(k) loans of $51.7 billion, adds in some extra defaults due to death and disability, and comes up with a grand total of $6.2 billion in loan defaults per year, excluding the “deemed distributions” of $670 million. Call it $7 billion in total, of which $6 billion could be protected by insuring loans against unemployment, death, and disability.
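Smart’s arithmetic is simple enough to check. A sketch using his stated inputs (the death-and-disability add-on that bridges the gap to $6.2 billion is his figure, which I can’t verify independently):

```python
loans_outstanding = 51.7e9    # DoL figure for participant loans, 2009
p_job_loss = 0.12             # Smart: share of loan holders terminated per year
p_default_given_loss = 0.80   # Smart: share of terminated borrowers who default

default_rate = p_job_loss * p_default_given_loss           # 9.6%
termination_defaults = default_rate * loans_outstanding

print(round(default_rate, 3))                  # 0.096
print(round(termination_defaults / 1e9, 2))    # ~4.96 (billion dollars)

# Smart then adds death-and-disability defaults to reach ~$6.2 billion;
# the $670 million in deemed distributions brings the total to ~$7 billion.
```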

Now remember that this is a paper written by the CFO of Custodia Financial — someone who clearly has a dog in this race. It’s in Smart’s interest to make the loan-default total look as big as possible, since the bigger the problem, the more likely it is that Congress will agree to implement Custodia’s preferred solution.

But when Litan and Singer looked at Smart’s paper, they weren’t happy going with his $6 billion figure. Indeed, as we’ve seen, their paper comes up with a number six times larger. So if even the CFO of Custodia only managed to estimate defaults at $6 billion, how on earth do Litan and Singer get to $37 billion?

It’s not easy. First, they double the total amount of 401(k) loans outstanding, from $52 billion to $104 billion. Then, they massively hike the default rate on those loans, from 9.6% to 17.9%. And finally, they add in another $12 billion or so to account for the taxes and penalties that borrowers have to pay when they default on their loans.

It’s possible to quibble with each of those changes — and I’ll do just that, in a minute. But it’s impossible to see Litan and Singer compounding all of them, in this manner, without coming to the conclusion that they were systematically trying to come up with the biggest and scariest number they could possibly find. It’s true that whenever they mention their $37 billion figure, it’s generally qualified with a “could be as high as” or similar. But they knew what they were doing, and they did it very well.

When the Chicago Tribune and the LA Times say in their headlines that 401(k) loan defaults have reached $37 billion a year, they’re printing exactly what Custodia and Litan and Singer wanted them to print. The Litan-Singer paper doesn’t exactly say that defaults have reached that figure. But if you put out a press release saying that “the leakage of funds in 401(k) plans due to involuntary loan defaults may be as high as $37 billion per year”, and you don’t explain anywhere in the press release that “leakage of funds” means something significantly different to — and significantly higher than — the total amount defaulted on, then it’s entirely predictable that journalists will misunderstand what you’re saying and apply the $37 billion number to total defaults, even though they shouldn’t.

But let’s backtrack a bit here. First of all, how on earth did Litan and Singer decide that the total amount of 401(k) loans outstanding was $104 billion, when the Department of Labor’s own statistics show the total to be just half that figure? Here’s how.

First, they decide that they need the total number of active participants in defined-contribution pension plans. They could get that number — 72 million — from the Labor Department bulletin: it’s right there in the very first table, A1. But the bulletin isn’t helpful to them, as we’ve seen, so instead they find the same number in a different document from the same source.

That’s as much Labor Department data as Litan and Singer want to use. Next, they go to the Investment Company Institute, which has its own survey, covering some 23 million of those 72 million 401(k) participants. According to that survey, in 2011, 18.5% of active participants had taken out a loan; Litan and Singer extrapolate that figure across the 401(k) universe as a whole.

Finally, Litan and Singer move on to Leakage of Participants’ DC Assets: How Loans, Withdrawals, and Cashouts Are Eroding Retirement Income, a 2011 report from Aon Hewitt which is based on fewer than 2 million accounts, of the 72 million total. According to the Aon Hewitt report, which doesn’t go into any detail about methodology, when participants took out loans, “the average balance of the outstanding amount was $7,860”. Needless to say, that number was never designed to be multiplied by 72 million, as Litan and Singer do, to generate an estimate for the total value of loans outstanding.

If you want an indication of just how unreliable and unrepresentative the $7,860 number is, you just need to stay on the very same page of the Aon Hewitt report, which says that 27.6% of participants have a loan. If Litan and Singer think that the $7,860 figure is reliable, why not use the 27.6% number as well? If they did that, then the total value of 401(k) loans outstanding would be $7,860 per loan, times 72 million participants, times 27.6% of participants with a loan. Which comes to $156 billion.
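The inconsistency is easy to make concrete, using the Aon Hewitt page’s own numbers:

```python
participants = 72e6         # active DC-plan participants (Labor Department)
avg_loan_balance = 7_860    # Aon Hewitt: average outstanding loan balance
share_with_loan = 0.276     # Aon Hewitt: share of participants with a loan

implied_total = participants * share_with_loan * avg_loan_balance
print(round(implied_total / 1e9))   # ~156 (billion dollars)
# ...three times the $51.7 billion the Labor Department actually reports.
```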

But of course we know that there were just $51.7 billion of loans outstanding in 2009; evidently Litan and Singer reckoned that it just wouldn’t pass the smell test if they tried to get away with saying that number might have trebled in a single year. So they confined themselves to merely doubling the number, instead.

Litan and Singer give no reason to mistrust the official $51.7 billion number, except to say that it’s “outdated”. But if it’s outdated, it’s only outdated by one year: it’s based on 2009 data, while the much narrower surveys that Litan and Singer cite are generally based on 2010 data. At one point, they cite the ICI survey to declare that there is “an estimated $4.5 trillion in defined contribution plans”, despite the fact that the much more reliable Labor Department report shows that there was just $3.3 trillion in those plans as of 2009. This, I think, quite neatly puts the lie to the Litan-Singer implication that the problem with the Labor Department numbers is merely that they are out of date, and that when we get numbers for 2010 or 2011, they might well turn out to be in line with the Litan-Singer estimates. There’s simply no way that total DC assets rose from $3.3 trillion to $4.5 trillion in the space of a year or two.

In other words, whatever advantage the ICI and Aon Hewitt surveys have in terms of timeliness, they more than lose in terms of simply being based on a vastly smaller sample base. Litan and Singer adduce no reason whatsoever to believe that the ICI and Aon-Hewitt surveys are in any way representative or particularly accurate, despite the fact that the discrepancies between their figures and the Labor Department figures are prima facie evidence that they’re not representative or particularly accurate. If the ICI and Aon Hewitt surveys were all we had to go on, then I could understand Litan-Singer’s decision to use them. But given that the Labor Department already has the number they’re looking for, it just doesn’t make any sense that they would laboriously try to recreate it using less-reliable figures.

It’s true that the Labor Department’s figures do undercount in one respect: they cover only plans with 100 or more participants — and therefore cover “only” 61 million of the 72 million active participants in DC plans. If Litan and Singer had taken the Labor Department’s numbers and multiplied them by 72/61, or 1.18, that I could understand. But disappearing into a rabbit-warren of private-sector surveys of dubious accuracy, and emerging with a number which is double the size of the official one? That’s hard to justify. So hard to justify, indeed, that Litan and Singer don’t even attempt to do so.

That, indeed, is the strongest indication that the Litan-Singer paper can’t really be taken seriously. For all their concave borrower utility functions and other such economic legerdemain, they simply assert, rather than argue, that they “believe” it is “more appropriate” to use private-sector surveys rather than hard public-sector data. Such decisions cannot be based on blind faith: there have to be reasons for them. And Litan-Singer never explain what those reasons might be.

Now the move from public-sector to private-sector data merely doubles the total size of the purported problem, while Litan-Singer are much more ambitious than that. So their next move is to bump up the default rate on loans substantially.

There’s no official data on default rates at all, so Litan and Singer, following Smart’s lead, decide to base their sums on a Wharton paper from 2010. Once again, they have to extrapolate from a very small sample: the Wharton researchers had at their disposal a dataset covering 1.5 million plan participants (just 2% of the total). Looking at what happened over a period of three years, from July 2005 to June 2008, the researchers found that the number of terminations, and the number of defaults, remained pretty steady:

[Chart: termination and default rates, July 2005 to June 2008, from the Wharton study]

These are the numbers that Smart used in his paper: roughly 12% of loan holders being terminated each year, and roughly 80% of those defaulting on their loans.

But these are not the numbers that Litan-Singer use. Instead, they notice that the overall default rate, as a percentage of overall loans outstanding, was roughly double the national unemployment rate at the time. And so since the unemployment rate doubled after June 2008, they conclude that the default rate on outstanding 401(k) loans probably doubled as well.

Do they have any evidence that the default rate on 401(k) loans might have doubled after 2008? No. Well, they have a tiny bit of evidence: they look at the small variations in default rates in each of the three years covered in the Wharton study, and see that those variations move roughly in line with the national unemployment rate. Never mind that the default rate fell, from 9.9% to 9.7%, between 2006 and 2008, even as the unemployment rate rose, from 4.8% to 5.0%. They’ve still somehow managed to convince themselves that it’s reasonable to assume that the default rate today is nowhere near the 9.6% seen in the Wharton survey, and in fact is probably closer to — get this — 17.9%.

This doesn’t pass the smell test. The primary determinant of the default rate, in the Wharton study, was the percentage of loan holders who wound up having their employment terminated, for whatever reason. And so what Litan-Singer should be looking at is the increase in the probability that any given employee will end up being terminated in any given year.

Remember that in any given month, or year, the number of people fired is roughly equal to the number of people hired. When the former is a bit larger than the latter for an extended period, then the unemployment rate tends to go up; when it’s smaller, the rate goes down. But the churning in the employment economy is a constant, even when the unemployment rate is very low.

When the unemployment rate rose after 2008, that was a function of the fact that the number of people being fired was a bit higher than normal, while the number of people being hired was a bit lower than normal. But looked at from a distance, neither of them changed that much. In terms of the Wharton study, what we saw happening to the unemployment rate is entirely consistent with the percentage of loan-holders being terminated, per year, staying pretty close to 12%. Of course it’s possible that number rose sharply, but it’s really not possible that number rose as sharply as the unemployment rate did. And so I find it literally incredible that Litan and Singer should decide to use the national unemployment rate as a proxy for the number of people whose employment is terminated each year.

Well, maybe not literally incredible — the fact is there’s one very good reason why they might do that. Which is that they were being paid by Custodia to use any means possible to exaggerate the number of annual 401(k) loan defaults.

Litan and Singer do actually provide a mini smell test of their own: they say that their hypothesized rise in 401(k) loan defaults is more or less in line with the rise in, say, student-loan defaults or in mortgage defaults over the same period. But those statistics aren’t comparable at all, because Litan and Singer are already assuming that the default rate on 401(k) loans, among people who lose their job, was a whopping 80% before the financial crisis. There’s a 100% upper bound here: you can’t have a default rate of more than 100%. Remember that the whole point of this paper is to make the case that people taking out 401(k) loans should insure themselves against unemployment: any rise in the default rate from people who don’t lose their job (or die, or become disabled) is more or less irrelevant here. And when your starting point is a default rate of 80%, there really is a limit to how much that default rate can rise; it can’t rise by more than 25% before hitting the 100% ceiling, even if you allow a simultaneous increase in the number of people being terminated.

All of this massive exaggeration has an impressive effect: if you take $104 billion in loans, and apply a 17.9% default rate, then that comes to a whopping $18.6 billion in 401(k) loan defaults every year. A big number — but still, evidently, not big enough for Litan and Singer. After all, their number is $37 billion: double what we’ve managed to come up with so far. We’ve already doubled the size of the loan base, and almost-doubled the size of the default rate, so how on earth are we going to manage to double the total again?
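For the record, the first two moves alone give, with Litan-Singer’s own inputs:

```python
loan_base = 104e9       # Litan-Singer's doubled estimate of loans outstanding
default_rate = 0.179    # their hiked default rate

annual_defaults = loan_base * default_rate
print(round(annual_defaults / 1e9, 1))   # ~18.6 (billion dollars per year)
```

The remaining leap, from $18.6 billion to $37 billion, is where the “leakage” redefinition comes in.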

The answer is that Litan and Singer, at this point, stop measuring defaults altogether, and turn their attention to a much more vaguely-defined term called “leakage”. Once again, they decide to outsource all their methodology to Custodia’s CFO, Kevin Smart. The upshot is that if you borrowed $1,000 from your 401(k) and then defaulted on that loan, the amount of “leakage” from your 401(k) is deemed to be much greater than $1,000. Litan and Singer first add on the 10% early-withdrawal penalty that you get charged for taking money out of your plan before you retire. They also add on the income tax you have to pay on that $1,000, at a total rate of 30%. (They reckon you’ll pay 25% in federal taxes, and another 5% in state taxes.) So now your $1,000 default has become a $1,400 default.

How does that extra $400 count as leakage from the 401(k), rather than just something that gets added to your annual tax bill? Smart explains:

Most participants borrow from their retirement savings because they are illiquid and do not have access to other sources of credit. This clearly demonstrates that participants who default on a participant loan do not have the financial means to pay the taxes and penalty. Unfortunately, their only source of capital is their retirement savings plan so many take the remaining account balance as an additional early distribution to pay the taxes and penalty, further increasing the amount of taxes and penalties due. These taxes and penalties become an additional source of leakage from retirement assets.

Smart’s 16-page paper has no fewer than 24 footnotes, but he fails to provide any source at all for his assertion that “many” people raid their 401(k) plans in order to pay the taxes on the money they’ve already borrowed. In any event, Smart (as well as Litan and Singer, following his lead) makes the utterly unjustifiable assumption that not only many but all 401(k) defaulters end up withdrawing the totality of their penalties and extra taxes from their retirement plan. And then, just for good measure, because that withdrawal also comes with a penalty and taxes, they apply a “gross-up” to that.

By the time all’s said and done, the $1,000 that you lent yourself from your 401(k) plan, and failed to pay back in a timely manner, has become $1,520 in “leakage”. Add in some extra “leakage” for people who default due to death or disability (apparently even dead people raid their 401(k) plans to pay income tax on the money they withdrew), and somehow Litan and Singer contrive to come up with a total of $37 billion.
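The papers don’t spell out the gross-up arithmetic step by step, so what follows is my reconstruction, not their published working; it’s one reading that happens to reproduce the $1,520 figure. The 30% tax rate and 10% penalty are theirs; the assumption that the extra $400 is itself withdrawn from the plan and taxed again is how the final step seems to work:

```python
principal = 1_000     # defaulted 401(k) loan balance
penalty_rate = 0.10   # early-withdrawal penalty
tax_rate = 0.30       # assumed combined federal + state income tax

taxes_and_penalty = principal * (penalty_rate + tax_rate)   # $400
# Assume the $400 is withdrawn from the plan too, and taxed in turn:
grossed_up = taxes_and_penalty * (1 + tax_rate)             # $520
leakage = principal + grossed_up
print(round(leakage))   # 1520
```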

It’s an unjustifiable piling of the impossible onto the improbable, and the press just lapped it up — not least because it came with the imprimatur of Litan, a genuinely respected economist and researcher. Custodia hired him for precisely that reason: they knew that if his name was on the front page of a report, that would give it automatic credibility. But for exactly the same reason, Litan had a responsibility to be intellectually honest when writing this thing.

Instead, he never even questioned any of the assumptions made by Custodia’s CFO. For instance: if you’re terminated, and you default on your 401(k) loan, what are the chances that the money you received will end up being counted as an “actual distribution” rather than as a “deemed distribution”? Smart and Litan and Singer all implicitly assume that the answer is 100%, but they never spell out their reasoning; my gut feeling is that it’s not nearly as clear-cut as that, and that it all depends on things like when you lost your job, when you defaulted, and who your pension-plan administrator is.

Custodia’s business, and the Litan-Singer paper, are based on the idea that if people who borrowed money from their 401(k) plans had insurance against being terminated from their jobs, then that would have significant societal benefit. In order for the societal benefit to be large, the quantity of annual 401(k) loan defaults due to termination also has to be large. But right now, there’s not a huge amount of evidence that it actually is: in fact, we really have no idea how big it is.

I can say, however, that Custodia has already won this battle where it matters — in the press. “Protecting 401(k) savings from job loss makes a lot of sense,” said Time’s Kadlec in his post — and so long as Custodia can present lawmakers with lots of headlines touting the $37 billion number and supporting their plan, Litan and Singer will have done their job. The truth doesn’t matter: all that matters is the headlines, and the public perception of what the truth is.

Come to think of it, maybe this makes Litan the absolutely perfect person to run the research department at Bloomberg Government. On the theory that it takes a thief to catch a thief, Bloomberg has hired someone who clearly knows all the tricks when it comes to writing papers which come to a predetermined conclusion. And he also has a deep understanding of the real purpose of most of the white papers floating around DC: it’s not to get closer to the truth, but rather to stamp a superficially plausible institutional imprimatur onto a policy that some lobbyist or pressure group desperately wants enacted. I can only hope that in the wake of using his talents in order to serve Custodia Financial, Litan will now turn around and use them in order to serve rather greater masters. Like, for instance, truth, and transparency, and intellectual honesty.

COMMENT

Wow.

It must have taken a helluva long time to research all that. Either you are paid by the minute and receive bonuses per the written word or you are trying to serve greater masters.

Litan clearly has contrived data to serve his own goals.

Posted by breezinthru

Why social mobility is important

Felix Salmon
Jul 30, 2012 21:46 UTC

Tim Harford is a fan of the clear way in which Alex Tabarrok has couched the debate — which started with a Tyler Cowen post back in January — about the desirability of intergenerational economic mobility. Or, in English, is it a good thing if quite a lot of poor people become rich?

The Marginal Revolution guys say that looking at economic mobility is overrated; Cowen, also in January, linked to a bunch of critics of that position, including John Quiggin, Brad DeLong, and Paul Krugman. Recently, DeLong resuscitated the discussion, and Krugman came back for a second go-round as well, all of which resulted in Cowen being rude about Krugman, and Tabarrok trying to clear things up.

Tabarrok’s post is indeed clear, but it’s clear in an invidious way. He basically starts with his conclusion, saying that if a high-mobility society has no better outcome, in general, than a low-mobility society, then there’s not very much to choose between them. And similarly, he says, if both a high-mobility society and a low-mobility society have the same very good outcome, then again there’s not much to choose between them.

But this obtusely misses the fundamental reason why high mobility is a good thing: that it improves outcomes. A sclerotic society where no rich people become poor and where no poor people become rich is never going to be a hive of creative destruction. Cowen even comes close to admitting this, when he says that “if the general standard of living is rising, mobility takes care of itself over time” — except he has the causality largely backwards. If you have lots of social mobility, then the general standard of living is going to go up: you’ll have lots of poor people becoming richer, and you’ll also have the rich protecting their downside, in the likely event that they become poorer, by doing their best to improve the lot of the poor.

So when Cowen talks about economic mobility not mattering much “for a given level of income”, or when Tabarrok talks about “some simple societies” with fixed levels of income, they’re taking the variable in the equation and they’re turning it into a constant. What they should be doing is looking at two societies, equal in all respects except that one is high-stasis and the other is high-churn, then fast-forwarding to see which one turns out better. The answer, of course, is the high-churn society — which means, working backwards, that if you want growth, you also want social mobility.

As a result, it’s reasonable to conclude that anything which impedes social mobility — like rising inequality, say — also impedes growth. The effect might not be huge, but it’s there. And the only way not to see it is to effectively assume your conclusions.

COMMENT

Of course I meant “tenets”.

Posted by FifthDecade