Opinion

Felix Salmon

Why charitable donations to public schools are OK

Felix Salmon
Sep 6, 2013 21:44 UTC

Rob Reich is worried about school inequality. (Longer, more fun version here.) When it comes to education, as in so many other fields, the rich just get richer, leaving everybody else behind. His Exhibit A: the parents of wealthy Hillsborough, California, who between them donate some $2,300 per child per year — all of it fully tax-deductible — to supplement the money coming from the state. This, says Reich, is not what charitable deductions are for:

Wanting to support your own children’s education is understandable, but it also has unintended, pernicious effects… charity like this is not relief for the poor. It is, in fact, the opposite. Private giving to public schools widens the gap between rich and poor. It exacerbates inequalities in financing. It is philanthropy in the service of conferring advantage on the already well-off… Tax policy makes federal and state governments complicit in the deepening of existing inequalities that they are ostensibly responsible for diminishing in the first place.

On the one hand, I should really agree with Reich here. After all, I spent 1,500 words last year railing against the private-school equivalent of these donations: the charitable top-ups that parents get arm-twisted into paying to educate their own kids. But in fact, partly because of that post, I actually quite like what’s happening in California.

For one thing, we’re seeing some real community spirit here: the parents of Hillsborough are supporting the entire school district with their donations, rather than just their own kids’ specific schools. As a result, their donations are improving the lot of every child in their district. True, most of those kids, as Reich says, are “already well-off”. But the quality of education in aggregate is improved, and while inequality might be exacerbated between communities, it isn’t exacerbated within communities, which is what results when the richest members of society decide to take their kids out of the public school system entirely.

The parents of Hillsborough have come to a collective realization: if they club together to make their public schools excellent, then none of them need to think about sending their kids to private school. An annual donation of $2,300 is peanuts compared to even the cheapest private-school tuition — and can result in a better education, too. (I can tell you first-hand that Palo Alto’s public schools are better than a certain expensive private school in London.)

As Richard Reeves says, the job of redistribution rightly belongs to the government, not to individual charitable donors. The state of California should take some of the money currently being spent in Hillsborough, and spend it instead in East Palo Alto, where it’s needed much more urgently. At the margin, private cashflows in Hillsborough should just make such redistribution easier, rather than harder. California, even more than most states, is suffering from enormous budget cutbacks; if there are thousands of parents around the state who are willing and able to augment current education funding, that’s a gift horse which I don’t think needs a huge amount of dental examination.

As Reich himself says, “when it comes to addressing the root cause of inequality in public education in the United States, the solution will have little or nothing to do with philanthropy”. It stands to reason, then, that philanthropy is not the problem here either. I do worry about the rise of what I call “transactional philanthropy” — the quid-pro-quo model where you give money to a philanthropy just because you get something back in return. (A tote bag, your name on a building, a better education for your child.) But I’m also drawn to the deeper idea of what’s going on here.

We all pay for public education, because it’s a public good. Private education, meanwhile, is a private good, which is rightly paid for with after-tax income. California is now developing an intermediate model — what the Hillsborough Schools Foundation likes to call “a groundbreaking public-private partnership to support our schools”. I’d like to think that the intermediate model, at the margin, might slowly replace the old binary model, and result in wealthy parents being more involved in their local public schools, and less likely to opt out entirely by sending their kids to private school. And that, in turn, would be good for everybody.

COMMENT

>if the parents are conscious enough (and well off enough) to pay the extra $2k then that by definition means that they are involved in other aspects of their children’s lives <

Like feeding their children.

Like buying their children basic school supplies.

The situation is really sad.

Posted by dtc

Jobs: The summer’s over

Felix Salmon
Sep 6, 2013 13:39 UTC

If you wanted to engineer the strongest possible recovery in the US economy, you would try to create two things. First, and most important, you would want robust jobs growth, with employers adding positions, the unemployed — and especially the long-term unemployed — finding new jobs, and the proportion of Americans with jobs rising steadily. Secondly, you would want to introduce errors into the monthly jobs report. You would try to make jobs growth seem weaker than it really was, and unemployment higher. By doing that, you would keep monetary policy — and market expectations for future monetary policy — as accommodative as possible. That in turn would keep both short-term and long-term rates low, which would provide extra fuel for the recovery.

What we saw this summer was the exact opposite of that scenario. The monthly payrolls reports were positive, which seemed like good news — except we learned today that the jobs gains they reported were overstated. Meanwhile, the Fed started talking explicitly about tightening monetary policy (the so-called taper), which resulted in a massive spike in long-term interest rates: the 10-year Treasury bond hit 3% yesterday. That move was also, partially, fueled by talk of Larry Summers becoming the next Fed chairman rather than the more dovish Janet Yellen.

On top of that, to make things even worse, the Fed started targeting unemployment at exactly the point at which the headline unemployment rate has never conveyed less information. With today’s employment report, I hope we just stop taking it seriously: the small drop, to 7.3%, came entirely for the wrong reasons. This is the chart we should all be looking at instead:

This is, literally, the very picture of a jobless recovery: the recession ended at the end of the last light-blue column, but the participation rate just kept on falling, while the overall employment-to-population ratio stubbornly refuses to rise from its current miserable levels. Both of them are lower than at any point since women finished their big move into the jobs market, and the Fed must surely take its “full employment” mandate to refer as much to this number as it does to the unemployment figures. (The unemployment statistics in general, and the headline unemployment rate in particular, are misleading mainly because they don’t include discouraged workers who have given up looking for work.)

Today’s jobs report was bad, no two ways about it: no matter how far you reached into the data, there was very little in the way of silver linings. That said, however, the market can look at the data too — with the result that long rates are on their way back down: traders no longer expect tapering to start imminently. On top of that, the most prominent skeptic of quantitative easing, Larry Summers, might not be the lock that we thought he was for Fed chair.

To put it another way: this report is something of an unwind of what we saw this summer. It shows that the reality of the economy was not as good as we thought it was, and that the market probably got ahead of itself in anticipating a taper beginning very soon. We can’t take any solace in the mediocre economy. But if you’re desperate for good news, here it is: at least we know, now, how mediocre the recovery is, especially on the jobs front. And we’re going to stop hobbling ourselves by pushing long-term interest rates inexorably upwards, thereby making that recovery even harder.

COMMENT

Foppe –

No, BLS revisions tend to be pretty noisy, and they go upward as well as downward. Stats here:

http://www.bls.gov/web/empsit/cesnaicsrev.htm

Posted by strawman

Why mortgage rates are weird

Felix Salmon
Sep 5, 2013 21:26 UTC

This time last year, Peter Eavis came out with a pair of columns asking the question: why were mortgage rates so high? Back then, the typical 30-year mortgage cost 3.55% — more than 140bp above prevailing mortgage-bond rates. Given that banks normally lend out at only about 75bp above mortgage-bond rates, said Eavis, mortgage rates should by right have been much lower.

Eavis was pessimistic that market competition would drive rates down: “mortgage rates may not decline substantially from here,” he said, adding that “the 2.8 percent mortgage may never materialize”. Little did he know that he was writing at the very low point for mortgage rates — they have spiked over the course of the past year, and today, reports Nick Timiraos, they’re at 4.73%.

Using Eavis’s benchmark, we’re still pretty much in the same place as we were a year ago: that 4.73% rate is about 130bp more than the yield on Fannie Mae’s current-coupon mortgage bonds. It could be lower, it should be lower, but at least the spread isn’t continuing to widen.*
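The spread arithmetic is easy to check. A minimal sketch, backing the implied bond yield out of the 130bp figure and applying Eavis’s 75bp benchmark for a “normal” lending spread:

```python
# Back out the implied mortgage-bond yield from the figures in the post.
# Rates are in percent; spreads in basis points (1bp = 0.01 percentage point).
mortgage_rate = 4.73    # typical 30-year rate, per Timiraos
spread_bp = 130         # spread over Fannie Mae current-coupon bonds

bond_yield = mortgage_rate - spread_bp / 100
normal_spread_bp = 75   # banks' normal lending spread, per Eavis
fair_rate = bond_yield + normal_spread_bp / 100

print(f"implied bond yield: {bond_yield:.2f}%")        # about 3.43%
print(f"normal-spread mortgage rate: {fair_rate:.2f}%")  # about 4.18%
```

In other words, at Eavis’s normal spread the same bond yields would support a mortgage rate more than half a point lower.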

But in one way, today’s rate is even crazier than the 3.55% rate a year ago. As Timiraos says, this marks the first time ever that the typical 30-year mortgage rate, which comes with the blessing of a government guarantee, is higher than the typical rate on unguaranteed “jumbo” loans. Even in a world of crazy spreads, this is pretty bonkers:

Before the housing bubble burst six years ago, jumbo mortgages over the past two decades typically had rates at least 0.25 percentage point above conforming loans, but that widened sharply after 2007, reaching a peak of 1.8 percentage points in 2008, according to HSH.com, a financial publisher. The rate difference between the two stood at 0.5 percentage point as recently as last November.

Timiraos tries to explain this rationally: if banks hold jumbo loans on their balance sheets, he says, that means the price of the loan isn’t “set by bond markets”. To which I say, pull the other one. Of course the price is set by bond markets. Or, says Timiraos, maybe this whole thing can be explained by looking at the cross-selling opportunities represented by jumbo borrowers: if they come for the mortgage, they’ll stay for the investment advice! Or something. Again, this is highly unpersuasive, especially since there’s no indication that cross-selling abilities have suddenly blossomed in the past year or so.

The real story here, I think, is not about banks at all; rather, it’s about the government, which is desperately trying to extricate itself from its current position as the ultimate source of just about all mortgage finance. In the absence of a formal plan of how to do this, it’s trying a market-based approach: make mortgages expensive, and maybe the banks will take the hint and start offering private-label mortgages at lower, more competitive rates. As Timiraos says, part of the reason mortgage rates are so high is the fees that Fannie and Freddie are slapping on to every loan they buy from lenders.

In a way, that’s what we’re seeing in the jumbo market — when you don’t have to deal with Frannie, rates are lower than when you do. But if the government hopes that expensive mortgages will cause the private sector to stop dealing with Frannie, it should prepare itself for disappointment. As I said in March, whatever Frannie pays for mortgages will simply become the market price for mortgages. The government is just too big not to be the marginal price setter: it speaks volumes that mortgage rates are lower than the government is paying only in the one area where the government isn’t competing.

All of which is to say that if the government wants to shrink Frannie, it’s going to have to do more than tack on a bunch of fees and leave the rest to the market. Five years after the crisis, we’re still waiting for a plan, however, and there’s no indication that we’re going to get one any time soon. Which means that the government is going to remain the dominant monopoly in housing finance for the foreseeable future.

*Update: I’ve now managed to pull the chart: this is the 30-year mortgage rate, minus the yield on the 30-year current-coupon bond from Fannie Mae. As you can see, it has actually tightened back in towards its historical level over the past year: the excess profits that Eavis was complaining about a year ago seem to have largely disappeared.

COMMENT

Matt Levine, in his new job at Bloomberg View, has a good take on this – http://www.bloomberg.com/news/2013-09-06/cheaper-jumbo-loans-just-aren-t-that-weird.html

Posted by realist50

When vultures land in the Hamptons

Felix Salmon
Sep 5, 2013 18:27 UTC

Today’s tale of hedge fund / Hamptons excess comes from Mitchell Freedman at Newsday; if that story is paywalled, you can find pickups in all the usual places. But it’s the Daily Mail which has the best map:

For most readers, this story is just another glimpse into the hedge-fund lifestyle, where one man will spend $120,000 for a one-foot-wide strip of land — not so that he can get beach access (he already has that) but rather to ensure that his next-door neighbor loses his beach access. As the Daily Mail puts it, Kyle Cruz, who owns the house behind Marc Helie, is now “hemmed in”, and can no longer reach the beach directly from his home. Here’s Freedman:

The auction “caused quite a stir,” Thompson said.

Based on reports from staffers who ran the auction, he said, “I gathered one guy really did not want the other one walking over his property to the water.”

Helie’s purchase effectively gives him narrow slivers of property on both the east and west sides of Cruz, who would have to walk on Helie’s property to reach the ocean beach a few hundred feet away.

To a small set of sovereign-debt geeks with long memories, however, it’s not the beach-access politics which jumps out from this story — it’s the name Marc Helie. For back in 1999, Helie was the man who loved to take credit for forcing what was in many ways the first ever sovereign bond default. And Helie’s actions 14 years ago are actually rather similar to what he’s doing now, in the Hamptons.

Sovereign bond defaults are relatively commonplace these days — even Greece got in on the game. But back in 1999, there was something special about sovereign foreign-law bonds (as opposed to loans): they always managed to avoid being restructured when a country defaulted on its debt. Even Russia, in its catastrophic 1998 sovereign default, always remained current on its Eurobonds.

When Ecuador got into fiscal trouble in 1999, then, its first instinct was not to default on its bonds — even though the IMF was rumored to be pushing it to do exactly that. The bonds in question were Brady bonds — restructured loans — which included various guarantees, in the form of built-in Treasury bond collateral, which could be used to make payments if and when Ecuador got into trouble. So Ecuador proposed that it would pay the coupons on the bonds without collateral in full, and it would dip into collateral to make payments on its other bonds, while trying to work out a longer-term solution.

But Ecuador’s tactics were atrocious: the country’s announcement came right in the middle of the IMF annual meetings, the one time of the year when all the world’s emerging-market bond investors converge on the same city at the same time. Those investors were not happy, and it wasn’t long before a vocal group of them, led most visibly by Helie, started agitating for highly aggressive action against the country.

Normally, of course, bondholders don’t want borrowers to default — and Ecuador was hoping that this case would be no different. But rather than accept Ecuador’s deal, which treated different classes of bonds differently, and which was very vague, Helie and other bondholders decided that they would rather force the matter. They discovered that if they could organize 25% of the holders of the affected bonds, and get them to write a very specific letter to the bonds’ fiscal agent in New York, they could accelerate those bonds. Rather than just owing a single coupon payment, Ecuador would then owe the entire principal amount, plus all future coupon payments, immediately.
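The mechanics are worth making concrete. A toy sketch of what acceleration does to the amount immediately due, with all figures invented for illustration (real Brady bonds had amortizing structures and collateral):

```python
# Toy illustration of bond acceleration (all figures invented).
principal = 100.0        # face value per bond
annual_coupon = 6.0      # hypothetical yearly coupon payment
years_remaining = 20     # hypothetical remaining life of the bond

# Before acceleration, the sovereign owes only the next coupon...
due_before = annual_coupon
# ...after acceleration, principal plus all remaining coupons fall due at once.
due_after = principal + annual_coupon * years_remaining

print(due_before, due_after)  # 6.0 220.0
```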

No one expected that Ecuador could pay such a sum, but Helie and the other bondholders just wanted to make things simple. If Ecuador was going to effectively default on certain bondholders, then they would make it official, and force the country into a full-scale bond restructuring, the likes of which the world had never seen. Brady bonds were specifically designed to be very difficult to restructure: any change in the payment terms needed the unanimous consent of bondholders, and there were so many bondholders that unanimous consent was always going to be impossible to find.

Finding 25% of bondholders to block what Ecuador wanted to do, however, was much easier — and that’s exactly what Helie did. Essentially, Ecuador expected that it could just walk down to the beach and do its bond exchange relatively easily. Instead, it found that hedge fund managers like Helie bought up property which would prevent it from doing that. When Helie et al accelerated Ecuador’s bonds, they forced it to enter into a far more elaborate and convoluted restructuring, in much the same way that Kyle Cruz now needs to walk much further to get to the beach.

Helie worked very hard and spent a lot of money on making life as difficult for Ecuador as he possibly could — in violation of the general assumption that bondholders tend to want what’s best for any given debtor nation. His plan worked, too: the exchange that Ecuador eventually unveiled was much more generous than the market expected, and Helie made a lot of money on his bonds. He was also lionized on the front cover of Institutional Investor magazine, under the headline “The Man Who Broke Ecuador”. It was all very welcome publicity for a man who was punching well above his weight: his hedge fund managed only about $10 million, and behind the scenes other, much more established (and much more publicity-shy) hedge funds had done most of the hard work of organizing the acceleration.

Helie was flying high, enjoying all the stories about the young hedge-fund manager with a ponytail and an office above a modeling agency, who was shaking up the world of sovereign debt. But while his hedge fund, Gramercy Advisors, went on to much bigger things, moving out of the small offices in Manhattan and into much larger digs in Connecticut, Helie didn’t last long. He was spending too much time at his beach house, and eventually his partners decided that he wasn’t doing enough work, and effectively kicked him out of the business.

Evidently, however, old habits die hard; Helie was so adamant that he didn’t want Cruz walking past his house to the beach that he spent $120,000 to make Cruz’s life as difficult as possible. It might even be enough to make a hardened hedge-fund manager start to feel a bit of sympathy for the government of Ecuador.

COMMENT

When psychopaths and sociopaths find a job, a vulture fund is perfect. No regrets, no empathy, no thought of the poor or others they might hurt… just greed and more money.

The choice is caring about your neighbour or caring about who has access to the beach you already feel is also your property … just who is surprised which path was taken. He will not be happy until “his” beach is privatized and he can fence it all in.

Posted by youniquelikeme

Why the internet is perfect for price discrimination

Felix Salmon
Sep 3, 2013 22:22 UTC

Price discrimination is one of those concepts that only an economist could love. But the theory is clear: the more that a vendor can discriminate according to willingness to pay, the more value that vendor can add. Rory Sutherland uses air travel as an example: having a mix of classes allows price-sensitive people to pay low fares, while the rich have a large number of flights to choose from. On top of that, he could have added, airlines are extremely good at exercising price discrimination within classes, so that two people receiving identical service might be thousands of dollars apart in the amount they paid for their tickets.

Airlines have always used a multitude of proxies to determine customers’ willingness to pay: how far in advance people are booking, whether they’re going one-way or return, whether they’re staying a Saturday night, and so on and so forth. But nowadays, online, the amount of information that companies have about their customers has never been higher. And the obvious way to monetize that information is through price discrimination: charge the people with high willingness to pay more money than those who will only buy if the price is low. Adam Ozimek explains:

The more information we have, the more profitable first degree price discrimination will be. As big data and online buying increases the information that business have on us, the ease and profitability of first degree price discrimination will become difficult to resist.

Ozimek says that this kind of price discrimination will be “creepy, invasive, and unfair” — but it will at the same time result in superior products. He doesn’t mention this, but the obvious place for this kind of price discrimination is newspaper paywalls. The FT already does it: there’s no real list price for an FT subscription, and the paper basically just charges whatever it thinks it can get away with, given what it knows about you.

For newspapers with more price-sensitive readers, smart price discrimination is even more important. Ideally, you’d charge every reader just a little bit less than they were willing to pay — and you’d give your content away to the people who were willing to pay nothing. And here’s the thing: newspapers know a lot about their readers — and especially the regular readers who come back often enough to hit paywalls. To take a simple example, they know if those readers are looking at local sports reports, or whether they’re looking at general entertainment news. The former group will be much more willing to pay than the latter. But they’re also quite likely to know a lot more about you than that, including — if you’re someone who’s ever had a print subscription — exactly where you live.
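That ideal, charging each reader a shade under their willingness to pay and nothing to the rest, is textbook first-degree price discrimination. A minimal sketch, where the willingness-to-pay estimates and the 10% haircut are invented for illustration:

```python
# First-degree price discrimination: quote each reader a price just below
# their estimated willingness to pay; give the content away below zero.
def quote_price(willingness_to_pay: float, haircut: float = 0.10) -> float:
    """Price a shade under the reader's estimated willingness to pay."""
    if willingness_to_pay <= 0:
        return 0.0  # free access beats losing the reader entirely
    return round(willingness_to_pay * (1 - haircut), 2)

# Hypothetical per-reader estimates, e.g. inferred from which sections they read.
readers = {"local-sports fan": 20.0, "entertainment browser": 4.0, "drive-by": 0.0}
for name, wtp in readers.items():
    print(name, quote_price(wtp))
```

The hard part, of course, is the estimation, not the pricing rule.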

The NYT will shortly roll out a new, lower-priced product, giving some subset of its news to people who don’t want to pay for the whole thing. But that’s just going further in the wrong direction. Already the pricing is sending all manner of bad messages: access to nytimes.com plus a phone app is $15 every four weeks, access to the website plus the tablet app is $20, and access to the website plus both phone and tablet apps is $35. Which logically means that access to the website itself is worthless. (If A+B=15, and A+C=20, and A+B+C=35, then A=0.)
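The parenthetical algebra checks out; a minimal sketch, using only the three bundle prices quoted above:

```python
# Implied component prices of the NYT bundles.
# A = website, B = phone app, C = tablet app (labels are illustrative).
web_plus_phone = 15    # A + B, dollars per four weeks
web_plus_tablet = 20   # A + C
web_plus_all = 35      # A + B + C

tablet = web_plus_all - web_plus_phone    # C = 35 - 15 = 20
phone = web_plus_all - web_plus_tablet    # B = 35 - 20 = 15
website = web_plus_phone - phone          # A = 15 - 15 = 0

print(website, phone, tablet)  # 0 15 20
```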

The NYT is in the process of building new products, for which it can then charge varying amounts of money. This is a lot of work, and has reportedly created tensions between the CEO and the editor when some of the new products report up to the former rather than the latter, even when they’re being built by journalists. Instead of thinking in terms of creating a wide range of different products, then, maybe the NYT should just try to do the best journalism it can, and then sell that journalism at a wide range of different prices.

That would be a very tough decision to make: consumers, as a rule, viscerally dislike price discrimination. If you know that your friends and neighbors are getting exactly the same product that you are, but are paying a different price for it, then someone is going to feel ripped off. Still, at the margin, the NYT can start to implement something like this without having to charge different prices. For instance, it can hold off on the paywall for certain readers — the ones with the lowest willingness to pay — while putting it up quite aggressively for others, such as perhaps the ones who spend a lot of time on the business pages. And if you price discriminate by giving away discount codes, few people object at all.

For companies which aren’t as high-profile as the NYT, price discrimination is a no-brainer. Amazon was doing it as long ago as 2000, and Uber does it every day, by charging extremely high headline fees, and then giving away various discount coupons like confetti, to carefully-targeted audiences.

The internet is a zone where companies sell products with zero marginal cost, and with a lot of information about exactly who their audiences are. In that world, it would be weird if they didn’t try to charge different prices to different customers. We’re used to the freemium model, which is very basic price discrimination. In future, expect that model to become a lot more sophisticated.

COMMENT

You forget to mention that McKinsey is advising NYT on the new products. So it is McKinsey which is valuing NYT at zero and access to different devices as valuable.

Posted by 2paisay

Chart of the day, Microsoft edition

Felix Salmon
Sep 3, 2013 15:34 UTC

Many thanks to Ben Walsh for pulling together the data for this chart. The numbers speak for themselves, really: over the course of Steve Ballmer’s tenure as Microsoft CEO, the company’s stock price has gone nowhere, its market share has plunged — but its headcount has more than trebled. And that’s before adding another 32,000 employees as part of the Nokia acquisition.

Ben Thompson has a very smart analysis of Microsoft’s move here:

Guy English has already characterized Ballmer’s disastrous reorganization as a straitjacket for the next CEO; adding on a mobile phone business that Microsoft probably should abandon is like attaching an anchor to said straitjacket and tossing the patient into the ocean. It will be that much more difficult for the next CEO to look at Windows Phone rationally.

As Henry Blodget notes, Windows Phone is now going to account for a good quarter of Microsoft’s employees; integrating those two huge and very different cultures is going to take an enormous amount of effort, with no guarantee of success. And as Thompson notes, this acquisition essentially forces Microsoft to double down on its strategy (which has signally failed to date) of competing head-to-head with Android and iOS.

There is really zero consumer demand for an alternative smartphone OS: even the ultrageeks fell well short of raising the $32 million they needed to develop a version of Ubuntu for phones. Microsoft is pretty good at giving big organizations what they want — Windows and Office, the two great powerhouses which have between them accounted for all of Microsoft’s profits over the years. And somewhere, deep inside its institutional memory, it knows that once upon a time it came late to the browser game, entered with a big splash, and ended up demolishing Netscape.

The problem is that this second-mover strategy doesn’t work against Google and Apple. It doesn’t work in search, it doesn’t work in tablets, it doesn’t work in phones. (It has arguably worked in gaming systems, which is something of a Pyrrhic victory, given the way in which games are going mobile.) Nokia is a failing company — if Microsoft hadn’t swept in to save it, it would probably have gone bust pretty quickly — and one of the reasons that it’s failing is that no one wants to buy a Windows phone. And that’s especially true in the fastest-growing market of all.

Nokia’s fall has been most spectacular in Asia, a region that its phones once dominated. As recently as 2010, the company had a 64 percent share of the smartphone market in China, according to Canalys, a research firm. By the first half of this year, that had plunged to 1 percent.

With this acquisition, Nokia chief Stephen Elop becomes heir apparent to Ballmer. Elop knows how to navigate Microsoft’s poisonous bureaucracy, having worked there for many years, but he also counts as an outsider, able to bring in fresh ideas. He also — obviously — knows mobile, which is the single factor determining Microsoft’s future: if the company can navigate the move from the desktop to mobile, it will succeed; if it can’t, it will fail.

But the chart foretells how this game is going to play out: Microsoft is now simply too big to turn around. Elop saved Nokia in much the same way as John Thain saved Merrill Lynch, by selling a fundamentally worthless company to a much larger strategic buyer for billions of dollars. That strategy isn’t going to work for Microsoft. Probably, there is no strategy which would work out for Microsoft. The company’s heyday is far in the past, now; all that the new CEO can hope for is to maximize profits as it slowly, inexorably, declines.

COMMENT

“over the course of Steve Ballmer’s tenure as Microsoft CEO… its market share has plunged.” Oh yes, its market share for operating systems worldwide is now down to a dismal 91% for Windows. Pathetic. For Office type products, it’s even higher. MSFT will have about $2 billion in cash flow this month, just like it does every month. Mr. Ballmer has been an unqualified disaster.

Posted by Ditman

Don’t cry for “the little guy on Wall Street”

Felix Salmon
Sep 2, 2013 23:08 UTC

This happens every time something goes wrong on the stock market — every time there’s a flash crash, or a high-frequency trading firm blows up, or the Nasdaq is forced to go dark for three hours. A bunch of editors who don’t really know anything about HFT ask for stories about it, and they all want the same thing: a tale of how a small group of high-speed trading shops, armed with state-of-the-art computers, are using their artificial information advantage, and their lightning-fast speed, to extract enormous rents from the little guy.

The result is a spate of stories like Rob Curran’s latest piece for Fortune, which appears under the headline “Make $377,000 trading Apple in one day”. Of course, there are lots of ways to do that: one way would be to buy about 77,000 shares of Apple, for $37.7 million, and then watch them rise by 1%. But Curran reckons he’s found a better way — indeed, an easy profit which involves no risk at all. What’s more, this method is particularly evil, since apparently all of the profits that it generates are coming straight out of your pocket.

Curran’s story is based in large part on a “study” by Berkeley professor Terrence Hendershott. This study is never named, or quoted, or linked to, and I can’t find it on Hendershott’s web page, so I’m not going to blame Hendershott for any of the content of Curran’s article. Specifically, for instance, Curran’s sub-hed says that “A Berkeley professor finds out just how much a certain type of high frequency trading costs the average investor”. I suspect that Hendershott’s study actually purports to do no such thing*, and that “average investors” aren’t even mentioned in it. I say this because Hendershott is a smart guy, and I can’t believe that this kind of thing fairly summarizes any of his work:

It’s well known that some high-frequency computer geeks at firms like Getco LLC take advantage of latency, just as it’s well known that some Blackjack-playing computer geeks count cards in Las Vegas casinos. But it’s never been clear how much this type of trading costs the little guy on Wall Street.

Terrence Hendershott, a professor at the Haas business school at the University of California at Berkeley, wanted to find out. He was recently given access to high-speed trading technology by tech firm Redline Trading Solutions. His test exposes the power of latency arbitrage the way Ben Mezrich’s Bringing Down the House exposed the power of card counting.

According to his study, in one day (May 9), playing one stock (Apple), Hendershott walked away with almost $377,000 in theoretical profits by picking off quotes on various exchanges that were fractions of a second out of date. Extrapolate that number to reflect the thousands of stocks trading electronically in the U.S., and it’s clear that high-frequency traders are making billions of dollars a year on a simple quirk in the electronic stock market.

One way or another, that money is coming out of your retirement account. Think of it like the old movie The Sting. High-speed traders already know who has won the horse race when your mutual fund manager lays his bet. You’re guaranteed to come out a loser. You’re losing in small increments, but every mickle makes a muckle — especially in a tough market.

This is deeply confused. For one thing, there’s much more to HFT than simple latency arbitrage of the kind that Curran is describing here. And in any case, this kind of strategy just doesn’t work. To see why, just do the extrapolation Curran’s asking you to do. If high-frequency traders are making $377,000 per stock per day, then that would add up — multiply by 5,000 stocks, and by 250 days per year — to total profits of almost $500 billion per year, or about 3% of America’s GDP. And that doesn’t even include the extra profits made by high-frequency trading in other asset classes, like foreign stocks, or currencies, or interest rates.
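The arithmetic is easy enough to check. In the sketch below, the stock count, trading-day count, and GDP figure are my own round-number assumptions, not Curran's exact inputs:

```python
# Back-of-the-envelope check on the extrapolation Curran invites.
# Assumptions (mine): ~5,000 listed US stocks, ~250 trading days a year,
# and 2013 US GDP of roughly $16.7 trillion.
per_stock_per_day = 377_000          # Curran's Apple figure, in dollars
stocks = 5_000
trading_days = 250

annual_profit = per_stock_per_day * stocks * trading_days
print(f"Implied annual HFT profit: ${annual_profit / 1e9:.0f} billion")

us_gdp_2013 = 16.7e12                # approximate, in dollars
print(f"Share of US GDP: {annual_profit / us_gdp_2013:.1%}")
```

The implied figure comes out around $471 billion a year, or nearly 3% of GDP, before you even count other asset classes.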

Curran’s number, in other words, doesn’t pass the smell test.

Note that Hendershott’s one-day profit was “theoretical” — Curran never asks whether Redline in practice makes anything like that amount of money, or, if it doesn’t, why not.

In the real world, it should probably go without saying, hundreds of billions of dollars in annual risk-free profits aren’t just hanging from trees, waiting to be plucked. The idea behind latency arbitrage is simple: you’re essentially trying to buy or sell at yesterday’s prices, in the knowledge of where the price is today. (Except, we’re talking about a time lag measured in milliseconds, rather than days.) If you were to actually enter the market with a simple latency-arbitrage algorithm like this one, however, you would almost certainly lose your shirt in no time: a thousand other algobots would immediately recognize your pattern, and pick you off systematically.

But Curran seems to be convinced that Hendershott’s theoretical profits correlate to actual profits in reality: latency arbitrage alone, he says, is worth billions of dollars to high-frequency traders. What’s more, he says, those billions of dollars are “coming out of your retirement account”.

I have to say I’m weirdly impressed by Curran’s sophisticated argument for why these theoretical profits must be costing small investors billions of dollars a year: “every mickle”, we’re told, “makes a muckle”. This argument has the advantage of being unfalsifiable — but, sadly, it’s also complete nonsense. (I especially love the idea that mickles are more likely to become muckles “in a tough market”, whatever that’s supposed to mean.)

The fact is that “the little guy” has never had better execution than he has right now. To oversimplify wildly, let’s divide Wall Street into two groups: the sell side, the price-makers who provide liquidity, and the buy side, the price-takers, who simply decide whether to accept the market’s offer or not. If you’re looking at the current bid-offer spread on a stock (also known as NBBO, for national best bid/offer), then the bid is the best current price at which a sell-side firm will buy the stock from you, while the offer is the price you’ll have to pay to buy it. The difference between the two prices, these days, is lower than it has ever been, and small investors can normally buy or sell as much of any given stock as they like, right at NBBO, with execution in a fraction of a second. That wasn’t the case ten years ago.
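For concreteness, here’s a toy sketch of how the NBBO is assembled — the best bid is the highest bid across exchanges, the best offer the lowest offer. The exchange names and prices here are invented for illustration:

```python
# Toy illustration of the NBBO (national best bid/offer).
# Quotes per exchange; all numbers invented for illustration.
quotes = {
    "NYSE":   {"bid": 100.01, "offer": 100.04},
    "Nasdaq": {"bid": 100.02, "offer": 100.03},
    "BATS":   {"bid": 100.00, "offer": 100.05},
}

best_bid = max(q["bid"] for q in quotes.values())      # best price to sell at
best_offer = min(q["offer"] for q in quotes.values())  # best price to buy at
spread = best_offer - best_bid

print(f"NBBO: {best_bid:.2f} / {best_offer:.2f}, spread = {spread:.2f}")
```

A penny-wide spread like this, available to anyone at retail size, is exactly the kind of execution small investors couldn’t get a decade ago.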

Looked at through Curran’s eyes, the “little guy” is always a price taker. He doesn’t go out there into the market posting offers and waiting to see whether anybody will hit them; he just looks to see what offers there are, and if he likes the price being offered, he takes it. That kind of investor — and there are a lot of them out there — has never had it so good, precisely because there are so many HFT shops these days, competing to provide liquidity to the buy side and to receive the small sums of money that exchanges pay to the price-makers rather than the price-takers.

High frequency trading, along with its close relative decimalization, has been fantastic for price takers. They get better prices, they get them faster than ever, and the transaction costs associated with a “round trip” — buying a position and then selling it again — have never been lower. There’s some debate about whether it’s easier or harder than it used to be to trade in size; the jury’s still out on that one, but technology like dark pools has helped there, too. And if you’re big, then there is no shortage of VWAP algorithms and the like which you can use to try to beat the HFT bots at their own game.

Curran disagrees, and cites another paper — this one by Michael Wellman and Elaine Wah of the University of Michigan. (He didn’t link to this one, either, but a bit of googling found it here.) “Like others before them,” writes Curran, “Wellman and Wah’s study found latency arbitrage was eating investor profits.”

In fact, the Wellman-Wah paper finds no such thing: it’s not an empirical paper at all, and makes no attempt whatsoever to quantify investor profits, be they real or foregone. Instead, it’s an entirely theoretical thought experiment, where an “infinitely fast arbitrageur profits from market fragmentation” at the expense of “zero-intelligence trading agents”.

It’s easy to agree with Wellman and Wah that if there were a lot of risk-free latency arbitrage going on, then the victims would be “zero-intelligence trading agents”, or, as Larry Summers likes to call them, noise traders. But there’s a lot more to HFT than “rent space in a co-located server rack, find risk-free latency arbitrage opportunities, profit!” And while there are, still, idiots (look around), there are fewer of them than there used to be during the go-go day-trading days of the late 1990s. They learned their lesson during the dot-com bust, and with the rise of HFT there are very few small investors left who really believe they’re competing on a level playing field.

Note that it’s traders who lose money when HFT bots make money; if you’re a buy-and-hold investor, you really don’t care what’s going on behind the scenes at all. You just want the best execution for your orders — and right now, in general, execution for small investors is excellent. If you’re day-trading leveraged ETFs, on the other hand, then you’re basically just gambling: intraday moves are essentially random. Those people, over time, will end up losing money to the high-frequency traders.

Still, Wellman and Wah — and Curran — are concerned enough about the plight of the zero-intelligence trading agents that they propose a solution to this problem:

The authors suggest that the perpetual motion tape be replaced by a stop-motion tape. Instead of a continuous, free-for-all market, the session would take the form of a series of lightning-fast-auctions at intervals of a few milliseconds. This would give exchanges a reasonable amount of time to disseminate information (most only take a few thousandths of a second to catch up on the “direct access” feeds). It would also give traders a reasonable amount of time to place bids and offers on a given stock. The average investor would not see the difference because prices on active stocks would still be changing many times per second.

I’ve proposed something similar myself — last year, I said that a stock market where there was a mini-auction for every stock once per second would cause no measurable harm to investors, and would make the stock market as a whole less brittle. I’m no fan of HFT, and a discontinuous market would indeed put a stop to most of its excesses.
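The mechanics of such a call auction are simple. Here’s a minimal sketch, assuming limit orders only: collect everything that arrives during the interval, then pick the single price at which the most shares can trade. All orders and prices below are invented for illustration:

```python
# Minimal sketch of a once-per-interval call auction with limit orders:
# choose the clearing price that maximizes the volume that can trade.
# Orders are (limit price, size); all numbers invented for illustration.
bids = [(100.05, 300), (100.03, 200), (100.01, 500)]    # buyers
asks = [(100.00, 400), (100.02, 300), (100.04, 200)]    # sellers

def matched_volume(price):
    demand = sum(size for limit, size in bids if limit >= price)
    supply = sum(size for limit, size in asks if limit <= price)
    return min(demand, supply)

candidates = sorted({p for p, _ in bids} | {p for p, _ in asks})
clearing_price = max(candidates, key=matched_volume)
print(f"Clearing price {clearing_price:.2f}, "
      f"volume {matched_volume(clearing_price)} shares")
```

Run once a second per stock, a mechanism like this leaves nothing for a speed advantage to exploit within the interval, which is the whole point.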

But what kind of a world do we live in when someone like Curran can claim with a straight face that “a few milliseconds” is “a reasonable amount of time to place bids and offers on a given stock”? Or where being able to see a stock price “changing many times per second” is considered an important feature of any stock market? The answer actually tells you a lot about the real financial victims of HFT. If you’re just a bystander, looking at stock prices changing many times per second, then you are not losing money to the algobots; in fact, you probably benefit from them, when you make a trade. If, on the other hand, you are not a high-frequency trader but you are the kind of person who thinks that “a few milliseconds” is “a reasonable amount of time to place bids and offers on a given stock”, well, then in that case you might indeed be a victim of the HFT crew: you’re trying to compete with them, and you’re probably losing.

The point here is that making improbable claims about the costs of HFT to small investors is not going to get you very far. The real costs of HFT are found in fat tails and systemic risks and the problems that are endemic to ultra-complex systems. It would certainly be rhetorically very neat and easy if we could plausibly declare that small investors are being hurt by high-frequency traders. But the truth is that they’re not: they’re actually being helped by HFT. It’s the market as a whole which is being put at risk by these algorithms, not the “little guy”. And while I’d welcome a move to a discontinuous market, I don’t for a minute think that such a move would save small investors any money at all, let alone billions of dollars.

*Update: Eric Hunsader has found a cached version of the paper, and — as I suspected — it doesn’t say anything like what Curran said it says. Hendershott did not apply any kind of trading strategy to Apple’s price history on May 9, and did not come up with $377,000 in theoretical profits by doing so. Here’s the relevant bit of the paper, which tries to recreate a “synthetic” NBBO and then compare it to the official SIP NBBO:

For 3.51 milliseconds of each second the SIP NBBO and synthetic NBBO differ. This could result in a buy or sell market order going to the wrong market roughly half that often: 0.175% of the time. Figure 5 shows that the average price dislocation is $0.034. Simply multiplying this times the percentage of the time a dislocation occurs yields an expected price dislocation of $0.006 per 100 shares for a market order entered randomly throughout the day. Multiplying this dollar amount by Apple’s May 9 trading volume of 17,167,989 shares yields $942, representing 0.001 of a basis point of dollar volume traded. This suggests that investors randomly routing market orders are unlikely to face meaningful costs due to data latency.

Yep, the “cost to the little guy” is not $377,000 per day; in fact, according to the paper, it’s just $0.006 per 100 shares, or one thousandth of a basis point. Which adds up to a whopping $942 per day. None of which can be captured by latency arbitrage. Hendershott’s conclusion is not that the little guy is losing out on billions: it’s that “investors randomly routing market orders are unlikely to face meaningful costs due to data latency”.
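You can redo the paper’s expected-cost arithmetic yourself. The inputs below are taken from the quoted passage; the small gap between the result and the paper’s $942 presumably comes from rounding in the quoted intermediate figures:

```python
# Re-running the expected-cost arithmetic from the quoted passage.
# Inputs are the paper's quoted figures; any residual gap versus the
# paper's $942 is presumably rounding in their intermediate numbers.
dislocated_fraction = 0.00175    # order hits the wrong market ~0.175% of the time
avg_dislocation = 0.034          # dollars per share, when it happens
volume = 17_167_989              # AAPL shares traded on May 9

cost_per_100_shares = dislocated_fraction * avg_dislocation * 100
daily_cost = dislocated_fraction * avg_dislocation * volume

print(f"Expected cost: ${cost_per_100_shares:.4f} per 100 shares")
print(f"Across the day's volume: ${daily_cost:,.0f}")
```

Either way, the order of magnitude is about a thousand dollars a day across all of Apple’s volume — not $377,000.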

So where does the $377,000 figure come from? It comes from hypothetical latency traders trying to pick off other (equally hypothetical) active traders who do things like place orders in dark pools at the NBBO midpoint:

Assume BATS updates AAPL bid price from $530 to $531, and the ask price remains at $532. This changes the mid-price from $531 to $531.5. In the first 1.5 milliseconds, slower traders are not aware of the price change. If some such regular traders have placed an order to trade at mid-price in a dark pool, then a faster trader can buy the stock at $531 in dark pool when the synthetic NBBO gets updated. After 1.5 milliseconds, the trader can sell it for $531.5 in the dark pool. In this case the trade gains 50% of the price dislocation. Dark pools represent roughly 11% of trading volume, corresponding to 1,888,478 share of AAPL on May 9. If half of the average dislocation of 0.034 cents is captured on this volume then the fast trader would make a profit of $376,900 in a single stock on a single day. While Apple is one of the highest-volume stocks and this almost certainly represents an upper bound on the profits of strategies based on latency, the dollar figure illustrates the possible magnitude of profits and costs stemming from latency for traders continuously in the market.

Or, in English: if there are people trading continuously in the market who don’t have low-latency feeds, and those people are using the NBBO to determine their trading strategy, then those people can get picked off by hypothetical HFT bots. But clearly, those people are not “the little guy on Wall Street”, and no one in reality is making anything like $377,000 a day from HFT.
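Here’s a toy sketch of the pick-off mechanics the paper describes: a dark-pool midpoint order is pegged to a stale NBBO for a moment after a quote update, and a faster trader trades against it at the old midpoint. The prices follow the paper’s AAPL example; the order size is my invention:

```python
# Toy sketch of the dark-pool pick-off in the paper's AAPL example.
# Prices from the quoted passage; the order size is hypothetical.
stale_bid, ask = 530.0, 532.0
new_bid = 531.0                       # BATS raises its bid

stale_mid = (stale_bid + ask) / 2     # what the slow trader's peg still uses
new_mid = (new_bid + ask) / 2         # the true midpoint after the update

size = 100                            # shares, hypothetical
# The fast trader buys at the stale midpoint and unwinds at the fresh one,
# capturing half of the $1.00 dislocation per share.
profit = (new_mid - stale_mid) * size
print(f"Picked off for ${profit:.2f} on {size} shares")
```

The victim in this sketch has to be someone resting pegged midpoint orders in a dark pool — an active trading operation, not a retail investor hitting the NBBO.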

In fact, even the tiny $942 figure doesn’t represent HFT profits, it just represents potential losses for small investors. Curran has completely misrepresented Hendershott’s paper. His entire story, including the headline, is basically just false.

Update 2: The paper itself is still hosted on Hendershott’s website here, he just doesn’t link to it from his list of publications.

COMMENT

Thanks for a very informative article. I was under the impression that the main problem with HFT was that it had wiped out traditional market-makers, so no one is on the hook to provide meaningful quotes during times of market disruption. So all liquidity can suddenly dry up and cause a “flash crash”.

I guess that idea is inherent in the following quote?
“The real costs of HFT are found in fat tails and systemic risks and the problems that are endemic to ultra-complex systems.”

I’m trying to get my head around this topic. Could one say that this is a failure of regulators, who allow HFTs to make profits as market makers, but don’t force them to take on the associated obligations?

Posted by ADZimm