When large-scale complex IT systems break

By Felix Salmon
August 1, 2012

It’s rogue algo day in the markets today, which sounds rather as though the plot to The Fear Index has just become real, especially since the firm at the center of it all is called The Dark Knight, or something like that. At the heart of it, however, is something entirely unsurprising: weird things happen when you get deep into the weeds of high-frequency trading, a highly complex system which breaks in entirely unpredictable ways.

In fact, it’s weirder than that: HFT doesn’t just break in unpredictable ways, but works in unpredictable ways, too. Barry Ritholtz has an excerpt from Frank Partnoy’s new book, Wait, all about an HFT shop in California called UNX:

By the end of 2007, UNX was at the top of the list. The Plexus Group rankings of the leading trading firms hadn’t even mentioned UNX a year earlier. Now UNX was at the top, in nearly every relevant category…

Harrison understood that geography was causing delay: even at the speed of light, it was taking UNX’s orders a relatively long time to move across the country.

He studied UNX’s transaction speeds and noticed that it took about sixty-five milliseconds from when trades entered UNX’s computers until they were completed in New York. About half of that time was coast-to-coast travel. Closer meant faster. And faster meant better. So Harrison packed up UNX’s computers, shipped them to New York, and then turned them back on.

This is where the story gets, as Harrison put it, weird. He explains: “When we got everything set up in New York, the trades were faster, just as we expected. We saved thirty-five milliseconds by moving everything east. All of that went exactly as we planned.”

“But all of a sudden, our trading costs were higher. We were paying more to buy shares, and we were receiving less when we sold. The trading speeds were faster, but the execution was inferior. It was one of the strangest things I’d ever seen. We spent a huge amount of time confirming the results, testing and testing, but they held across the board. No matter what we tried, faster was worse.”

“Finally, we gave up and decided to slow down our computers a little bit, just to see what would happen. We delayed their operation. And when we went back up to sixty-five milliseconds of trade time, we went back to the top of the charts. It was really bizarre.”
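The speed-of-light arithmetic behind that excerpt is easy to sanity-check. Here’s a rough sketch in Python (my own illustrative numbers, not Partnoy’s): assuming a straight-line coast-to-coast distance of roughly 4,000 km and a signal moving through optical fiber at about 200,000 km/s, round-trip propagation alone comes to a few tens of milliseconds, the same order of magnitude as the thirty-five milliseconds UNX reportedly saved by moving its machines east.

```python
# Rough sanity check (illustrative assumptions, not figures from the article):
# how much of a ~65 ms round trip could plausibly be coast-to-coast travel?

DISTANCE_KM = 4_000             # assumed straight-line LA-to-NY distance
FIBER_SPEED_KM_PER_S = 200_000  # assumed signal speed in fiber (~2/3 of c)

one_way_ms = DISTANCE_KM / FIBER_SPEED_KM_PER_S * 1_000
round_trip_ms = 2 * one_way_ms

print(f"one-way propagation:    {one_way_ms:.0f} ms")     # ~20 ms
print(f"round-trip propagation: {round_trip_ms:.0f} ms")   # ~40 ms

# Real fiber routes are longer and add switching delays, so this is a
# lower bound; it is nonetheless of the same order as the ~35 ms UNX
# saved by relocating its servers to New York.
```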

Partnoy has a theory about what’s going on here — something about “optimizing delay”. But that sounds to me more like ex-post rationalization than anything which makes much intuitive sense. The fact is that a lot of the stock-trading world, at this point, especially when it comes to high-frequency algobots, operates on a level which is simply beyond intuition. Pattern-detecting algos detect patterns that the human mind can’t see, and they learn from them, and they trade on them, and some of them work, and some of them don’t, and no one really has a clue why. What’s more, as we saw today, the degree of control that humans have over these algos is much more tenuous than the HFT shops would have you believe. Knight is as good as it gets, in the HFT space: if they can blow up this badly, anybody can.

I frankly find it very hard to believe that all this trading is creating real value, as opposed to simply creating ever-larger tail risk. Bid-offer spreads are low, and there’s a lot of liquidity available on a day-to-day basis, but it’s very hard to put a dollar value on that liquidity. Let’s say we implemented a financial-transactions tax, or moved to a stock market where there was a mini-auction for every stock once per second: I doubt that would cause measurable harm to investors (as opposed to traders). And it would surely make the stock market as a whole less brittle.

It’s worth recalling what Dave Cliff and Linda Northrop wrote last year:

The concerns expressed here about modern computer-based trading in the global financial markets are really just a detailed instance of a more general story: it seems likely, or at least plausible, that major advanced economies are becoming increasingly reliant on large-scale complex IT systems (LSCITS): the complexity of these LSCITS is increasing rapidly; their socio-economic criticality is also increasing rapidly; our ability to manage them, and to predict their failures before it is too late, may not be keeping up. That is, we may be becoming critically dependent on LSCITS that we simply do not understand and hence are simply not capable of managing.

Today’s actions, I think, demonstrate that we’ve already reached that point. The question is whether we have any desire to do anything about it. And for the time being, the answer seems to be that no, we don’t.
