Algorithmic trading and market-structure tail risks
I’m on Fresh Air today, talking about the Wired article on algorithmic trading which I wrote with Jon Stokes. Norm Cimon, who wrote a very interesting paper on the limits of financial databases, read it, and emailed with an important point:
The marketplace is, in mathematical parlance, a non-linear iterated discrete dynamical system. Given the introduction of all that computing power, however, it’s one that can now crash and burn (really a transition to a different portion of the state space) in an instant.
There’s a real lack of understanding about such systems. They are not predictable because they require perfect knowledge of the state at any given time and that is, of course, not possible. Trajectories for systems like this can transition instantaneously, not because there’s something wrong with them but because they’ve been wired up to operate that way. There’s been very little discussion which isn’t all that surprising. I don’t think there’s much contact between those trained in dynamical systems theory and Wall Street. That’s too bad. The quants would have been wise to spend a little less time on probability and statistics, and a little more on nonlinear dynamics.
On second thought, it wouldn’t have done them much good, since it’s really a qualitative as opposed to a quantitative science. It will give you a great idea of what might potentially happen under various system forcings, and no predictive power to say when.
The University of Pennsylvania’s Michael Kearns is briefly quoted in the piece, but I actually talked to him for quite some time about these issues. He’s a real expert on these matters, who’s done a lot of work on Wall Street, and he’s very worried about issues of market structure. This is probably as good a place as any to put a few chunks of the interview which fell onto the cutting-room floor, as it were.
Here’s some of what Kearns told me. It’s all pretty much verbatim from our conversation, and more or less self-explanatory. If you like geeking out about market-structure tail risks, there’s lots of interesting stuff here.
There’s a growing understanding and belief on Wall St, especially intraday, that there’s a strong game-theoretic aspect to the market: the performance of a given strategy depends on what other strategies are trading in the market at the same time. My payoff is a function of what I do and what the other players in the game do.
The concepts of strategic interaction are important. Game theory is hard to understand even in simple cases. And now strategies are adaptive, so it’s especially difficult. I’m not frightened by it, but it’s a legitimate concern. People who build good machine learning algos have a fair amount of knowledge about what they’re doing, but they’re still adaptive. It’s natural to have the model retrain itself each night based on each day’s data. Humans might not know what incremental modifications today’s data introduced in this adaptive process. If everybody’s doing this, you can easily imagine various effects. That could be a bad thing from a stability standpoint in the markets.
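The nightly-retraining concern can be made concrete with a toy sketch. Everything here is hypothetical (the single-parameter "strategy," the blending rule, the synthetic data); the point is only that a model's parameter drifts a little each night, in increments nobody inspects:

```python
import random

random.seed(0)

def fit_slope(xs, ys):
    """Ordinary least squares slope through the origin: today's 'retrain'."""
    num = sum(x * y for x, y in zip(xs, ys))
    den = sum(x * x for x in xs) or 1.0
    return num / den

# The strategy's single parameter starts at some hand-tuned value...
beta = 0.5
history = [beta]

# ...then drifts as each day's data is folded in.  No human reviews
# these increments, which is exactly the concern Kearns raises.
for day in range(5):
    signal = [random.gauss(0, 1) for _ in range(100)]
    returns = [0.3 * s + random.gauss(0, 0.5) for s in signal]
    daily = fit_slope(signal, returns)
    beta = 0.8 * beta + 0.2 * daily  # blend yesterday's model with today's fit
    history.append(beta)

print(history)  # the parameter wanders a little each day
```

If every firm runs some loop like this on the same day's data, the same increments get folded into everyone's models at once.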
Our financial markets have become a largely automated adaptive dynamical system, with feedback. Furthermore, a dynamical adaptive system with feedback is challenging ordinarily, but this one is also strategic. A flight controller is dynamical, but not strategic. Add an adversarial environment, and there’s no science I’m aware of that’s up to the task of understanding the potential implications of that system.
The quant meltdown demonstrated the unexpectedly high correlation between quant strategies that believed they were doing different stuff from each other. Under ordinary circumstances these correlations weren’t so apparent. When we all decide to exit our positions at the same time, we’re going to painfully realize our correlation.
There’s some echo of that in adaptive algos: if we’re all using somewhat similar adaptive algos on the same data, then the data itself correlates us. If we were forbidden from backtesting, that would help: that’s the way it was in the old days. Pre computers, there was no backtesting. You had a hunch, and you played it. Now, with shared data, this provides an opportunity to correlate our behavior without realizing it.
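Here is a minimal illustration of that coupling, with entirely made-up strategies and parameter grids: two "independent" firms each pick the momentum lookback that backtests best on the same shared history, and we then measure how often their resulting positions agree.

```python
import random

random.seed(1)

# One shared price history that every firm backtests against.
shared = [random.gauss(0, 1) for _ in range(400)]

def backtest_pnl(returns, k):
    """P&L of a sign-of-trailing-sum momentum rule with lookback k."""
    total = 0.0
    for t in range(20, len(returns) - 1):
        pos = 1 if sum(returns[t - k:t]) > 0 else -1
        total += pos * returns[t + 1]
    return total

def positions(returns, k):
    """The daily +1/-1 positions that lookback k would have held."""
    return [1 if sum(returns[t - k:t]) > 0 else -1
            for t in range(20, len(returns))]

# Two firms search different grids, but over the same shared history.
grid_a, grid_b = [2, 5, 10, 20], [3, 5, 12, 20]
k_a = max(grid_a, key=lambda k: backtest_pnl(shared, k))
k_b = max(grid_b, key=lambda k: backtest_pnl(shared, k))

# Fraction of days on which the two strategies hold the same position.
agree = sum(p == q for p, q in zip(positions(shared, k_a),
                                   positions(shared, k_b))) / (len(shared) - 20)
print(k_a, k_b, round(agree, 2))
```

Neither firm ever sees the other's code; the shared dataset alone is what pulls their choices, and hence their positions, toward each other.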
At any given moment, there aren’t that many ideas that make money. So the firms that have survived are going to be the ones that discovered those ideas, and they’re doing similar things. The quant meltdown revealed the high correlation between quant trading strategies; the flash crash was almost more of a microstructure event. Something got slowed down on one particular exchange, causing a flight of liquidity from that exchange.
The quant meltdown was more macroscopic; the flash crash showed how microscopic events, with automated algo trading, could be amplified in unexpected ways.
The flash crash was survived, for the most part. The quant meltdown wasn’t, for most groups. These concerns are not going to go away, and there aren’t obvious fixes for them.
What worries me most is the idea that some simple, only mildly disruptive proposal will immunize us. You can prevent another flash crash, but it’ll be something different next time.
Kearns has one idea which he thinks might help in understanding these risks: a public-domain stock-market simulator.
If I was going to make one high-level recommendation, I’d say that research needs to be done. Who builds sophisticated market simulators? Groups that want to use them for profit. I feel that recent events have shown us that we need some kind of science for simulating the vulnerabilities of the market structure that we have.
You can imagine trying to build an ambitious, reasonably faithful simulator of our current markets. You’d have high-frequency algos, shorter-term stuff, dark pools, multiple exchanges, etc. A giant sandbox. Then you’d ask questions: what happens if, for instance, one exchange slows down?
If you do a simulation and you try some perturbation or stress, and it tells you that a disaster happens, then it’s worth thinking hard about our current markets. But if you don’t find a disaster, that’s not reassurance that some other disaster won’t happen.
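A deliberately tiny version of the kind of perturbation experiment Kearns describes might look like the sketch below. All of the mechanics are invented for illustration (two venues, fixed order sizes, a crude replenishment rule); the only point is the shape of the question: run the sandbox calm, run it with one exchange slowed, and compare worst-case price impact.

```python
def simulate(slow_exchange_b: bool) -> float:
    """Toy two-venue sandbox; returns the worst single-order price impact."""
    depth = {"A": 100.0, "B": 100.0}  # resting liquidity on each venue
    worst_impact = 0.0
    for _ in range(200):
        # Market makers replenish each venue -- unless it has slowed
        # down, in which case they pull their quotes from it instead.
        depth["A"] += 3.0
        if slow_exchange_b:
            depth["B"] = max(depth["B"] - 5.0, 1.0)  # quotes withdrawn
        else:
            depth["B"] += 3.0
        # A steady stream of 5-share sell orders routes to the deeper
        # venue; a thinner book means a bigger price move per order.
        venue = max(depth, key=depth.get)
        worst_impact = max(worst_impact, 5.0 / depth[venue])
        depth[venue] = max(depth[venue] - 5.0, 1.0)
    return worst_impact

calm = simulate(slow_exchange_b=False)
stressed = simulate(slow_exchange_b=True)
print(calm, stressed)  # the stressed run shows a far larger worst-case impact
```

In the stressed run, all the flow concentrates on the remaining venue and eats through its book faster than makers replenish it, which is a cartoon of the liquidity flight Kearns describes. A real simulator would need realistic agents, order types, and venue rules; the asymmetry between the two runs is what the stress test is looking for.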
I’m proposing a quant version of the stress tests that were proposed for banks.
There’s no way, in some sandbox, of testing what all the effects might be. A car company, before it rolls out a product, has a lab environment where it puts the product through tests. And in reality, problems that weren’t tested for still get discovered. Even so, we’d be much better off with a simulator.
We have no such lab for our financial markets. This strikes me as a little off.
People on Wall St think about simulation, but not for catastrophe prediction, just for their own trading purposes.
For some kinds of research, we should be thinking about it as a nation. Almost anything would be better than what we have now, which is nothing, beyond field experiments.
History can only give us insight into the particular catastrophes that have already occurred. We haven’t come near exploring the range of catastrophes which could befall us. There are things we could do to make it better. I applaud looking at the tape, but too much focus on that will give us false reassurance.