Why we won’t build a stock-market simulator

By Felix Salmon
December 30, 2011

A year ago, I spoke to the University of Pennsylvania’s Michael Kearns about whether we might be able to do something to help prevent a much worse reprise of the May 2010 flash crash. The short answer is that no, we can’t — or won’t, in any case. But the longer answer is that there is something we could do, if we just had some will and a lot of money:

You can imagine trying to build an ambitious, reasonably faithful simulator of our current markets. You’d have high-frequency algos, shorter-term stuff, dark pools, multiple exchanges, etc. A giant sandbox.

If you do a simulation and you try some perturbation or stress, and it tells you that a disaster happens, then it’s worth thinking hard about our current markets. But if you don’t find a disaster, that’s not reassurance that some other disaster won’t happen.

I’m proposing a quant version of the stress tests that were proposed for banks.

A car company, before it rolls out a product, has a lab environment where it puts the product through tests. And in reality, problems which weren’t tested for get discovered. We’d be much better off with a simulator.

We have no such lab for our financial markets. This strikes me as a little off.

People on Wall St think about simulation, but not for catastrophe prediction, just for their own trading purposes.

Kearns’s idea didn’t get anything like the traction it needs, and it’s not going to happen. But now a new paper from the UK’s Government Office for Science, written by Dave Cliff and Linda Northrop, lays out the case for building such a simulator over the course of 47 very interesting pages.

First of all, they write that the whole global economy “dodged a bullet” on May 6, 2010: if the Flash Crash had just happened a couple of hours later — and there’s no reason it couldn’t have done so — then the US markets might well have closed before the Dow had a chance to recover. The US sell-off would have triggered big market swoons in Asia and Europe, with very nasty consequences for, among many other things, Greek debt dynamics.

More generally, they write,

The global financial markets have become high-consequence socio-technical systems of systems, and with that comes the risk of problems occurring that are simply not anticipated until they occur, by which time it is typically too late, and in which minor crises can escalate to become major catastrophes at timescales too fast for humans to be able to deal with them.

Cliff and Northrop say that we should do exactly as Kearns suggested:

The proposed strategy is simple enough to state: build a predictive computer simulation of the global financial markets, as a national-scale or multinational-scale resource for assessing systemic risk. Use this simulation to explore the “operational envelope” of the current state of the markets, as a hypothesis generator, searching for scenarios and failure modes such as those witnessed in the Flash Crash, identifying the potential risks before they become reality. Such a simulator could also be used to address issues of regulation and certification. Doing this well will not be easy and will certainly not be cheap, but the significant expense involved can be a help to the project rather than a hindrance.

There are many reasons why this is not going to happen, starting with the fact that no one, right now, can afford to do it. Cliff and Northrop rather hopefully say that if this market simulator is expensive enough, then lots of Wall Street players will pay up to have access to its results — but in reality they’re much more likely to do everything they can to stop it from being built in the first place. Because if it is built, the certain consequence will be more regulation:

It may also be worth exploring the use of advanced simulation facilities to allow regulatory bodies to act as “certification authorities”, running new trading algorithms in the system-simulator to assess their likely impact on overall systemic behaviour before allowing the owner/developer of the algorithm to run it “live” in the real-world markets. Certification by regulatory authorities is routine in certain industries, such as nuclear power or aeronautical engineering. We currently have certification processes for aircraft in an attempt to prevent air-crashes, and for automobiles in an attempt to ensure that road-safety standards and air-pollution constraints are met, but we have no trading-technology certification processes aimed at preventing financial crashes. In the future, this may come to seem curious.
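
To make the certification idea concrete, here is a deliberately toy sketch of what a single “certification run” might look like: a candidate algorithm is dropped into a simulated market alongside a couple of stylized agents, hit with one large sell order, and judged on the damage that follows. Everything in it (the agent classes, the shock, the 20% drawdown threshold) is invented for illustration; none of it comes from the Cliff and Northrop paper.

```python
# A toy "certification run": purely illustrative, not from the paper.
import random

class MarketMaker:
    """Leans against deviations from fair value, but steps away under stress."""
    def desired_return(self, price, last_return):
        if abs(last_return) > 0.03:              # withdraws liquidity in violent markets
            return 0.0
        return 0.05 * (100.0 - price) / price    # nudges the price back toward 100

class MomentumTrader:
    """Ignores small moves, then chases big ones, amplifying them."""
    def desired_return(self, price, last_return):
        return 0.6 * last_return if abs(last_return) > 0.01 else 0.0

def certification_run(candidate, steps=500, shock_at=250, shock=-0.05, seed=0):
    """Drop the candidate into the sandbox, apply one liquidity shock,
    and return the worst peak-to-trough drawdown that results."""
    random.seed(seed)
    price, peak, max_dd, last_ret = 100.0, 100.0, 0.0, 0.0
    incumbents = [MarketMaker(), MomentumTrader()]
    for t in range(steps):
        ret = sum(a.desired_return(price, last_ret) for a in incumbents)
        ret += candidate(price, last_ret)        # the algorithm under review
        ret += random.gauss(0.0, 0.002)          # background order flow
        if t == shock_at:
            ret += shock                         # the stress scenario: one big sell
        ret = max(min(ret, 0.5), -0.5)           # crude cap on single-step moves
        price *= 1.0 + ret
        last_ret = ret
        peak = max(peak, price)
        max_dd = max(max_dd, (peak - price) / peak)
    return max_dd

# The hypothetical submission: a naive momentum chaser.
def candidate_algo(price, last_return):
    return 1.2 * last_return if abs(last_return) > 0.01 else 0.0

drawdown = certification_run(candidate_algo)
print(f"max drawdown under stress: {drawdown:.1%}")
print("certified" if drawdown < 0.20 else "rejected: destabilizing under stress")
```

Even in a toy like this, the logic is the one the paper is after: the algorithm isn’t judged in isolation, it’s judged on how it behaves in company and under stress.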

Even if regulators don’t have to sign off on trading strategies on an algo-by-algo basis, there’s really no point in building a hugely expensive and complex market simulator if its simulations don’t end up constraining market participants somehow. And I can assure you that no amount of “you’ll all be safer” pleading with banks will persuade them that more regulation and constraint is ever going to be welcome.

Gillian Tett, too, is skeptical that anybody’s going to go ahead with a project of this magnitude:

Most regulators still prefer to forget May 6 rather than admit in public that they are struggling to understand how modern markets really work. And that, sadly, is unlikely to change, unless there is another flash crash.

And there’s a bigger reason, too, why it’s not going to happen: for all its ambition, a financial-market simulator wouldn’t actually address any of the causes of the financial crisis we just had, and probably wouldn’t address any of the causes of the next one, either. As Cliff and Northrop write,

The concerns expressed here about modern computer-based trading in the global financial markets are really just a detailed instance of a more general story: it seems likely, or at least plausible, that major advanced economies are becoming increasingly reliant on large-scale complex IT systems (LSCITS): the complexity of these LSCITS is increasing rapidly; their socio-economic criticality is also increasing rapidly; our ability to manage them, and to predict their failures before it is too late, may not be keeping up. That is, we may be becoming critically dependent on LSCITS that we simply do not understand and hence are simply not capable of managing.

We could try to spend hundreds of millions of dollars simulating and examining the fine-grained architecture of securities trading and high-frequency algorithms; and even if we were incredibly successful in that endeavor, there would still be hundreds if not thousands of other large-scale complex IT systems which can and probably will fail catastrophically at some point. We can’t simulate them all. So why pick on the stock market?

Comments

Shouldn’t we focus on making the most important exchanges more robust to erroneous or malicious behavior, rather than the considerably more complex task of trying to control the behavior of thousands of participants? Waddell & Reed may have sparked the flash crash by selling too many S&P futures too quickly, but the CME could have enforced a market pause for liquidity to refresh (as it did in the futures relatively quickly), and the equities exchanges didn’t have to allow trades at ridiculous prices that were busted anyway.

Posted by keikobad

I am astonished anyone would still think that modeling the nest of complex adaptive systems which are “securities trading and high-frequency algorithms” is even possible. It is precisely this type of arrogance that caused the “experts” to have false confidence about their abilities and to let financial and other risks rise to unacceptable levels. The lesson to be learned from the Great Recession is that Knightian uncertainty and ignorance make modeling what you describe impossible, and that the best approach is to build a margin of safety into systems. For a good analysis of this, see Zeckhauser’s “Investing in the Unknown and Unknowable” (http://www.hks.harvard.edu/fs/rzeckhau/unknown_unknowable_PUP.pdf), or most anything by Nassim Taleb.

Would academic propeller heads like to spend hundreds of millions of dollars creating such models? Sure, that means academic grants for everyone! For someone arguing for just that, see http://www.newscientist.com/article/mg21228425.400-to-navigate-economic-storms-we-need-better-forecasting.html?page=2. The importance of the work at Santa Fe and elsewhere about complex adaptive systems is that there are some things that are simply not predictable via models.

The reality is that macroeconomics at any level (including “just” securities trading and high-frequency algorithms) involves predicting the behavior of Mr. Market who is NOT rational and highly emotional.

There is less than zero evidence that any macroeconomist can outperform any stock market except for a short period due to chance.

Anyone who examines the last 30 years and does not realize that the important “take away” is that some things are just not predictable is not paying attention. What is needed is greater humility and systems that are more robust to failure. Derman’s new book is good on this topic, but even he is overconfident about what can successfully be modeled.

Macroeconomics is not a predictive science. Pretending that it is predictive no matter how much money is spent on research is dangerous. The conditions that caused any given flash crash are simply not replicable. Can we look for clues to help us build margins of safety into systems? Yes.

It does not take an academic model to grasp the fact that there is a substantial probability (I would say a certainty) that HFT adds zero value and withdraws liquidity at the worst possible time. It’s just not worth taking that systemic risk even if someone feels the probability is small, since the magnitude of harm could be massive.

Research should be directed at making systems more robust to failure by creating margins of safety given the risk, uncertainty and ignorance Zeckhauser describes in the linked paper.

Posted by tgriffin

Econophysicists have been toying with agent-based models for years, but so far they have (I think) very little to show for it.

The individual decision-making process is unobservable; you can fit any number of assumptions to the same stylized facts, and in the end the simplest models remain the best.

Posted by Th.M

“What is needed is greater humility and systems that are more robust to failure.”

Well put!

Posted by TFF

What makes you think we haven’t built a stock market simulator, assuming it’s technically feasible? If it’s only a question of money, well, there are people for whom hundreds of millions of dollars is not very much, and who would have every incentive to invest in the ultimate way to game the system. See Hubertus Bigend in William Gibson’s Zero History, where a stock market simulator is the McGuffin.

And a followup: If a stock market simulator had been built, would we necessarily know about it? Why would you think that?

Posted by lambertstrether

Um, the stock market is a simulator. It is a representation of economic activities conducted in the real world: companies make stuff, do their business and their results and prospects are expressed in stock values, option values, derivative values, etc. The simulator is complex; it can be reduced to stock values or stock plus option values or whatever but it extends to encompass huge numbers of variables across the world. These include external inputs ranging from weather to irrationality.

If you built a simulator of the stock market, it would be the stock market. If it is not the stock market, then it will be inaccurate. That is a short summary of the long-tail risk problem: you discover the inaccuracy when it hurts.

Posted by jomiku

jomiku, I think Felix is talking about the market *mechanism* itself, which could be seen as including HFT and other forms of market making, buy/sell queues, futures markets, and so forth. You might then throw random events at the black box and see how it responds?

The Flash Crash resulted in some stocks changing hands at $0.01 per share. That wasn’t a representation of economic activities and/or opinions, that was a breakdown in the market mechanism itself. A “simulation failure” if you like?

Posted by TFF

I agree with tgriffin that it would be almost impossible to build a meaningful simulator.

As with many modelling approaches, you would essentially get out what you put in. You could easily design one seemingly realistic simulator that produces an event like the flash crash, while another seemingly realistic one doesn’t. As always, you would need to make strong behavioural assumptions about the way agents react to market conditions. The results would rest crucially on these assumptions, yet it may only take a few large market participants behaving differently from the model to produce very different outcomes.

Posted by Oenologist

So the run of the comments here is claiming that 1. markets cannot be proved safe by simulation because 2. the behavior you would like to simulate is radically un-modellable: market agents do not obey any set of rules, or, if they do, the rules are too changeable in ways that cannot be predicted.

The first part of this claim is irrelevant: the original proposition was not that if you could not prove there is a problem, then there is no problem. It was that if you can prove there is a problem, then there is a problem. Good luck disputing that.

The second part of the claim is not as valid as it appears. The reason is that, to first order, all modern market agents are computer programs. Their behavior under any given set of conditions is completely specified. One could imagine a regulatory regime in which it was illegal to trade with an algorithm without submitting the program itself to a central regulatory agency, along with one’s capital and leverage guidelines. These latter need not be deterministic parameters, merely Bayesian priors subject to simulation in a sandbox that includes all programmatic agents, plus some emulation, however poor, of the less important merely human agents.

The purpose of such a setup would be to look for deadly interactions between agents; although the conditional behavior of individual agents is completely specified, the systemic interactions are hard to predict. I could easily imagine that such a setup would sometimes uncover genuine problems before they are encountered in the real world, which is all that the original claim is saying.
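
To be concrete, here is a deliberately trivial sketch of that kind of interaction search; the three registered algorithms, the stress scenario and the 25% drawdown threshold are all invented for illustration:

```python
# Toy sketch of a regulator's "deadly interaction" scan: every registered
# algorithm is a deterministic function of market state, so the sandbox can
# replay one stress scenario against every pair and flag the combinations
# that blow up. Everything here is invented for illustration.
from itertools import combinations

def run_pair(algo_a, algo_b, shock=-0.05, steps=200):
    """Replay one shock with both algorithms active; return the max drawdown."""
    price, peak, max_dd, last_ret = 100.0, 100.0, 0.0, 0.0
    for t in range(steps):
        ret = algo_a(price, last_ret) + algo_b(price, last_ret)
        if t == 50:
            ret += shock                       # the stress scenario: one big sell
        ret = max(min(ret, 0.5), -0.5)         # crude cap on single-step moves
        price *= 1.0 + ret
        last_ret = ret
        peak = max(peak, price)
        max_dd = max(max_dd, (peak - price) / peak)
    return max_dd

# Hypothetical submissions, each tame enough on its own.
registry = {
    "mean_reverter": lambda price, ret: 0.05 * (100.0 - price) / price,
    "momentum_a":    lambda price, ret: 0.6 * ret,
    "momentum_b":    lambda price, ret: 0.7 * ret,
}

for (name_a, algo_a), (name_b, algo_b) in combinations(registry.items(), 2):
    dd = run_pair(algo_a, algo_b)
    flag = "UNSTABLE" if dd > 0.25 else "ok"
    print(f"{name_a} + {name_b}: max drawdown {dd:.0%} [{flag}]")
```

Each algorithm looks tame enough on its own; it is the pair of momentum chasers together that turns a 5% shock into a rout, and that is exactly the kind of thing a regulator’s sandbox should surface before the pair meets in a live market.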

The problem would be in attribution analysis: what is the regulator supposed to do when a problem is discovered? If there is a problem interaction between programs A and B, is B supposed to be rejected simply because A was there first? If not, how is blame to be divided between the two?

All of which is a long way of saying that I think Felix is right, except that I don’t think that the simulation need be as expensive as he does.

Posted by Greycap

TFF, the idea is to abstract the existing simulator. There is no underlying market mechanism other than what exists in the current simulation that is the stock market. The problem remains: abstracting a simulation introduces differences, because simplification through abstraction cannot fully replicate a complex system. The idea is seductive but ultimately nonsense. For example, you may get it right, but that context of “right” will only last as long as it does, and you won’t know the context has changed until you’re wrong. This again is a version of the long-tail problem: you get it right enough for a while, and that increases your confidence in the abstraction you’re using, so you trust it more, until context shifts more radically and you lose everything.

Posted by jomiku

Then jomiku, if any attempts to make the market mechanism more stable are fraught with moral hazard, should we go the opposite direction and warn market participants that they are ultimately responsible for their orders?

Suppose the trades at $0.01 had been allowed to stand? Would traders learn to be more careful?

Posted by TFF

I can’t seem to get the writings of Douglas Adams out of my head for some reason.

Posted by ottorock

Yes, ottorock. This all raises the question — what do we do if we create the market simulator at great expense and it produces the result: 42?

Ask the dolphins for help? They will have left, though they’ll be grateful for the fish.

Posted by Christofurio
