Quants merge with humans

By Felix Salmon
September 2, 2010
Eleanor Laise has found an interesting trend in the world of quant funds: a lot of them are looking much more human, these days.

" data-share-img="" data-share="twitter,facebook,linkedin,reddit,google" data-share-count="true">

Eleanor Laise has found an interesting trend in the world of quant funds: a lot of them are looking much more human, these days.

Quants are seeking to win back investors not just with their characteristic number crunching but also with a bit of soul searching. Many of the funds’ managers are seeking to make their models a little more like people, by making them more responsive to changing circumstances. That can mean revisiting computer models more often, tweaking their components, or incorporating measures of macroeconomic risk rather than just stock-specific information.

Quant managers need to understand “that financial markets are better understood through the lenses of a biologist rather than a physicist,” says Andrew Lo, a finance professor at the Massachusetts Institute of Technology who also manages quant funds.

My feeling is that what we’re seeing here is the beginning of a kind of Hegelian synthesis in the fund-management world. The thesis was that a talented active manager, with experience and insight, could outperform the market. The antithesis was that active managers, being human, had a distressing tendency to do exactly the wrong thing at exactly the wrong time. Computers, by contrast, have in reality the discipline that human traders have only in theory: they can stick to any given strategy through thick and thin, just as the backtesters intended.

The synthesis, in this view, is that it’s not always smart to stick to a failing strategy, although humans can definitely benefit from the sheer computational power embedded in quant models. So the humans use those models, to a greater or lesser extent, to inform their investment decisions.

At that point, the two sides converge: quant funds are simply the ones that minimize human meddling, while fund managers who would never consider themselves quants still use sophisticated models to help them pick stocks and strategies. In reality, they’re not so far apart.

I do think that Lo is right, and that it’s never particularly smart to simply stick to a single strategy in an attempt to outperform the market. (In fact, I’m not a big fan of even trying to outperform the market in the first place, but that’s a separate question.) On the other hand, given that human tendency to do the wrong thing at the wrong time, I fear that human-inflected quant funds are just going to end up abandoning strategies just when they would have started to work.

What I’m not seeing, anywhere, is a dynamic quant strategy which automatically makes significant changes to its investment style depending on market conditions. Quant funds tend to be perfected by humans and then released into the wild; any further changes, at that point, are also performed by humans, who don’t trust the computer model to optimize itself on the fly. Sure, computers can buy and sell stocks as conditions change. But they don’t change the rules governing which stocks to buy and sell: those are fixed unless and until humans change them.

That’s probably just as well: I’m not sure the world really needs investment strategies being set without any fund manager having a clue what they actually are. But at the same time, a purely computer-generated strategy might provide some interesting diversification from the madness of crowds. I doubt we’ll see anybody admit to using one any time soon. But for all I know it’s already happening at some hedge fund somewhere. Maybe it’s even the secret of RenTech’s famous and mysterious black box.


I’m pretty skeptical about quant strategies (well, any strategy involving alpha, for that matter). All the same, I’d be interested to know if there have been any attempts to use genetic algorithms to come up with a successful dynamic trading strategy. If you tweaked the market conditions in the simulation to include frequent tail events, you might be able to come up with a particularly resilient strategy that humans wouldn’t think of.
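In sketch form, a genetic search over trading rules against a simulated market with frequent tail events might look like this. Everything here — the rule encoding (a momentum lookback plus an exposure cap), the fitness measure, and the simulated return process — is an illustrative assumption, not anyone’s actual strategy:

```python
import random

random.seed(0)

def simulate_returns(n=500, tail_prob=0.05):
    """Simulated daily returns with frequent fat-tail events (hypothetical market)."""
    out = []
    for _ in range(n):
        if random.random() < tail_prob:
            out.append(random.gauss(0, 0.10))      # tail event: 10x normal volatility
        else:
            out.append(random.gauss(0.0003, 0.01))  # ordinary day
    return out

def fitness(rule, returns):
    """Score a (lookback, exposure cap) rule by the cumulative return of
    the long/flat strategy it implies."""
    lookback, cap = rule
    wealth = 1.0
    for t in range(lookback, len(returns)):
        momentum = sum(returns[t - lookback:t])
        exposure = cap if momentum > 0 else 0.0     # long when recent momentum is positive
        wealth *= 1 + exposure * returns[t]
    return wealth

def evolve(returns, pop_size=20, generations=10):
    """Keep the fittest half each generation, refill with mutated copies."""
    pop = [(random.randint(2, 60), random.uniform(0.1, 1.0)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda r: fitness(r, returns), reverse=True)
        survivors = pop[: pop_size // 2]
        children = []
        for _ in range(pop_size - len(survivors)):
            lb, cap = random.choice(survivors)
            lb = max(2, lb + random.randint(-3, 3))                  # mutate lookback
            cap = min(1.0, max(0.05, cap + random.gauss(0, 0.05)))   # mutate exposure
            children.append((lb, cap))
        pop = survivors + children
    return max(pop, key=lambda r: fitness(r, returns))

best = evolve(simulate_returns())
print("evolved rule (lookback, exposure cap):", best)
```

Because the fitness environment is deliberately salted with fat-tail draws, the search is biased toward rules that survive crashes rather than ones that merely maximize calm-market returns — which is the resilience GingerYellow is after.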

Posted by GingerYellow

But… looking at the forest rather than the trees shows that a ‘strategy’ of making your strategy more complex ends up making the overall ensemble more random. And therefore, your risk, at best, won’t change.

The basic problem is that you can’t eliminate risk with -any- strategy, even a random one.

Posted by MattF

Either Robyn Dawes or Ian Ayres wrote about a study done on experts who had access to predictions of a model but could second-guess it. They outperformed experts without access to the model, but did even worse than the naive model by itself. Humans are a source of error.

Posted by TGGP

It is quite common to link the confidence in a strategy to how well the strategy has done in the past or how well the views have correlated with future market returns. This is also done in the framework from Grinold and Kahn.

I think as more people use and learn the Black-Litterman model or Meucci’s Entropy Pooling, you’ll start to see more of this. These models incorporate confidence directly: if you’re more confident in a strategy, the inputs you use in optimization are closer to your view than to the prior distribution.

One thing I’m planning on working on is a parameter-less approach to setting the confidence levels. Basically, the goal is to automatically adjust confidence in views based on how well they would have done recently (based on returns, alpha, Sharpe, or some other statistic, possibly weighted to give the recent past more importance). The benefit of this approach is that if a view or strategy would not have produced returns recently, then you lose confidence and the posterior distribution becomes more like the prior.
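A minimal sketch of that idea — with the function names, the hit-rate statistic, and the exponential-decay weighting all assumed for illustration. Recent performance is turned into a confidence number in [0, 1], which then pulls the posterior between the prior and the view, Black-Litterman style:

```python
def recency_weighted_confidence(view_pnl, halflife=20):
    """Confidence in a view from its recent simulated P&L, with an
    exponential-decay weight so newer observations count for more.
    (Hypothetical scheme: the P&L-to-[0,1] mapping is a hit rate.)"""
    decay = 0.5 ** (1.0 / halflife)
    weight, score, total = 1.0, 0.0, 0.0
    for pnl in reversed(view_pnl):                     # most recent first
        score += weight * (1.0 if pnl > 0 else 0.0)    # did the view pay off that day?
        total += weight
        weight *= decay
    return score / total                               # confidence in [0, 1]

def blended_expected_return(prior, view, confidence):
    """Posterior expected return: pulled toward the view when confidence is
    high, back toward the prior when the view hasn't been working."""
    return confidence * view + (1 - confidence) * prior

# A view that has stopped working recently loses confidence fast:
working = [0.01] * 40
stale = [0.01] * 30 + [-0.02] * 10   # last 10 days negative
c1 = recency_weighted_confidence(working)
c2 = recency_weighted_confidence(stale)
print(c1, c2)  # c2 < c1
```

The “parameter-less” claim is only approximate here — the halflife is still a knob — but the key property holds: a view whose recent track record deteriorates is automatically shrunk toward the prior without any manual intervention.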

Posted by jmh530

Seems to me that jmh530’s approach will be susceptible to black swans – it will succeed until it doesn’t.

Posted by walt9316

That’s a valid point, walt9316. For instance, a view might work well over a period and then suddenly produce very bad results. That is very difficult to model. What it indicates, however, is that your views were wrong and should be corrected going forward. If you perform the optimization daily and weight more recent observations more strongly, then the model would quickly reduce the confidence in a view that is no longer working. As an alternative, you could blend the confidence from above with your own estimate. In practice, if it is clear something really isn’t going to work as well as you thought, you can go in and adjust it.

In general, I don’t see black swans as an issue for portfolio construction. Mean-Conditional VaR optimization can take fat tails into account when constructing the portfolio. The problem is if your prior or your views don’t take fat tails into account, not the optimization procedure itself. Fail to take fat tails into account at your own risk.
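For reference, the Conditional VaR statistic itself (also called expected shortfall) is just the average loss in the worst tail of outcomes beyond the VaR cutoff. A toy computation, with a made-up return sample:

```python
def cvar(returns, alpha=0.95):
    """Conditional VaR (expected shortfall): the average loss in the worst
    (1 - alpha) fraction of outcomes. Returns a positive number for a loss."""
    losses = sorted(-r for r in returns)        # losses, largest last
    k = max(1, int(round(len(losses) * (1 - alpha))))
    tail = losses[-k:]                          # the worst k outcomes
    return sum(tail) / k

sample = [0.01, 0.02, -0.01, 0.005, -0.15, 0.012, -0.02, 0.008, 0.0, -0.05]
print(cvar(sample, alpha=0.90))  # average of the single worst loss: 0.15
```

Unlike plain VaR, which only reports the cutoff itself, CVaR averages everything beyond it — which is why a mean-CVaR optimizer is sensitive to how fat the tail is, provided the input distribution actually has one.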

Posted by jmh530

walt9316, doesn’t everything succeed until it doesn’t? Bit like saying Taleb is a fan of failing until he doesn’t. All jmh530 needs to do is manage his downside strongly.

Posted by Danny_Black

Garbage in, garbage out; it’s no more complex than that. Given an erroneous quant model, one will get erroneous results. Unless we develop AI that can change on the fly and learn from itself, we are just trading computer-model inputs devised by humans for human judgment. One is inflexible; the other is behavior-dependent, driven by human instinct, and often irrational. At least the human manager can change strategy — the quants can lose unimaginable amounts of money chasing the inputs of their programmers.

Posted by TaxLawyer