Measuring total risk

By Felix Salmon
February 7, 2010
Peter Conti-Brown has a new paper proposing the creation of what he calls a Fat Tail Risk Metric, or FTRM. The paper itself is flawed, and the details of how it's constructed would need to be reworked from scratch. But conceptually, I think the FTRM is a good idea. Here's Conti-Brown's abstract:


The paper proposes a legal solution that will create a more robust metric: require mandatory disclosure of a firm’s exposure to contingent liabilities, such as guarantees for the debts of off-balance sheet entities, and all varieties of OTC derivatives contracts. Such disclosures—akin to publicly traded corporations’ filing of 10-Ks with the SEC—will allow regulators and researchers to approximate an apocalyptic, black-out, no-bankruptcy-protection and no-bailout scenario of a firm’s implosion; force firms to maintain daily record-keeping on such obligations, a task which has proved difficult in the past; and, most importantly, will open up a crucial subset of data that has, until now, been opaque or completely invisible.

Conti-Brown’s method for coming up with the FTRM involves adding up a firm’s total netted derivatives exposure; the size of its off-balance-sheet vehicles; and its liabilities. That gives a total-risk measure; the FTRM itself is the log of that figure.
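As described, the construction reduces to a one-line formula. Here is a minimal sketch, assuming a base-10 log (the post doesn't specify the base) and purely hypothetical dollar figures:

```python
import math

def ftrm(netted_derivatives: float,
         off_balance_sheet: float,
         liabilities: float) -> float:
    """Sketch of the Fat Tail Risk Metric as described in the post:
    the log of the sum of a firm's netted derivatives exposure, the
    size of its off-balance-sheet vehicles, and its liabilities.
    Base-10 is an assumption; the post does not fix a base."""
    total_risk = netted_derivatives + off_balance_sheet + liabilities
    return math.log10(total_risk)

# Hypothetical figures, in dollars, for illustration only:
print(ftrm(5e10, 2e10, 1.5e11))  # log10 of $220bn, roughly 11.34
```

Note that the log compresses the scale drastically: doubling every input moves the FTRM by only about 0.3.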

There are lots of problems here. For one thing, netting derivatives exposure effectively eliminates an enormous amount of counterparty risk. For another, it’s impossible to calculate: if I write a call option on a stock, there’s no limit to how much my contingent liability might be, because there’s no limit to how far that stock can rise. And off-balance-sheet vehicles are just one of a potentially infinite line of entities which remove a company’s legal liability, but where the firm can still end up paying out a lot of money in practice. Think, for instance, of the money which Bear Stearns threw at its failed hedge funds, or the money which banks used to make whole the people who invested in auction-rate securities. Those things don’t look like bank liabilities, or even contingent liabilities, until it’s far too late.

But put all that to one side: one thing which doesn’t currently exist, and which would be very useful indeed, is some kind of measure of the total amount of risk in the financial system. A lot of people had a conception, pre-crisis, of some kind of law of the conservation of risk: that tools like mortgage-backed securities simply moved risk from banks’ balance sheets to investment accounts, and therefore, at the margin, actually dispersed risk and made the system safer. What was missed, however, was the fact that total risk was increasing fast, especially as house prices rose and the equity in those houses was converted into financial assets through the magic of second mortgages, cash-out refinancings, and home-equity lines of credit.

Some types of risk are more dangerous than others, of course: if there’s a stock-market bubble, then it’s easy to see that the total value of the stock market, which is the total amount that can be lost in the stock market, has risen a lot. But stock-market investments are a little bit like houses without mortgages: where there’s very little leverage, there’s also relatively little in the way of systemic risk. It’s rare to suffer great harm from the value of your house falling if you don’t have a mortgage.

Still, stock-market bubbles can cause harm, and it’s worth including equities as part of the total risk in the system, along with bonds and loans. That’s one metric which macroprudential regulators should certainly keep an eye on; Conti-Brown’s idea is then basically to try to disaggregate that risk on a firm-by-firm basis, to see which companies have the most risk and to see how concentrated the risk is in a small number of large institutions.

It won’t be easy to do that — indeed, it will be impossible to do it with much accuracy. But even an inaccurate measurement will be helpful, especially if it becomes a time series and people can see how it’s been changing over time. It’s good to know how much risk is out there — and it’s better to know that financial institutions themselves are keeping an eye on that number, and trying to measure it as part of their responsibilities to their regulator.



FTRM seems very dangerous and follows, yet again, the false premise that with enough data we can understand large, complex entities. Most risks express themselves after the fact in the form of concentrations or obfuscation. Fannie, Freddie, AIG, etc. are primarily concentration examples.

The US, like most developed countries, has laws preventing monopoly situations (“anti-trust laws”) for many goods markets, as monopolies are seen as counterproductive in the long run. These were probably debated as being anti-capitalist when initiated, but have proven valuable over the long run.

In the same vein, I believe we should have a set of anti-concentration and anti-obfuscation laws for any regulated entity: any entity that poses systemic risk should be subject to a concentration metric which limits the amount of risk it may have in a market. Size as a percentage of market share could determine limits the same way anti-trust does.

In the same thinking, I am all for limiting banking activities to various arenas. Call it a Humility law, where we all agree we don’t know, or can’t properly measure, interconnected or obfuscated risks in various entities or instruments.

Any entity deemed to pose a risk to the system by its failure, should not be in the system.

Posted by Nick_Gogerty | Report as abusive

Any proposal is better than the current situation, which led Paulson to pray for help from the almighty one at the peak of the crisis. What else could he do? He was flying in the dark without instruments.

Posted by williambanazi7 | Report as abusive

Conti-Brown here. Felix, excellent critiques. Thanks for engaging the issue. I think, though, that the FTRM survives some, if not all, of the “flaws” you’ve identified.

1. Fair point about the netted notional amount eliminating counter-party risks. I’m not wedded to netting derivatives, because the FTRM isn’t about producing with any degree of accuracy the actual dollar amount that an imploding firm would lose — it’s about applying a consistent standard across the entire marketplace that approximates that loss. The goal is to force the loss exposure into the outer boundary of a place where we couldn’t imagine the loss to be bigger. So long as we apply that standard evenly, and there are no obvious risks not included in the metric, then we’ll be on our way to getting the data we need. That’s a long way of saying I think I agree with you — the notional value of the contracts may make more sense than the netted value. I’ll have to look more deeply at those who have argued about the misleading consequences of notional v. netted values (Singh at the IMF has a few papers on this).
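The netted-versus-notional distinction being debated here is easy to see in miniature. The positions below are hypothetical, chosen only to show how netting can make a large gross exposure look small:

```python
def netted_exposure(positions):
    """Net offsetting positions against each other, as the original
    FTRM construction does."""
    return abs(sum(positions))

def gross_notional(positions):
    """Sum absolute notional values, ignoring offsets -- closer to
    what is at stake if the counterparty itself fails."""
    return sum(abs(p) for p in positions)

# Hypothetical positions with a single counterparty, in dollars:
# a $1bn long nearly offset by a $990m short.
positions = [1_000_000_000, -990_000_000]

print(netted_exposure(positions))  # 10,000,000 -- looks tiny
print(gross_notional(positions))   # 1,990,000,000 -- the exposure
                                   # if netting fails in a default
```

The point is not that either number is "correct", only that the two standards can differ by orders of magnitude, which is why the choice must be applied uniformly across firms.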

2. Re: the impossibility of calculating the sale of calls or any other derivative contract that could go awry to an infinite limit. There are two reasons why FTRM survives this critique. First, we can simply put a coefficient in front of these contracts that will assume away any surprises. For example, we assume that the stock underlying the call grows 1,000% in a day, or 10,000%, and calculate the FTRM for that contract accordingly. Taleb would say, of course, that even these kinds of exaggerated changes could happen, and there we’d be left holding the bag. That may be true. Maybe stocks can grow in a single day by 10,000%. But here’s the second reason why this matters less: if stocks are growing 10,000% in a single day, then we’re probably not really in a situation of huge systemic risk. Soaring stocks might cripple the seller of call options, but are less likely to endanger the entire system. Of course, periods of enormous volatility could produce precisely this kind of result, but I’m skeptical for reasons that I’ll save for another time (related to how quickly new calls would have to be sold, at values that would be crippling, in a market of such volatility). Also, the exaggerated coefficient calculation on theoretically infinite exposure contracts would, again, resolve this issue. It doesn’t really matter what the number is, so long as it is applied evenly to all players and all similar contracts.
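The coefficient idea can be sketched directly. Everything below is hypothetical (the function name, the strike and spot figures); it only illustrates capping a theoretically unbounded loss with an arbitrary-but-uniform growth assumption:

```python
def capped_call_exposure(strike: float, spot: float,
                         contracts: int,
                         assumed_growth: float = 100.0) -> float:
    """Bound the theoretically unlimited loss on a written call by
    assuming the underlying rises by `assumed_growth` (100.0 means
    +10,000%), the kind of arbitrary coefficient the comment
    proposes. The specific number matters less than applying it
    evenly to all similar contracts."""
    assumed_price = spot * (1.0 + assumed_growth)
    return max(assumed_price - strike, 0.0) * contracts

# One written call, strike $50, underlying at $40:
print(capped_call_exposure(strike=50.0, spot=40.0, contracts=1))
# 40 * 101 - 50 = 3990.0 dollars of assumed worst-case exposure
```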

3. Re: the criticism that off-balance sheet contingent liabilities are ill-defined. I address the issue of Bear Stearns-like liabilities in the paper (though not by name, until now: all of these critiques are excellent and will be addressed specifically). The point would be to bring all such contingent liabilities into the FTRM, regardless of whether they are hedge funds, insurance contracts, SIVs, or any other liability that could occur suddenly and require immediate payment. The value of those guarantees would either be delineated by contract, or would simply be the FTRM of the subsidiary.

4. In response to the first comment to the post, the FTRM explicitly does not assume that we can simply tally up the data and then understand/control all of the complexities of financial contracts and institutions. The main intention here is to probe deeply into the long/fat tails of these kinds of risks, and see what sort of contingent liabilities firms are taking on, and how those values change over time. If we mandate disclosure of these kinds of liabilities (and, as I mention in the paper, I’m not particularly wedded to derivatives and off-balance sheet guarantees alone; I propose them merely as a proxy, and would be delighted to hear of other, more precise proxies), then we can get the data necessary to start teasing out relationships between this kind of risk exposure and bankruptcy, failure, market cap, CDS spreads, and any other relevant variable. This is a proposal, then, for the long-haul: it may not prove its worth for decades. But that doesn’t mean it shouldn’t be disclosed.

One last note about expressing the FTRM in logarithmic form, rather than as a dollar amount. The point here is partly a critique of the current use of VaR as a dollar figure (which is easily decontextualized and misinterpreted), but also reflects the fact that so many of the assumptions in FTRM are near crazy — how can, for example, all a firm’s assets go to zero while its liabilities retain their full book value? The dollar figure that such assumptions produce would simply be unwieldy and nonsensical. The log of that value is meant to express it differently. What that log value actually means won’t be immediately clear. The true import of an FTRM of 11.348, for example, will only be discovered over time and experience.
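Since the log form deliberately obscures the underlying magnitude, here is a quick way to see what a value like 11.348 would imply, assuming a base-10 log (the base is an assumption; the comment doesn't specify it):

```python
# Recovering the dollar scale behind a log-form FTRM of 11.348,
# assuming base 10; the value is used purely for illustration.
dollar_exposure = 10 ** 11.348
print(dollar_exposure)  # roughly 2.23e11, i.e. about $223 billion

# Each additional unit on the log scale is a tenfold jump in
# exposure, so cross-firm and over-time comparisons carry more
# information than any single raw value:
print(10 ** 12.348 / 10 ** 11.348)  # approximately 10.0
```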

Apologies for the length of the response. Thanks for engaging the issue. Hopefully others will build on this idea and, eventually, we can get at some of the data that, until now, has either been buried in previous disclosures, or remained completely invisible.

Posted by ContiBrown | Report as abusive