Comments on: Measuring total risk
http://blogs.reuters.com/felix-salmon/2010/02/07/measuring-total-risk/

By: ContiBrown http://blogs.reuters.com/felix-salmon/2010/02/07/measuring-total-risk/comment-page-1/#comment-11840 Mon, 08 Feb 2010 19:03:00 +0000

Conti-Brown here. Felix, excellent critiques. Thanks for engaging the issue. I think, though, that the FTRM survives some, if not all, of the “flaws” you’ve identified.

1. Fair point about the netted notional amount eliminating counterparty risk. I’m not wedded to netting derivatives, because the FTRM isn’t about producing, with any degree of accuracy, the actual dollar amount that an imploding firm would lose — it’s about applying a consistent standard across the entire marketplace that approximates that loss. The goal is to push the loss exposure out to an outer boundary beyond which we couldn’t imagine the loss being bigger. So long as we apply that standard evenly, and there are no obvious risks excluded from the metric, we’ll be on our way to getting the data we need. That’s a long way of saying I think I agree with you — the notional value of the contracts may make more sense than the netted value. I’ll have to look more deeply at those who have argued about the misleading consequences of notional vs. netted values (Singh at the IMF has a few papers on this).
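To make the distinction concrete, here is a minimal sketch of how far netted and notional figures can diverge for the same book; the numbers, the sign convention, and the netting rule are all invented for illustration, not anything from the FTRM proposal itself.

```python
# Illustrative only: contract values and the netting rule are invented
# to show why notional and netted figures can diverge so sharply.

# Positive = owed to the firm, negative = owed by the firm,
# all against a single counterparty ($ millions, hypothetical).
contracts = [+100.0, -95.0, +50.0, -48.0]

netted = abs(sum(contracts))               # what bilateral netting reports
notional = sum(abs(c) for c in contracts)  # the FTRM-style outer boundary

print(f"Netted exposure:   ${netted:.0f}m")    # $7m -- looks tiny
print(f"Notional exposure: ${notional:.0f}m")  # $293m -- what survives a counterparty default
```

The point of the sketch: if the counterparty fails, the netting agreement may not hold, so the outer boundary of the loss is closer to the notional figure than the netted one.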

2. Re: the impossibility of calculating the sale of calls or any other derivative contract that could go awry to an infinite limit. There are two reasons why the FTRM survives this critique. First, we can simply put a coefficient in front of these contracts that assumes away any surprises. For example, we assume that the stock underlying the call grows 1,000% in a day, or 10,000%, and calculate the FTRM for that contract accordingly. Taleb would say, of course, that even these kinds of exaggerated changes could happen, and then we’d be left holding the bag. That may be true. Maybe stocks can grow 10,000% in a single day. But here’s the second reason why this matters less: if stocks are growing 10,000% in a single day, then we’re probably not really in a situation of huge systemic risk. Soaring stocks might cripple the seller of call options, but they are less likely to endanger the entire system. Of course, periods of enormous volatility could produce precisely this kind of result, but I’m skeptical for reasons that I’ll save for another time (related to how quickly new calls would have to be sold, at values that would be crippling, in a market of such volatility). Also, the exaggerated-coefficient calculation on theoretically infinite-exposure contracts would, again, resolve this issue. It doesn’t really matter what the number is, so long as it is applied evenly to all players and all similar contracts.
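As a rough sketch of that exaggerated-coefficient idea, consider the worst-case loss on a book of sold calls under an assumed extreme one-day move; the shock multipliers, strike, and contract count below are all hypothetical, chosen only to show the mechanics.

```python
# Sketch of the "exaggerated coefficient" idea: cap the theoretically
# infinite loss on a sold call by assuming an extreme one-day move in
# the underlying. All contract terms here are hypothetical.

def short_call_worst_loss(spot, strike, n_contracts, shock, shares_per_contract=100):
    """Loss to the call seller if the underlying jumps to spot * shock in a day."""
    shocked_price = spot * shock
    loss_per_share = max(0.0, shocked_price - strike)
    return loss_per_share * shares_per_contract * n_contracts

# 1,000 sold calls struck at $110 on a $100 stock:
for shock in (11.0, 101.0):  # +1,000% and +10,000% one-day moves
    loss = short_call_worst_loss(100.0, 110.0, 1_000, shock)
    print(f"{(shock - 1) * 100:,.0f}% move -> worst-case loss ${loss:,.0f}")
```

Whatever multiplier is chosen, the same one applies to every player and every similar contract, which is the consistency the FTRM is after.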

3. Re: the criticism that off-balance-sheet contingent liabilities are ill-defined. I address the issue of Bear Stearns-like liabilities in the paper (though not by name, until now: all of these critiques are excellent and will be addressed specifically). The point would be to bring all such contingent liabilities into the FTRM, regardless of whether they are hedge funds, insurance contracts, SIVs, or any other liability that could materialize suddenly and require immediate payment. The value of those guarantees would either be delineated by contract, or would simply be the FTRM of the subsidiary.
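That valuation rule (contractual cap where one is delineated, otherwise the subsidiary’s own FTRM) could be sketched as follows; the field names and dollar figures are hypothetical.

```python
# Sketch of the valuation rule described above: a contingent liability
# enters the parent's FTRM at its contractual cap when one is delineated,
# and otherwise at the full FTRM of the guaranteed entity.
# The names and figures are hypothetical.

def contingent_liability_value(contractual_cap, subsidiary_ftrm_dollars):
    """Dollar exposure a guarantee contributes to the parent's FTRM."""
    if contractual_cap is not None:
        return contractual_cap
    return subsidiary_ftrm_dollars  # uncapped guarantee: assume the worst

guarantees = [
    ("capped SIV backstop",   5_000_000_000, 20_000_000_000),
    ("uncapped fund support", None,          12_000_000_000),
]
total = sum(contingent_liability_value(cap, ftrm) for _, cap, ftrm in guarantees)
print(f"Contingent liabilities in FTRM: ${total:,.0f}")  # $17,000,000,000
```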

4. In response to the first comment on the post: the FTRM explicitly does not assume that we can simply tally up the data and then understand/control all of the complexities of financial contracts and institutions. The main intention here is to probe deeply into the long/fat tails of these kinds of risks, and to see what sort of contingent liabilities firms are taking on and how those values change over time. If we mandate disclosure of these kinds of liabilities (and, as I mention in the paper, I’m not particularly wedded to derivatives and off-balance-sheet guarantees alone; I propose them merely as a proxy, and would be delighted to hear of other, more precise proxies), then we can get the data necessary to start teasing out relationships between this kind of risk exposure and bankruptcy, failure, market cap, CDS spreads, and any other relevant variable. This is a proposal, then, for the long haul: it may not prove its worth for decades. But that doesn’t mean the data shouldn’t be disclosed.
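For what that long-haul exercise might look like once disclosures accumulate, here is a toy sketch on entirely synthetic data; the relationship between FTRM and CDS spreads is invented for illustration, not an empirical claim.

```python
import numpy as np

# Synthetic stand-in data: once real FTRM disclosures accumulate, the same
# test could be run against actual CDS spreads, failures, and market caps.
rng = np.random.default_rng(0)
ftrm = rng.normal(11.0, 0.5, 500)                       # disclosed log-exposures
cds = 50 + 80 * (ftrm - 11.0) + rng.normal(0, 20, 500)  # spreads in bps, invented

r = np.corrcoef(ftrm, cds)[0, 1]
print(f"Correlation between FTRM and CDS spreads: {r:.2f}")
```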

One last note about expressing the FTRM in logarithmic form rather than as a dollar amount. The point is not only to critique the current use of VaR as a dollar figure (which is easily decontextualized and misinterpreted), but also to recognize that so many of the assumptions in the FTRM are nearly crazy — how can, for example, all of a firm’s assets go to zero while its liabilities retain their full book value? The dollar figure that such assumptions produce would simply be unwieldy and nonsensical. The log of that value is meant to express it differently. What that log value actually means won’t be immediately clear. The true import of an FTRM of 11.348, for example, will only be discovered over time and experience.
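To make the 11.348 example concrete, here is a worked calculation under the assumption (mine, not necessarily the paper’s) that the FTRM is the base-10 log of the total dollar loss exposure.

```python
import math

# Assumption on my part: reading the comment as
#   FTRM = log10(total dollar loss exposure);
# the paper may define the log differently.
exposure = 2.23e11                           # ~$223 billion worst-case exposure
print(f"FTRM = {math.log10(exposure):.3f}")  # 11.348

# And back again: an FTRM of 11.348 implies roughly 10**11.348 dollars.
print(f"Implied exposure: ${10 ** 11.348:,.0f}")  # ~$223 billion
```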

Apologies for the length of the response. Thanks for engaging the issue. Hopefully others will build on this idea and, eventually, we can get at some of the data that, until now, has either been buried in previous disclosures, or remained completely invisible.

By: williambanazi7 http://blogs.reuters.com/felix-salmon/2010/02/07/measuring-total-risk/comment-page-1/#comment-11821 Mon, 08 Feb 2010 00:11:15 +0000

Any proposal is better than the current situation, which led Paulson to pray for help from the Almighty at the peak of the crisis. What else could he do? He was flying in the dark without instruments.

By: Nick_Gogerty http://blogs.reuters.com/felix-salmon/2010/02/07/measuring-total-risk/comment-page-1/#comment-11812 Sun, 07 Feb 2010 17:05:15 +0000

The FTRM seems very dangerous and follows, yet again, the false premise that with enough data we can understand large, complex entities. Most risks express themselves after the fact in the form of concentrations or obfuscation; Fannie, Freddie, AIG, etc. are primarily concentration examples.

The US, like most developed countries, has laws preventing monopoly situations (“anti-trust” laws) in many goods markets, as monopolies are seen as counterproductive in the long run. These laws were probably debated as being anti-capitalist when initiated, but they have proven valuable over time.

In the same vein, I believe we should have a set of anti-concentration and anti-obfuscation laws for any regulated entity. Any entity that poses systemic risk should be subject to a concentration metric which limits the amount of risk that entity may have in a given market; size as a percentage of market share could determine those limits the same way anti-trust does, as sketched below.

In the same spirit, I am all for limiting banking activities to various arenas. Call it a Humility Law, under which we all agree that we don’t know, and/or can’t properly measure, the interconnected or obfuscated risks in various entities or instruments.
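A minimal sketch of that concentration cap, with an invented 10% threshold and made-up positions, might look like this.

```python
# Sketch of the proposed concentration cap: flag any entity whose share
# of a market exceeds a fixed threshold. The 10% cap, market size, and
# positions are hypothetical numbers, not from the comment.
MARKET_SHARE_CAP = 0.10

market_total = 100.0                       # $bn outstanding in this market (assumed)
positions = {"FirmA": 30.0, "FirmB": 8.0}  # each firm's book, $bn (assumed)

for firm, size in positions.items():
    share = size / market_total
    status = "OVER CAP" if share > MARKET_SHARE_CAP else "ok"
    print(f"{firm}: {share:.0%} of market -> {status}")
```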

Any entity deemed to pose a risk to the system by its failure should not be in the system.
