How monoculture is like triple-A CDOs

By Felix Salmon
March 3, 2010

Tom Laskawy, of Beyond Green, writes asking for a bit more detail about this bit of my locavorism article:

If you only grow one crop, the downside of losing it all to an outbreak is catastrophe. In rural Iowa it might mean financial ruin; in Niger, it could mean starvation.

Big agriculture companies like DuPont and Archer Daniels Midland (ADM), of course, have an answer to this problem: genetically engineered crops that are resistant to disease. But that answer is the agricultural equivalent of creating triple-A-rated mortgage bonds, fabricated precisely to prevent the problem of credit risk. It doesn’t make the problem go away: It just makes the problem rarer and much more dangerous when it does occur because no one is — or even can be — prepared for such a high-impact, low-probability event.

The point here is that a disease-resistant crop is a lot like a triple-A-rated structured bond: they’re both artificially engineered to be as safe as possible. That would be a wonderfully good thing if no one knew that they were so safe. But if you’re aware of a safety improvement, that often just has the effect of increasing the amount of risk you take: people drive faster when they’re wearing seatbelts, and they take on a lot more leverage when they’re buying AAA-rated bonds.

The agricultural equivalent is the move to industrial-scale monoculture, “safe” in the knowledge that lots of clever engineers in the US have made the crop into the agribusiness version of a bankruptcy-remote special-purpose entity.

But the problem is that bankruptcy-remote doesn’t mean that bankruptcy is impossible: just ask the people running Citigroup’s AAA-rated SIVs. If and when the unlikely event eventually happens, the amount of devastation caused is directly proportional to the degree to which people thought they were protected. When something like that goes wrong, it goes very wrong indeed: artificial safety improvements have the effect of turning outcomes binary.

Essentially, you’re trading a large number of small problems for a small probability that at some point you’re going to have an absolutely enormous problem.

And on a long enough time line, even a low-probability event is bound to happen sooner or later. Which is something that the likes of Bob Rubin would do well to remember.
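
To put a rough, purely illustrative number on that intuition (the 2%-a-year figure below is my own assumption, not anything from the article): if a catastrophe has probability p of striking in any given year, the chance of seeing it at least once over n years is 1 - (1 - p)^n, which creeps toward certainty as the horizon lengthens. A quick sketch in Python:

    # Back-of-the-envelope sketch, assuming a hypothetical 2% annual
    # probability of a catastrophic failure (illustrative only).
    def chance_of_at_least_one(p_per_year, years):
        """Probability that the 'unlikely' event happens at least once over a span of years."""
        return 1 - (1 - p_per_year) ** years

    for years in (10, 25, 50, 100):
        print(f"{years:>3} years: {chance_of_at_least_one(0.02, years):.0%}")
    # roughly 18%, 40%, 64% and 87% respectively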
