Michael Lewis has a big profile of Barack Obama in the latest Vanity Fair, and Obama tells Lewis something very interesting:
“Nothing comes to my desk that is perfectly solvable,” Obama tells Lewis. “Otherwise, someone else would have solved it. So you wind up dealing with probabilities. Any given decision you make you’ll wind up with a 30 to 40 percent chance that it isn’t going to work. You have to own that and feel comfortable with the way you made the decision. You can’t be paralyzed by the fact that it might not work out.”
This is very much in line with the m.o. of Larry Summers, and of Bob Rubin before him. Here’s how Steve Rattner explains it, in his book about the auto bailouts:
Larry pressed us to attach probabilities to our recommendations and countered with odds of his own. Like Bob Rubin, with whom the concept is most closely associated, Larry is an enthusiast for “probabilistic decision making,” a method for weighing uncertainties.
This all sounds very scientific — but the problem is, it isn’t. Certainly, if you’re making a decision like whether or not to surge in Afghanistan, you cannot know for sure what the consequences of that decision are going to be. But trying to express things in probabilistic terms is not much of an improvement.
For one thing, there’s no reason to believe that people are better at subjectively assigning probabilities to outcomes than they are at simply predicting what’s going to happen. People in general, and experts like Summers in particular, tend to suffer from overconfidence bias, and to massively overestimate the probability of things they think are going to happen. And if you get too deep into the weeds of probabilistic decision making, you end up multiplying probabilities, and thereby massively compounding your estimation errors.
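To make the compounding concrete, here’s a toy sketch in Python. The three-step plan and all the numbers are invented for illustration; nothing here comes from Summers or Rattner.

```python
# Invented numbers: a plan that requires three independent steps to
# succeed, each with a true 60% chance, each estimated at 70% by an
# overconfident forecaster.
true_probs = [0.6, 0.6, 0.6]
estimated_probs = [0.7, 0.7, 0.7]

def joint_probability(probs):
    """Multiply independent step probabilities together."""
    result = 1.0
    for p in probs:
        result *= p
    return result

actual = joint_probability(true_probs)         # roughly 0.216
estimate = joint_probability(estimated_probs)  # roughly 0.343

# Ten points of overconfidence per step balloons into a final estimate
# nearly 60% higher than the truth once the steps are multiplied.
print(f"true: {actual:.3f}, estimated: {estimate:.3f}")
```

A modest-looking error at each stage, in other words, becomes an enormous error by the time you’ve chained three stages together.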
On top of that, probabilistic decision making tends to live in a binary world: what is the chance of success, and what is the chance of failure? But that kind of analysis can conceal much more than it reveals, and nearly always results in people doing things which have a very high probability of success, no matter how devastating failure would be. This is what I call the Rubin Trade, and it’s very popular among merger arbitrageurs, the business in which Rubin got his start. Let’s say that company X has agreed to buy company Y for $100 per share, but that if the merger falls through, Y’s stock will fall to $50. Would you buy stock in Y for $95? People who do that, especially if they leverage themselves in doing so, will normally make a lot of money, and collect large bonuses. But once in a while the trade will blow up. In many ways, the entire financial crisis was the result of a huge global bet that very safe securities were very safe.
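The arithmetic of that trade is worth spelling out. This minimal sketch uses only the numbers from the example above ($95 purchase price, $100 if the deal closes, $50 if it breaks); the 95% success probability is my own assumption for illustration.

```python
# Numbers from the merger-arb example: buy Y at $95; the deal
# closing pays $100 per share, the deal breaking leaves $50.
buy_price = 95.0
price_if_closes = 100.0
price_if_breaks = 50.0

def expected_profit(p_close):
    """Expected per-share profit, given the probability the deal closes."""
    return (p_close * price_if_closes
            + (1 - p_close) * price_if_breaks
            - buy_price)

# Assume a 95% chance the deal closes: the trade looks good on average
# (an expected profit of about +$2.50 per share)...
print(round(expected_profit(0.95), 2))

# ...but the payoffs are wildly asymmetric: win $5 nineteen times out
# of twenty, lose $45 the twentieth time.
gain_on_success = price_if_closes - buy_price   # +5
loss_on_failure = price_if_breaks - buy_price   # -45
```

The expected value is positive, which is why the trade is so popular; the $45 downside is nine times the $5 upside, which is why it occasionally blows up.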
If you’re doing probabilistic analysis, then, you really need to be looking at probability distributions, rather than simply drawing a line between success and failure and trying to work out the ratio of the areas on the left and right sides of that line. Specifically, you need to be looking at tail risk, and asking yourself just how bad things could get if they do go wrong. And the flipside of that is that sometimes it’s a good idea to bet on things which are unlikely to happen, just because the payoff should those things happen is so big. That’s basically what a large part of options trading is all about: trying to buy things which are cheap and which will probably expire worthless, but which pay off enormously if they don’t.
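To put numbers on that flipside, here’s a sketch contrasting the two kinds of bet. All the figures are invented for illustration, and deliberately chosen so that the two trades have the same expected value.

```python
# A Rubin-style trade (invented numbers): 95% chance of winning $5,
# 5% chance of losing $45.
rubin_ev = 0.95 * 5 + 0.05 * (-45)

# A long-shot, option-style trade: pay $1 for a 10% chance at $35,
# a bet that will probably expire worthless.
option_cost = 1.0
option_ev = 0.10 * 35.0 - option_cost

# Same expected value, about +2.5 in both cases.
print(round(rubin_ev, 2), round(option_ev, 2))

# But the tails differ enormously: the long shot can never lose more
# than its $1 cost, while the "safe" trade can lose $45 at a stroke.
worst_case_rubin = -45.0
worst_case_option = -option_cost
```

A binary success-or-failure count treats these two bets identically; only looking at the whole distribution, and especially its left tail, reveals how different they are.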
Summers understands all this. He’s one of the more forceful proponents of the idea that it can be wise to err deliberately on the side of doing too much, because the risks of doing too little are so great. But it’s hard to build that kind of analysis into probabilistic decision making, unless you’re the person framing the questions. And that’s the real problem with this kind of framework: you can pretty much always get any answer you want, just by being careful about the questions you ask, and about how you use the answers you get. Let’s go back to Rattner:
At one point, [Summers] confessed that as we gave our answers, he was discounting our probabilities based on what he thought we would say. For example, knowing that Ron was in favor of saving Chrysler, Larry lowered the probability Ron assigned to the success of the alliance with Fiat. The opposite for Harry. Plainly, Larry was loving this debate…
Larry called for a show of hands. His question was precise: “If you assume that the probability is 50 percent or greater that Chrysler would survive for five years, would you save it?”
Diana was unhappy with the phrasing, because she thought Larry was stacking the deck — forcing those who believed Chrysler’s chances were actually slim to assume a higher probability. She had suspected that he wanted to save Chrysler and now was sure of it. While she recognized that under Larry’s formulation she should be voting to save Chrysler, she voted against it anyway, as a kind of protest. Austan felt sandbagged too.
It’s pretty obvious, here, that the reason Larry was loving the debate was precisely because he was controlling it. If you get to unilaterally change other people’s probabilities, and you get to frame all the questions, you can basically give yourself dictator-like powers while ostensibly running a democratic, technocratic, and probabilistic process.
And this is why I worry about the way that Obama has internalized this way of thinking: when he thinks he’s making a choice, in fact he’s always going to end up choosing the option that his advisers have managed to present as having the highest probability of success. And after making those decisions, he won’t lose sleep over their possible downsides or unintended consequences.
For instance, consider the decision to concentrate on healthcare rather than climate change as the Obama administration’s first big legislative push. In the wake of that decision, the chances of getting any kind of climate-change legislation passed became effectively zero — and the negative consequences for the well-being of the planet as a whole could easily end up dwarfing the upside from everything else the Obama team does put together.
Meanwhile, no one really worried, when the healthcare legislation was being negotiated, that the whole thing could end up being struck down by the Supreme Court. It wasn’t, in the end — but it turned out to be very close. When calculating healthcare probabilities, did anybody in the White House properly account for SCOTUS risk? Almost certainly not. And as a result the whole calculation was missing a crucial element.
For Obama, if a decision doesn’t work out, he’s OK with that, since he reckons there’s a statistical certainty that a good third of his decisions will fail to work out. And he takes solace in the idea that so long as the process of arriving at the decision was a good one, by which he means that it was properly technocratic and probabilistic, then he did the best that he could have done.
But that kind of decision-making framework leaves very little room for ideals, for actually putting into practice the kind of vision you have for America. By making decisions on a case-by-case basis, you can end up missing out on building something bigger and much more coherent. In 2008, America voted for a man who was truly excellent at staring into the distance, a man who looked at the big picture and at a centuries-long legacy. Instead, hampered by the financial crisis and by a dysfunctional Congress, they got a man who spends his days weighing success probabilities: a tactician, rather than a strategist.
I’m sure that when Obama accepts his party’s nomination for president in Charlotte on Thursday, there will be lots of moving and rousing rhetoric. But realistically, we cannot hope that a second-term Obama will have much if any ability to make his high-flying dreams a reality. Especially not if he continues to make decisions based on which actions have a 65% probability of success.