What The Oil Spill Can Teach About Financial Catastrophe Probability Forecasts
John Hussman once again provides an insightful argument, glorious in its simplicity, about why the "imminent oblivion" supposedly facing all of Western civilization is nothing but an obfuscating straw man meant to preserve the bondholder and equityholder interests in the insolvent financial firms, better known these days as the TBTF:

"The only reason that bank "failures" in the Depression (and the "failure" of Lehman) were problematic is that the institutions had to be liquidated in a disorganized, piecemeal fashion, because there was no receivership and resolution authority that could cut away the operating entity and sell it as a "whole bank" entity ex-bondholder and -stockholder liabilities. I put "failure" in quotations because there is a tendency to think of such events as something to be avoided even at the cost of public funds. Failure only means that bondholders don't get 100 cents on the dollar. As I've repeatedly emphasized (and don't believe can be emphasized enough), it is essential to invoke the word "restructuring" wherever possible, because it immediately leads us to seek constructive solutions between borrowers and lenders, without public expenditure."

In other words, as anyone who has ever looked at a Plan of Reorganization knows, liquidation does not equal restructuring. As the Lehman bankruptcy showed, there are perfectly salvageable pieces of any investment bank operation that can be promptly integrated into a different business (e.g. the Lehman North American Brokerage business that was acquired by Barclays). This is a topic we have also emphasized from day one: the Administration makes it seem as if a bank failure immediately implies liquidation, and this is simply not the case. This is precisely what FinReg should have focused on. It did not. Yet it will have no choice but to do so once a new and much larger crash occurs. As for the odds of that happening, Hussman has some other brilliant insights into the probabilities of "worst case scenarios" occurring, in an analysis driven by his observations on the Gulf of Mexico oil spill catastrophe.
"Is this something that could happen once in a million times? Is it something that could happen once in a thousand times, or once every 5,000 times? What exactly are the risks involved?"
President Barack Obama, 5/17/10, addressing the Gulf oil spill
As we observe the recent oil spill in the Gulf of Mexico, the recent banking crisis, and the ongoing concerns about sovereign debt in Europe, one of the things that strikes me is that few analysts are much good at assessing probabilities for worst case scenarios.
We typically refer to the probability of some event Y as P(Y), and write the probability of Y, given some information X, as P(Y|X). So for example, the probability of a vehicle being a school bus might be only 1%, but given some extra information, like "the vehicle is yellow and full of children," the estimated "conditional" probability would go up enormously.
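To make the update mechanics concrete, here is a minimal sketch in Python applying Bayes' rule to the school bus example; the 1% base rate is from the text, while the two likelihoods are hypothetical numbers chosen purely for illustration:

# Bayes' rule: P(Y|X) = P(X|Y) * P(Y) / P(X),
# where P(X) = P(X|Y) * P(Y) + P(X|not Y) * P(not Y).
p_bus = 0.01                    # prior: 1% of vehicles are school buses (from the text)
p_evidence_given_bus = 0.90     # hypothetical: most school buses are yellow and full of children
p_evidence_given_other = 0.001  # hypothetical: almost no other vehicles fit that description

p_evidence = (p_evidence_given_bus * p_bus
              + p_evidence_given_other * (1 - p_bus))
p_bus_given_evidence = p_evidence_given_bus * p_bus / p_evidence

print(f"P( bus ) = {p_bus:.1%}")                                             # 1.0%
print(f"P( bus | yellow, full of children ) = {p_bus_given_evidence:.1%}")   # ~90%

Under those assumed likelihoods, a 1% unconditional probability jumps to roughly 90% once the extra information is taken into account.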
With regard to oil spills, however low one might have believed P( we'll have an oil spill ) to be, prior to the recent accident, the "prior" probability estimate should change given that we've now observed one of the worst oil spills in history. Even if the oil industry previously argued that the probability of an oil spill was one in a million, it's hard to hold onto that assessment after the oil spill occurs, unless your faith in the soundness of the technology is entirely unmoved in the face of new information.
See, if P( the technology is flawed | we had an oil spill ) is 80%, and P( we'll have another oil spill | the technology is flawed ) is 80%, then regardless of how extremely unlikely you thought oil spills were before we observed one, or how unlikely you thought it was that the technology was flawed, you would now estimate P( we'll have another oil spill | we had an oil spill ) at no less than 80% x 80% = 64%.
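To spell out the arithmetic behind that lower bound, here is a minimal sketch in Python using the law of total probability, assuming that, given the state of the technology, a second spill is independent of the first:

# Condition on whether the technology is flawed:
# P( another spill | spill ) = P( another spill | flawed ) * P( flawed | spill )
#                            + P( another spill | not flawed ) * P( not flawed | spill )
p_flawed_given_spill = 0.80    # from the text
p_another_given_flawed = 0.80  # from the text

# Dropping the second term, which cannot be negative, leaves a lower bound:
lower_bound = p_another_given_flawed * p_flawed_given_spill
print(f"P( another spill | spill ) >= {lower_bound:.0%}")  # 64%

Whatever probability you attach to a spill from sound technology only adds to this figure, which is why 64% is a floor rather than a point estimate.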
While there are about 3800 oil platforms in the Gulf of Mexico, only about 130 deep water projects have been completed, compared with just 17 a decade ago. So in 10 years, applying a new technology, we've had one major oil spill thus far. Unless there is some a priori reason to assume that the technology is pristine, despite the fact that it has failed spectacularly, the first back-of-the-envelope estimate a statistician would make would be to model deep water oil spills as a "Poisson process." Poisson processes are often used to model things that arrive randomly, like customers in a checkout line, or insurance claims across unrelated policy holders. Given one major oil spill in 10 years, you probably wouldn't be way off the mark using an average "arrival frequency" of 0.10 annually.
From that perspective, a simple Poisson estimate would suggest a 90.5% probability that we will see no additional major oil spills from deep water rigs over the coming year, dropping to a 36.8% chance that we'll see no additional major oil spills from deep water rigs over the coming decade. Moreover, you'd put a 36.8% chance on having exactly one more major spill in the coming decade, an 18.4% chance on having two major spills, a 6.1% chance of having three major spills, and a 1.9% chance of having four or more major spills in the coming decade. This is quite a bit of inference from a small amount of data, but catastrophes contain a great deal of information when the "prior" is that catastrophes are simply not possible.
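As a check on those figures, here is a quick sketch in Python, assuming the Poisson model above with an arrival rate of 0.10 major spills per year:

from math import exp, factorial

def poisson_pmf(k, lam):
    """Probability of exactly k arrivals when the expected count is lam."""
    return exp(-lam) * lam**k / factorial(k)

rate = 0.10  # assumed arrival frequency: one major deep water spill per decade

# One-year horizon: expected count 0.1
print(f"P( 0 spills in 1 year ) = {poisson_pmf(0, rate * 1):.1%}")  # 90.5%

# Ten-year horizon: expected count 1.0
lam10 = rate * 10
for k in range(4):
    print(f"P( {k} spills in 10 years ) = {poisson_pmf(k, lam10):.1%}")  # 36.8%, 36.8%, 18.4%, 6.1%
print(f"P( 4+ spills in 10 years ) = {1 - sum(poisson_pmf(k, lam10) for k in range(4)):.1%}")  # 1.9%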
Given that the worst offshore oil spill in Australia's history was capped only in November 2009, after months of leaking, this sort of estimate does not seem unreasonable. In any event, disasters contain information. It's not reasonable to keep applying previous risk estimates once we've observed a major spill.
Similarly, before the housing crisis, it might have been tempting to shrug off mortgage defaults as relatively isolated events, since the price of housing had generally experienced a long upward trend over time. Indeed, historically, sustained declines in home prices could be shown to be very low-probability events. But as the bubble continued, investors made little attempt to assess the probability of a debt crisis given that home prices had become detached from all reasonable metrics of income and ability to pay. Just as buy-and-hold investors assumed that the long-term return on stocks was constant at about 10%, despite the late 1990s valuation bubble, investors during the housing bubble kept looking at the "unconditional probability" P( credit crisis ) based on decades of normal housing valuations, when they should have recognized that the conditional probability P( credit crisis | extreme housing overvaluation and lax credit standards ) was probably higher. This turned out to be a profound oversight.
But that was evidently not a sufficient lesson. As soon as the surface appearance of the problem was covered up by an expensive and opaque band-aid of government bailouts and suspension of accounting transparency by the FASB, investors went right back to using those unconditional probability estimates. Indeed, until the spike in credit spreads that began a few weeks ago, the amount of additional yield investors demanded for taking credit risk had fallen back to the lows of 2007. We've had a major credit crisis, we have failed to restructure the debt underlying that crisis, and yet investors are approaching the market as if the debt has simply been made whole and we can continue along the former path.
Knowing that the cash flows from mortgage payments cannot possibly be adequate to service the original debt, that delinquencies continue to hit new records, and that there is an enormous overhang of nonperforming debt and unforeclosed homes, it seems utterly naive to assume that the problems we saw over a year ago have been adequately addressed.
What holds for oil also holds for red ink. Disasters contain information. It's not reasonable to keep applying previous risk estimates once we've observed a major spill.