[O]ne of my favourite comedians, Eddie Izzard, has a rebuttal that I find most compelling. He points out that “Guns don’t kill people; people kill people, but so do monkeys if you give them guns.” This is akin to my view of financial models. Give a monkey a value at risk (VaR) model or the capital asset pricing model (CAPM) and you’ve got a potential financial disaster on your hands.
James Montier, May 6, 2012
In Q1, [JPM] VaR increased to 81 from 75 in Q4 2011. It is unclear how much of this VaR can be attributed to Bruno Iksil. Since he is a whale we can only assume all his numbers are under the surface.
Zero Hedge, April 13, 2012
See, you're what happens, when you're trying to talk to car people at like half an hour ago. I dismissed these people as idiots years ago. And now that I'm trying to finally engage them... they have no idea what I'm talking about... It's really a simple thing, but either you're kinda tweaked, and you get this joke, which I'm assuming that 90 percent of you does, but you're just coming off the rails, or B, you don't and you're even more confused now. I'm just trying to be the voice of reason, guiding you to the light... I've dismissed this whole game as idiocy the second I started talking to car people, that only occurred to me this morning. Which is either super enigmatic to you, or else you're starting to feel the Band-Aid come off... I dismissed Bank of America as lunatics 8 months ago and I seemed to understand the world then but now the lunatics are in the game, I don't know where they're playing. I've tried for the last 3 hours to communicate with bank of America on TV because apparently either I've gotten off the bus in crazy town, but I am much happier just trying to get them to speak to me in 5 words or less...
Jeff Macke, May 2009
Financial people usually start off with the best intentions: they go to school, learn a bunch of stuff, then they apply it in the real world, and make good money for a while. Then the money is not good enough, so they apply more and more leverage, encouraged by their peers who tell them "go for it, it's all good" until one day, BOOM, you get a beached whale. Everything blows up, because the "equations" one followed when determining trade risk parameters and general return end up being total bullshit. Why? Because no matter how hard the world's economists may try, finance is ultimately an unpredictable, unquantifiable, and very much irrational endeavor which, especially over the short term, follows nothing quite as much as swings in human behavior. Apply massive leverage to this and you end up with a catastrophe. One such equation of course is VaR, which as JP Morgan just pointed out a few days ago, is not only absolutely worthless, but is by far the biggest component of the financial Garbage In, Garbage Out conveyor. The problems only get compounded when one has no visibility into what assets are being VaR'ed and one has to take the word of other flawed individuals. Like Jamie Dimon in this case, whose statement that the #FailWhale is a "tempest in a teapot" will soon haunt him in Congressional hearings. Which then raises the question: if all the fundamental tenets of finance are flawed, where does that leave us, and will we all flame out in a moment of (paradoxical) psychotic breakdown-cum-clarity like Jeff Macke's back in 2009, which for all its hilarity at least tried to warn us: "I'm just trying to be the voice of reason, guiding you to the light." Unfortunately, as so often happens, the voice of reason just happens to be the most insane one. That it happened to come from a person who otherwise comments in a very institutionalized context only adds to the irony.
Luckily, today, in quite a rational presentation, we have GMO's James Montier releasing the full remarks from his speech delivered at the CFA Conference on May 6, 2012 (before the JPM announcement), which predicted how, once again, basic human error would be attributed to mathematical and statistical "error."
from The Flaws Of Finance by James Montier of GMO
As a child, watching my parents write postcards whilst we were all on holiday was an instructive experience. My mother would meticulously write out the card, scattering a few interesting holiday tidbits within the text. My father, whose sum total of postcards sent was invariably just one (to his office), opted for a considerably more efficient approach. His method is shown at the left in Exhibit 1.
I think we can construct a similar diagram to explain the Global Financial Crisis (GFC), represented at the right in Exhibit 1. In essence, the GFC seems to have sprung from the interaction of the following four “bads”: bad models, bad behaviour, bad policies (which is really just bad behaviour on the part of central banks and regulators), and bad incentives.
In an effort to rethink finance, I want to examine each of these factors in turn, beginning with bad models.
Bad Models, or, Why We Need a Hippocratic Oath in Finance
The National Rifle Association is well-known for its slogan “Guns don’t kill people; people kill people.” This sentiment has a long history and echoes the words of Seneca the Younger that “A sword never kills anybody; it is a tool in the killer’s hand.” I have often heard fans of financial modelling use a similar line of defence.
However, one of my favourite comedians, Eddie Izzard, has a rebuttal that I find most compelling. He points out that “Guns don’t kill people; people kill people, but so do monkeys if you give them guns.” This is akin to my view of financial models. Give a monkey a value at risk (VaR) model or the capital asset pricing model (CAPM) and you’ve got a potential financial disaster on your hands.
The intelligent supporters of models are always quick to point out that financial models are, of course, an abstraction from reality. Just as physicists can study worlds without frictions, financial modelers should not be attacked for trying to reduce the complexity of the “real world” into tractable forms.
Finance is often said to suffer from Physics Envy. This is generally held to mean that we in finance would love to write out complex equations and models as do those working in the field of Physics. There are certainly a large number of market participants who would love this outcome.
I believe, though, that there is much we could learn from Physics. For instance, you don’t find physicists betting that a feather and a brick will hit the ground at the same time in the real world. In other words, they are acutely aware of the limitations imposed by their assumptions. In contrast, all too often people seem ready to bet the ranch on the flimsiest of financial models.
Someone intelligent (if only I could remember who!) once opined that rather than breaking the sciences into the usual categories of “Hard” and “Soft,” they should be split into “Easy” and “Difficult.” The “Hard” sciences are generally “Easy” thanks to the ability to perform repeated controlled experiments. In contrast, the “Soft” sciences are “Difficult” because they involve trying to understand human behaviour.
Put another way, the atoms of the feather and brick don’t try to outsmart and exploit the laws of physics. Yet financial models often fail for exactly this reason. All financial model underpinnings and assumptions should be rigorously reviewed to find their weakest links or the elements they deliberately ignore, as these are the most likely source of a model’s failure.
Let’s take the CAPM as an example of the way in which behaviour is assumed to be exogenous in finance, and how this creates problems. The CAPM can be constructed from four assumptions:
1. Transaction costs and other illiquidities can be ignored.
2. All investors hold mean-variance-efficient portfolios.
3. All investors hold the same correct beliefs about the means, variances, and covariances of securities.
4. Every investor can lend all she or he has, or can borrow all she or he wants at the risk-free rate.
In case you object to the idea of a risk-free rate in assumption 4, this can be replaced by the following:
4a. It is possible to take a long or short position of any size in any risky asset.
Essentially this model says that the only “risk” is volatility (assumptions 2 and 3), that illiquidity can be ignored (assumption 1), and that leverage is freely available and can be deployed without any consequences (assumption 4 or 4a). Those following this model will seek to leverage up illiquid assets. We have seen this movie before! Anyone remember the saga of Long-Term Capital Management?
Their business model (largely informed by more complex versions of the CAPM) involved trades such as buying off-the-run government bonds (e.g., 9½-year bond), and shorting the on-the-run equivalent bonds (e.g., 10-year benchmark bond), and then leveraging up. Basically, they were applying leverage to an illiquid asset. The dénouement for the LTCM movie is displayed in Exhibit 2.
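The single number those assumptions produce can be made concrete. The sketch below (a minimal illustration with entirely made-up monthly figures, not any fund's actual data) shows how the CAPM boils all of an asset's "risk" down to beta, its covariance with the market, ignoring exactly the things the assumptions waved away: liquidity, leverage constraints, and tails:

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative monthly market and asset returns (made-up parameters).
market = rng.normal(0.008, 0.04, 120)
asset = 0.002 + 1.3 * market + rng.normal(0.0, 0.02, 120)  # true beta 1.3

# Beta: covariance with the market, scaled by market variance.
beta = np.cov(asset, market)[0, 1] / market.var(ddof=1)

# CAPM expected return: risk-free rate plus beta times the market premium.
RF = 0.002  # assumed monthly risk-free rate
capm_expected = RF + beta * (market.mean() - RF)

print(f"Estimated beta: {beta:.2f}")
print(f"CAPM expected monthly return: {capm_expected:.4f}")
# Everything the model knows about 'risk' is in that one beta number;
# illiquidity, leverage limits, and tail events appear nowhere.
```

Note the design point: once risk is one covariance-based number and leverage is assumed costless, leveraging up an illiquid, low-beta position looks like a free lunch, which is the LTCM trade in miniature.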
The atoms (market participants) of financial models are not inert. They either ignore the weaknesses of the model or actively seek out and exploit the model’s weak links.
In the GFC it wasn’t CAPM, but rather models such as Value-at-Risk (VaR) that created problems. It is noteworthy that VaR has been a villain in financial crises before – it was, for example, LTCM’s chosen form of risk management.
Its problems have been known for a long time. Indeed all of the “bads” identified in this note had been discussed by many, including me, ahead of the crisis, so this is far from an exercise in hindsight bias.
Using VaR is like buying a car with an airbag that is guaranteed to fail just when you need it, or relying upon body armour that you know keeps out 95% of bullets! VaR cuts off the very part of the distribution of returns that we should be worried about: the tails.
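What "cutting off the tails" means can be seen in a few lines. The sketch below (illustrative only: fat-tailed returns drawn from a Student-t with arbitrary parameters) compares a 95% historical VaR figure with the average loss on the days VaR is actually breached, the part of the distribution VaR says nothing about:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated daily P&L with fat tails: mostly small moves, rare large
# losses (Student-t, df=3; the scaling is arbitrary).
returns = rng.standard_t(df=3, size=100_000) * 0.01

# 95% historical VaR: the loss threshold exceeded on only 5% of days.
var_95 = -np.percentile(returns, 5)

# Expected shortfall: the average loss on the days VaR is breached --
# precisely the tail that the VaR number itself ignores.
tail_losses = -returns[returns <= -var_95]
expected_shortfall = tail_losses.mean()

print(f"95% VaR:            {var_95:.4f}")
print(f"Expected shortfall: {expected_shortfall:.4f}")
# With fat tails, the average day beyond the VaR threshold is far worse
# than the threshold itself suggests.
```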
Exhibit 3 shows the feedback loops embedded within a typical VaR approach. Most VaR calculations use trailing volatility and correlation inputs. When these decrease, the calculated VaR falls, allowing the users to increase their leverage as they now have “less risk.” Of course, the reverse is also true. When volatilities and correlations rise, VaR will also increase, and the users will be forced to deleverage. If everyone is using VaR, the potential for system-wide problems is clear, as the model itself acts as a transmission mechanism between institutions (a classic example of Minsky’s adage that stability begets instability).
The problems inherent in VaR are further amplified by the use of short runs of data to estimate the inputs. This creates an even more pro-cyclical element, adding to the problems of VaR. If the immediate past is a period of tranquillity, then the future is held to be the same. If a risky asset, let’s say a CDO, happens to have been less volatile than U.S. treasuries over the last couple of years, the model says (with a straight face) that the CDO is less risky than treasuries!
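This feedback loop is simple enough to simulate. The sketch below (all volatilities, window lengths, and risk budgets are illustrative assumptions) estimates a parametric VaR from a short trailing volatility window and shows how the leverage the model permits balloons in a calm regime and collapses the moment volatility spikes:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two regimes: a calm stretch followed by a volatile one (made-up vols).
calm = rng.normal(0.0, 0.005, 500)      # ~0.5% daily vol
stressed = rng.normal(0.0, 0.03, 100)   # ~3% daily vol
returns = np.concatenate([calm, stressed])

WINDOW = 60          # short trailing window used to estimate volatility
Z_95 = 1.645         # one-sided 95% normal quantile
RISK_BUDGET = 0.02   # assumed maximum VaR the desk may run

def trailing_var(returns, t, window=WINDOW):
    """Parametric 95% VaR from trailing-window volatility."""
    vol = returns[t - window:t].std()
    return Z_95 * vol

# Leverage the model permits: risk budget divided by per-unit VaR.
var_calm = trailing_var(returns, 500)    # end of the calm period
var_stress = trailing_var(returns, 600)  # after the volatility spike
print(f"Permitted leverage in calm markets: {RISK_BUDGET / var_calm:.1f}x")
print(f"Permitted leverage after the spike: {RISK_BUDGET / var_stress:.1f}x")
# Low trailing vol -> low VaR -> more leverage, right before the regime
# turns; then VaR jumps and forces everyone to deleverage at once.
```

The short window is what makes the loop vicious: a placid recent past mechanically certifies a placid future, which is exactly the CDO-less-risky-than-treasuries absurdity described above.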
VaR is extremely vulnerable to peso problems. Peso problems are really just situations where the proverbial poop [Ed: No, James did not use this phrase.] has yet to hit the fan. For instance, if you are running a currency carry trade and buying currencies with high interest rates and shorting those with low interest rates, your returns may look great as long as no devaluations occur. However, the high interest rates may simply be compensation for an expected devaluation. When this occurs your returns are often annihilated (see Exhibit 4). Yet in the run-up to the devaluation, a VaR approach will say you have essentially no risk!
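A toy simulation makes the peso problem vivid. In the sketch below (entirely made-up figures: the carry, the timing, and the 25% devaluation are all assumptions for illustration), a carry trade earns steady daily income for years, and a historical VaR estimated on the eve of the devaluation sees essentially no risk:

```python
import numpy as np

rng = np.random.default_rng(2)

# Stylised carry trade: small steady daily carry with low noise, plus a
# single 25% devaluation placed on day 1200 (all figures illustrative).
N_DAYS = 1500
carry = rng.normal(0.0004, 0.001, N_DAYS)   # ~10% annual carry, low vol
deval_day = 1200                            # the "peso" event
returns = carry.copy()
returns[deval_day] -= 0.25

# Historical 95% VaR estimated the day before the devaluation, using
# only the placid history observed so far.
history = returns[:deval_day]
var_95 = -np.percentile(history, 5)

print(f"95% VaR on the eve of devaluation:  {var_95:.4%}")
print(f"Actual loss on the devaluation day: {-returns[deval_day]:.4%}")
# The model certifies near-zero risk right up until the event that
# wipes out years of accumulated carry in a single day.
```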
Blinded by “Science”
All of this begs the question as to why on earth people keep using VaR (and other clearly busted financial models). Partially, it is a function of the fact that regulators adopted it (more on this below), and in part it is because VaR has a veneer of mathematical and expert support.
These two play to our human weaknesses. Neuroscientists have discovered an alarming aspect of our interaction with “experts.” Using MRI technology, they busily recorded brain activity while study participants made simulated financial decisions.1 During each round, the participants had to choose between receiving a risk-free payment and trying their chances in a lottery. In some rounds they were presented with advice from an “expert economist” (surely an oxymoron if ever one existed) as to which alternative was preferable.
The results of this experiment are worrying. Expert advice attenuated activity in areas of the brain that correlate with valuation and probability weighting. In other words, the expert’s advice made the brain switch off some processes required for financial decision making. The players’ behaviour simply echoed the expert’s advice. Unfortunately, the expert advice given in the experiment was suboptimal, meaning that participants could have done better had they weighed the options themselves. An example of “bad advice driving out good.”
Other behavioural biases conspire to blind us to the flaws of bad models. For instance, a behaviour known as anchoring can make even irrelevant numbers powerful inputs into our decision making. To demonstrate this, consider the exercise I have given to over 600 professional fund managers:
Before you answer the next question, please write down the last four digits of your telephone number. Now, estimate the number of physicians that work in London.
Oddly, those whose last four digits were 7000 or higher think there are around 8,000 doctors working in London. Those whose last four digits were 3000 or lower think there are around 4,000 doctors working in London. I don’t have a clue as to how many doctors there are in London, but I’m sure that my guess should be unrelated to my telephone number.
Others have shown that judges were influenced by irrelevant anchors when setting jail sentences, even when they were fully aware of the irrelevance of the input. In one study, judges were asked to roll dice to determine the sentencing requests from the prosecution. The pair of dice they used was loaded to give either a low number (1,2) or a high number (3,6). Having rolled the dice, the judges were told to sum the scores with the result representing the prosecution’s request for jail time. Having rolled the dice themselves, the judges could clearly see that the input was totally irrelevant. However, the group who received the total score of 3 issued an average sentence of 5.3 months; those who received a total score of 9 issued an average sentence of 7.8 months! A 50% increase in jail time based purely on the roll of dice. Simply giving people irrelevant inputs starts to bias behaviour. Think about this in the context of VaR output!
Indeed, relying on the output of a computer has been found to create problems in general. Skitka et al. were the first to document the problem (although it has been confirmed in a wide range of contexts). They asked participants to follow a set of tasks designed to simulate the types of monitoring and tracking tasks required in flying commercial aircraft. This involved a computer simulation of “flying” a plane, complete with dials and computers. During the course of the simulation, some problems were highlighted by the central computer, others by the dials alone. The participants were much more likely to respond to the central computer warning, and they often missed the warnings from the dials if the central computer didn’t flash as well. This was despite the fact that the participants were told the dials and gauges were 100% accurate and the computer was only “highly but not perfectly reliable.” We humans love to defer to authority even when that authority is a computer!
Outside of the research lab, UBS, in their mea culpa to shareholders, confessed that the overly narrow focus on VaR had contributed to their problems:
The historical time series used to drive VaR…[were] based on five years of data…sourced from a period of relatively positive growth…hindsight suggests…did not attribute adequate weight to the significant growth in the U.S. housing market…and Subprime… no attempt to capture more meaningful attributes…such as defaults, loan to value ratios, or other similar attributes…Market Risk Committee relied on VaR…even though delinquency rates were increasing and origination standards were falling in the U.S. mortgage market.
In general, anything requiring advanced mathematics should be treated with extreme suspicion when it comes to financial applications. As is so often the case, Ben Graham got there way ahead of the rest of us:
Mathematics is ordinarily considered as producing precise and dependable results; but in the stock market the more elaborate and abstruse the mathematics the more uncertain and speculative are the conclusions we draw therefrom…whenever calculus is brought in, or higher algebra, you could take it as a warning signal that the operator was trying to substitute theory for experience, and usually also to give speculation the deceptive guise of investment.
As I was going to press with this paper, one of my colleagues sent me a priceless article written in 1970 by John Siegfried. The point the paper makes is as powerful today as when it was originally composed: complexity to impress is all too common.
For instance, this is how finance sees the world:
This is how my 3-year old daughter would see the same equation:
1 + 1 = 2
I have long argued that what we need in finance is a version of the Hippocratic Oath. It warmed the cockles of my heart when I read the Modeler’s Oath, written by Emanuel Derman and Paul Wilmott.
The Modeler’s Hippocratic Oath
I will remember that I didn’t make the world, and it doesn’t satisfy my equations.
Though I will use models to boldly estimate value, I will not be overly impressed by mathematics.
I will never sacrifice reality for elegance without explaining why I have done so.
Nor will I give the people who use my model false comfort about its accuracy. Instead I will make explicit its assumptions and oversights.
I understand that my work may have enormous effects on society and the economy, many of them beyond my comprehension.
Now if only it could be included in the CFA code of ethics.
Continue reading here