Much has been said on these pages and elsewhere about the dangers embedded in quant groupthink: as fewer and fewer factors keep performing, more and more speculators (note: not investors) line up on the same side of the trade, pushing up offers, only to experience a regime change based on some heretofore unexpected exogenous event which renders existing signal translation models useless and turns all the former buyers into sellers. Whether that would result in a bidless market remains to be seen; if October 1987 is any indication, all signs point to yes. Yet in a sign that at least the bigger banks may be anticipating just such an outcome, The Economist has disclosed that JP Morgan, in addition to its general loan loss provisions, has now taken a $3 billion reserve against quant error (yes, quants can be wrong... and for a lot of money at that). Just how many other investment banks demonstrate this kind of prudence? Without any specific regulatory guidelines for quant capital provisioning, we have no idea. While the bulge brackets may have joined JPM in a comparable form of "insurance," it is a certainty that the thousands of newly cropped up quant trading firms not only have no such reserves, but that, should a dramatic market reversal transpire, wholesale asset dumping will have to take place to cover losses. And this assumes no leverage. Is the market prepared for such a contingency?
The ever-increasing role of mathematics in financial modeling is no secret. The Economist writes:
By 2007 finance was attracting a quarter of all graduates from the California Institute of Technology. These eggheads are now in the dock, along with their probabilistic
models. In America a congressional panel is investigating the models’
role in the crash. Wired, a publication that can hardly be
accused of technophobia, has described default-probability models as
“the formula that killed Wall Street”. Long-standing critics of
risk-modelling, such as Nassim Nicholas Taleb, author of “The Black
Swan”, and Paul Wilmott, a mathematician turned financial educator, are
now hailed as seers. Models “increased risk exposure instead of
limiting it”, says Mr Taleb. “They can be worse than nothing, the
equivalent of a dangerous operation on a patient who would stand a
better chance if left untreated.”
The Economist focuses initially on modelling as it pertains to complex structured finance, a venture which imploded spectacularly after the housing bubble burst.
The models went particularly awry when clusters of mortgage-backed
securities were further packaged into collateralised debt obligations
(CDOs). In traditional products such as corporate debt, rating agencies
employ basic credit analysis and judgment. CDOs were so complex that
they had to be assessed using specially designed models, which had
various faults. Each CDO is a unique mix of assets, but the assumptions
about future defaults and mortgage rates were not closely tailored to
that mix, nor did they factor in the tendency of assets to move
together in a crisis.
The problem was exacerbated by the credit raters’ incentive to
accommodate the issuers who paid them. Most financial firms happily
relied on the models, even though the expected return on AAA-rated
tranches was suspiciously high for such apparently safe securities. At
some banks, risk managers who questioned the rating agencies’ models
were given short shrift. Moody’s and Standard & Poor’s were assumed
to know best. For people paid according to that year’s revenue, this
was understandable. “A lifetime of wealth was only one model away,”
sneers an American regulator.
Moreover, heavy use of models may have changed the markets they were
supposed to map, thus undermining the validity of their own
predictions, says Donald MacKenzie, an economic sociologist at the
University of Edinburgh. This feedback process is known as
counter-performativity and had been noted before, for instance with
Black-Scholes. With CDOs the models’ popularity boosted demand, which
lowered the quality of the asset-backed securities that formed the
pools’ raw material and widened the gap between expected and actual
defaults (see chart 3).
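As a back-of-the-envelope illustration of the "assets moving together" blind spot described above, consider a toy one-factor Gaussian copula simulation, the same family of model that was used to rate CDO tranches. Every parameter here (portfolio size, default probability, correlation, trial count) is made up purely for illustration:

```python
# Toy one-factor Gaussian copula default simulation (stdlib only).
# All parameters are hypothetical illustrations, not any agency's model.
import random
from statistics import NormalDist

def simulate_portfolio_losses(n_names=100, pd=0.02, rho=0.3,
                              n_trials=5000, seed=42):
    """Count defaults per trial where each name's latent variable is
    X_i = sqrt(rho)*M + sqrt(1-rho)*Z_i, defaulting if X_i < barrier."""
    rng = random.Random(seed)
    barrier = NormalDist().inv_cdf(pd)     # default barrier for marginal PD
    losses = []
    for _ in range(n_trials):
        m = rng.gauss(0.0, 1.0)            # common (systematic) factor
        defaults = sum(
            1 for _ in range(n_names)
            if (rho ** 0.5) * m + ((1 - rho) ** 0.5) * rng.gauss(0.0, 1.0)
               < barrier
        )
        losses.append(defaults)
    return losses

def tail_prob(losses, k):
    """Empirical P(number of defaults >= k)."""
    return sum(1 for x in losses if x >= k) / len(losses)

independent = simulate_portfolio_losses(rho=0.0)
correlated = simulate_portfolio_losses(rho=0.5)
print(tail_prob(independent, 10), tail_prob(correlated, 10))
```

Holding the marginal 2% default probability fixed and merely turning on correlation multiplies the probability of a cluster of ten or more defaults by orders of magnitude; a model calibrated to the independent case will drastically understate the tail.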
Yet while the failure in structured finance, and the abnormal reliance on such theoretical constructs as the Gaussian copula, has been well documented, for some reason the same scrutiny has never shifted to other concepts that intervene directly in day-to-day, liquid markets such as those for plain vanilla stocks. Chief among these is the artificial construct of VaR, or Value at Risk.
For some, the crisis has shattered faith in the precision of models and
their inputs. They failed Keynes’s test that it is better to be roughly
right than exactly wrong. One number coming under renewed scrutiny is
“value-at-risk” (VAR), used by banks to measure the risk of loss in a
portfolio of financial assets, and by regulators to calculate banks’
capital buffers. Invented by eggheads at JPMorgan in the late 1980s,
VAR has grown steadily in popularity. It is the subject of more than
200 books. What makes it so appealing is that its complex formulae
distil the range of potential daily profits or losses into a single
number. Frustratingly, banks introduce their own quirks into VAR
calculations, making comparison difficult. For example, Morgan
Stanley’s VAR for the first quarter of 2009 by its own reckoning was
$115m, but using Goldman Sachs’s method it would have been $158m. The
bigger problem, though, is that VAR works only for liquid securities
over short periods in “normal” markets, and it does not cover
catastrophic outcomes. If you have $30m of two-week 1% VAR, for
instance, that means there is a 99% chance that you will not lose more
than that amount over the next fortnight. But there may be a huge and
unacknowledged threat lurking in that 1% tail.
So chief executives would be foolish to rely solely, or even
primarily, on VAR to manage risk. Yet many managers and boards continue
to pay close attention to it without fully understanding the
caveats—the equivalent of someone who cannot swim feeling confident of
crossing a river having been told that it is, on average, four feet
deep, says Jaidev Iyer of the Global Association of Risk Professionals.
Regulators are encouraging banks to look beyond VAR. One way is to
use CoVAR (Conditional VAR), a measure that aims to capture spillover
effects in troubled markets, such as losses due to the distress of
others. This greatly increases some banks’ value at risk. Banks are
developing their own enhancements. Morgan Stanley, for instance, uses
“stress” VAR, which factors in very tight liquidity constraints.
Like its peers, Morgan Stanley is also reviewing its stress testing,
which is used to consider extreme situations. The worst scenario
envisaged by the firm turned out to be less than half as bad as what
actually happened in the markets. JPMorgan Chase’s debt-market stress
tests foresaw a 40% increase in corporate spreads, but high-yield
spreads in 2007-09 increased many times over. Others fell similarly
short. Most banks’ tests were based on historical crises, but this
assumes that the future will be similar to the past. “A repeat of any
specific market event, such as 1987 or 1998, is unlikely to be the way
that a future crisis will unfold,” says Ken deRegt, Morgan Stanley’s
chief risk officer.
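To make the "$30m of two-week 1% VAR" reading above concrete, here is a minimal historical-simulation sketch, with a crude conditional variant in the spirit of CoVAR bolted on. The P&L series is synthetic and every name and parameter is a hypothetical illustration:

```python
# Toy historical-simulation VaR, plus a crude conditional variant in the
# spirit of CoVAR. All series and parameters are synthetic illustrations.
import random

def historical_var(pnl, confidence=0.99):
    """Loss threshold not exceeded with the given confidence, read
    straight off the empirical P&L distribution (reported as positive)."""
    ordered = sorted(pnl)
    return -ordered[int((1 - confidence) * len(ordered))]

rng = random.Random(7)
# 5,000 synthetic daily P&L observations for a hypothetical book:
# the book loads on a "market" factor plus idiosyncratic noise.
market = [rng.gauss(0, 1_000_000) for _ in range(5000)]
book = [0.8 * m + rng.gauss(0, 800_000) for m in market]

# Plain 99% VaR: on 99% of days the loss should not exceed this.
# It says nothing about how bad the remaining 1% of days can be.
var_99 = historical_var(book, 0.99)

# CoVAR-flavoured variant: the same calculation restricted to days when
# the market factor itself sits in its worst 5%, so distress spills over.
cut = sorted(market)[len(market) // 20]
distressed = [b for b, m in zip(book, market) if m <= cut]
covar_95 = historical_var(distressed, 0.95)

print(f"99% VaR: ${var_99:,.0f}; "
      f"95% VaR given market distress: ${covar_95:,.0f}")
```

Conditioning on distress pushes the number well above the unconditional figure at the same confidence level, which is exactly why the article notes that CoVAR "greatly increases some banks' value at risk."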
Of course quants will be the first to ridicule stress scenarios, simply due to the way they approach the future: always, in every situation, as merely a mean-reversion phenomenon. That the mean itself may have drifted over the past 20-30 years is treated as irrelevant. And until the actual crash occurs, nobody will ever admit that blind faith in regression-based models is folly.
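The reflex described above can be caricatured in a few lines: a z-score signal built on a trailing window will keep flagging a permanently re-based series as an extreme "buy," because the model assumes the old mean still governs the future. The series and window length below are invented for illustration:

```python
# A caricature of the mean-reversion reflex: a z-score model keeps
# calling a permanently shifted series "cheap". Series is made up.
from statistics import mean, stdev

def zscore(window, value):
    """How many trailing-window standard deviations 'value' sits
    away from the trailing-window mean."""
    return (value - mean(window)) / stdev(window)

# 100 observations oscillating around 100, then a regime change to 70.
series = [100 + (i % 7 - 3) * 0.5 for i in range(100)] + \
         [70 + (i % 7 - 3) * 0.5 for i in range(30)]

window = series[:100]                       # the model's "history"
signals = [zscore(window, x) for x in series[100:]]

# Every post-break observation looks like an extreme buy to the model,
# because it assumes the old mean still governs the future.
print(min(signals), max(signals))
```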
For VAR, it may be hopeless at signalling rare severe losses, but the
process by which it is produced adds enormously to the understanding of
everyday risk, which can be just as deadly as tail risk, says Aaron
Brown, a risk manager at AQR. Craig Broderick, chief risk officer at
Goldman Sachs, sees it as one of several measures which, although of
limited use individually, together can provide a helpful picture. Like
a slice of Swiss cheese, each number has holes, but put several of them
together and you get something solid.
Tail risk is indeed nothing new; some, such as Nassim Taleb, have made a career out of it. Yet it is precisely the nature of statistics, as Taleb points out, that makes modeling for fat tails so complex. That complexity may come from the very nature of the modelling systems, the clash of various corporate cultures, the infrastructural inability to process enough risk factors, or simply human error.
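A quick sketch of why fat tails matter: under a normal model a 4-sigma move is a freak event, while under a modestly heavy-tailed alternative (Student's t with 3 degrees of freedom, chosen purely for illustration) the same move is hundreds of times more likely:

```python
# Comparing 4-sigma tail probabilities: normal vs Student's t(3).
# The degrees-of-freedom choice is illustrative, not calibrated.
import random
from statistics import NormalDist

def student_t_sample(rng, df):
    """Draw from Student's t as Z / sqrt(chi2_df / df)."""
    z = rng.gauss(0.0, 1.0)
    chi2 = sum(rng.gauss(0.0, 1.0) ** 2 for _ in range(df))
    return z / (chi2 / df) ** 0.5

rng = random.Random(0)
n = 200_000
# Empirical two-sided tail probability of a move beyond 4 under t(3).
t_tail = sum(1 for _ in range(n) if abs(student_t_sample(rng, 3)) > 4) / n
# Exact two-sided tail probability under the standard normal.
normal_tail = 2 * (1 - NormalDist().cdf(4))

print(f"P(|move| > 4): normal {normal_tail:.2e}, t(3) {t_tail:.2e}")
```

A risk system that assumes the first number when reality delivers the second is not slightly wrong; it is wrong by a couple of orders of magnitude, precisely in the region where it matters.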
A report by bank supervisors last October pointed to poor risk
“aggregation”: many large banks simply do not have the systems to
present an up-to-date picture of their firm-wide links to borrowers and
trading partners. Two-thirds of the banks surveyed said they were only
“partially” able (in other words, unable) to aggregate their credit
risks. The Federal Reserve, leading stress tests on American banks last
spring, was shocked to find that some of them needed days to calculate
their exposure to derivatives counterparties.
The banks with the most dysfunctional systems are generally those,
such as Citigroup, that have been through multiple marriages and ended
up with dozens of “legacy” systems that cannot easily communicate with
each other. That may explain why some Citi units continued to pile into
subprime mortgages even as others pulled back.
In the depths of the crisis some banks were unaware that different
business units were marking the same assets at different prices. The
industry is working to sort this out. Banks are coming under pressure
to appoint chief data officers who can police the integrity of the
numbers, separate from chief information officers who concentrate on
system design and output.
Quant Paul Wilmott went against the prevailing grain a year ago when, in a scathing op-ed, he denounced the self-perceived infallibility of quants. And, as is always the case, he is right: in their intellectual sophistry, math Ph.D.s would be the last to admit that their worldview is wrong (they are about as bad as economists in that regard). The only difference is that economists are generally irrelevant to actual market activities, whereas Fields Medal winners tend to have an ever greater impact on the market.
The way forward is not to reject high-tech finance but to be honest
about its limitations, says Emanuel Derman, a professor at New York’s
Columbia University and a former quant at Goldman Sachs. Models should
be seen as metaphors that can enlighten but do not describe the world
perfectly. Messrs Derman and Wilmott have drawn up a modeller’s
Hippocratic oath which pledges, among other things: “I will remember
that I didn’t make the world, and it doesn’t satisfy my equations,” and
“I will never sacrifice reality for elegance without explaining why I
have done so.” Often the problem is not complex finance but the people
who practise it, says Mr Wilmott. Because of their love of puzzles,
quants lean towards technically brilliant rather than sensible
solutions and tend to over-engineer: “You may need a plumber but you
get a professor of fluid dynamics.”
Yet the key take-home message appears at the very end of the Economist article:
One way to deal with that problem is to self-insure. JPMorgan Chase
holds $3 billion of “model-uncertainty reserves” to cover mishaps
caused by quants who have been too clever by half. If you can make
provisions for bad loans, why not bad maths too?
A terrific rhetorical question, and one which regulators should certainly take to heart. If JP Morgan is wise (or foolish) enough to self-insure in this manner (even though there is no guarantee that an arbitrary $3 billion would cover a systemic crash), it does provide a perspective on the scale of the problem. Multiply JPM's exposure by several hundred to account for all existing quant participants, most of whom have absolutely no reserve provisions, and you get a sense of what is at stake should a systemic event occur.