Guest Post: Robots and Reptiles
Submitted by JM
Robots and Reptiles (pdf)
Keywords: Actuarial Office Model, Dynamical Systems, Reliability Theory, Aiko the Fembot
Financial markets are about optimism, forward thinking, and the peddling of hope. They are fodder for our great strength and great weakness: being positive in spite of Pandora’s miseries one moment, and believing things are fine as the canoe goes over the waterfall the next. The huge difference between how humans think and machines compute deserves some thought.
People use their reptilian id and machines use their literal precision for a common purpose: pattern recognition. Such recognition is built upon hard-won human experience of pleasure and pain for people, and a good set of specification code for machines. This highlights the weakness of robot processing. Robust pattern recognition requires context. For example, prior to an FOMC meeting, money can be made by putting tight stop-losses on a simultaneous purchase of 3x long and 3x short S&P ETFs. A rule can be built for computer implementation to mimic this, but the rule is unsuitable without context (for lack of a better term) that knows when an FOMC meeting actually means something and when it is trivial. Situating a context is the distinctive human method of thinking through intractability.
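To make that concrete, here is a toy sketch of the context-free version of the rule. Everything in it is my own illustration: the price paths, the 2% stop level, and the simplifying assumption that a stopped leg exits exactly at its stop.

```python
# A toy sketch (hypothetical prices and parameters) of the context-free
# FOMC straddle: buy 3x long and 3x short S&P ETFs simultaneously,
# with a tight stop-loss on each leg.

def straddle_pnl(long_path, short_path, stop_loss_pct=0.02):
    """Hold each leg until its stop triggers; otherwise hold to the end."""
    def leg_pnl(path):
        entry = path[0]
        for price in path[1:]:
            if (price - entry) / entry <= -stop_loss_pct:
                return -stop_loss_pct * entry  # stopped out at the stop level
        return path[-1] - entry                # survived: ride the move
    return leg_pnl(long_path) + leg_pnl(short_path)

# A meaningful FOMC meeting: a big move stops one leg out cheaply
# while the other leg rides the trend.
long_etf  = [100.0, 99.0, 104.0, 110.0]  # hypothetical 3x long prices
short_etf = [100.0, 101.0, 96.0, 90.0]   # hypothetical 3x short prices
print(straddle_pnl(long_etf, short_etf)) # 8.0: small stop loss, big gain
```

The mechanics are trivial to encode; what the code cannot know is whether this particular meeting is one where a big move is actually likely, which is exactly the missing context.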
Trading based on a context isn’t perfect by any means. Context is only relevant in the intervals between liquidity breakdowns: there is a normal, a crisis, then a new normal. Thus trading based on a context works in a “local” sense, but outside a given neighborhood, both intuition and explicit programming fail. Panics are a part of anything people touch. Survival takes thinking outside the model about nonstationarities. It takes liquidity management. For example, the relative difference between a pricing model output and a valuation model output becomes irrelevant in the face of panic and liquidation.
Call a crisis any multi-sigma event. Using the working notion of liquidity as the space between crises, it is clear that liquidity stopping times vary by asset. CDOs combine complicated cashflows that are hard to price. Their valuation can become very sensitive to the assets they include and how they are combined.
A way to gauge this sensitivity is with statistical concepts that describe how confidently one can distinguish the performance of one asset from another: the ratio of signal to noise. The between-asset variation is the signal. The within-CDO measurement error is the noise. Measured on a scale of 0 to 1: 0 means all variability is due to noise or measurement error; 1 means all the variability is due to real differences in asset performance.
Reliability is a function of asset variation and the number of assets bundled in the CDO. CDOs can take on different behavior when within-asset error is increased and between-asset variation is reduced. Put simply, when all assets within a CDO are strongly correlated, the CDO is subject to big swings up and down. When the combined assets have limited or negative correlation, the CDO is more stable. Performance is hard to estimate when the distributions of returns are radically different for the assets combined in the CDO.
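As a rough sketch of the arithmetic (my formulation; nothing here is a standard pricing model), the signal-to-noise ratio above can be treated as an intraclass-correlation-style reliability, and the effect of bundling n assets follows the classical Spearman-Brown form:

```python
# Reliability as signal-to-noise: between-asset variation is signal,
# within-asset measurement error is noise. Variance numbers below are
# made up for illustration.

def single_asset_reliability(var_between: float, var_within: float) -> float:
    """0 = all noise/measurement error; 1 = all real performance differences."""
    return var_between / (var_between + var_within)

def bundle_reliability(var_between: float, var_within: float, n_assets: int) -> float:
    """Reliability of an n-asset bundle (classical Spearman-Brown scaling)."""
    r = single_asset_reliability(var_between, var_within)
    return n_assets * r / (1 + (n_assets - 1) * r)

print(bundle_reliability(0.2, 0.8, n_assets=1))   # ~0.20: one noisy asset
print(bundle_reliability(0.2, 0.8, n_assets=50))  # ~0.93: bundling raises it
```

The same arithmetic cuts both ways: shrink the between-asset variation or inflate the within-asset error, and the reliability of even a large bundle collapses.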
The bottom line for financial markets is that you buy things that are cheap and you sell them when they are rich. But implementation is never as easy as it seems.
Supercomputers can conduct a sensitivity analysis (allowing some moving parts to change while holding other moving parts constant) for literally millions of special cases. Humans can’t possibly do this. Instead, they value things by finding the most closely related asset with a liquid price, determining the relevant differences, and factoring in a risk premium based on those differences. This is what phynancial types do.
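For illustration, here is the brute-force spirit of that sensitivity analysis: a stylized coupon-bond pricer swept across a grid of special cases, one moving part per axis. The pricer and the grid bounds are my toy choices.

```python
# Brute-force sensitivity analysis: vary coupon, rate, and maturity
# while holding everything else constant, across 450,000 special cases.
import itertools

def bond_price(face: float, coupon: float, rate: float, years: int) -> float:
    """Price a plain coupon bond by discounting its cashflows."""
    coupons = sum(face * coupon / (1 + rate) ** t for t in range(1, years + 1))
    return coupons + face / (1 + rate) ** years

coupons = [c / 1000 for c in range(10, 110)]  # 100 coupon levels, 1.0%-10.9%
rates   = [r / 1000 for r in range(1, 151)]   # 150 discount rates, 0.1%-15.0%
years   = range(1, 31)                        # 30 maturities

grid = itertools.product(coupons, rates, years)
worst = min(bond_price(100.0, c, r, y) for c, r, y in grid)
print(f"worst-case price across the grid: {worst:.2f}")
```

A human valuer never walks this grid; she anchors on the nearest liquid comparable and adjusts, which is the point of the paragraph above.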
Can a human and robot love connection fuse these latent and formative approaches?
I’ll Get By With a Little Help From My Fembot
Seriously, there are no fully functional “learning and autonomous” fembots yet, and I won’t jump into the associated philosophical issues here. Computers are servants to the logical instructions designed for the machine to perform. By implication, their performance is based on a programmer’s ability to develop and formulate code for a given problem. Whether computers become sentient is irrelevant; that standard isn’t really necessary. Computers have already evolved to the point where the problem posed can be quite general, and that is in most ways good enough. What is decisive is whether they operate with an order of complexity that mimics sentience above some level of tolerance. Deep Blue beating Garry Kasparov is an example of this “good-enough” mimicry, and the example itself highlights the strengths and weaknesses of human and machine reasoning.
How human reasoning sucks compared to fembots:
- We forget too often; sometimes we remember bad things too much, sometimes not enough.
- We think slowly.
- We lack formal precision. One wrong step at the beginning screws up the whole thing.
- Our brains get tired and we lose concentration. Then we screw up long complicated logical tasks consistently.
- We are emotional; we think with our genitalia and other biological imperatives when other organs would serve better. Our reptilian id gets in the way.
How unimpaired human reasoning busts a cap in computer reasoning:
- We learn by identifying regularities; computers must have a “regularity” defined before they can recognize one.
- Related: we visualize; perhaps this is the hallmark of context.
- We are intuitive. To work around our weaknesses, we adapt seemingly unrelated solution methods to problems we think are interesting.
- We can amend the central categories of our reasoning. We step outside the box and create unity from the broken pieces.
In some ways these differences are only that: neither advantages nor impairments. In another sense, our strengths can be reduced to an understanding of that hard-to-define word, context. In other senses, we have decided disadvantages.
A Robot Approach to Securitization
Let’s start with a basic formulation of asset securitization through robot eyes. Actuaries were the first to design securitization methods using the well-known (to actuaries) office model. Consider random variables x_{t-s,j}, each representing a single mortgage, aggregated into a very simple security P(t) using the following cashflow-delinquency process:

P(t) = \sum_{s} \sum_{j} \left( K_s - A_{t-s,j} \right) I_{t-s,j}

where:
K_s = cashflow at duration s per mortgage;
A_{t-s,j} = delinquent payment, i.e. an obligated payment less than required, on mortgage j;
I_{t-s,j} = default indicator (0 if mortgage x_{t-s,j} is in default at time t, 1 otherwise).
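A minimal sketch of that aggregation, under my own simplifying assumptions (a fixed scheduled cashflow per duration, randomly drawn delinquencies and defaults):

```python
# Office-model aggregation: sum (K_s - A) * I over durations s and
# mortgages j. All pool parameters below are made-up toy values.
import random

def security_value(K, A, I):
    """P(t) = sum_s sum_j (K[s] - A[s][j]) * I[s][j]."""
    return sum(
        (K[s] - A[s][j]) * I[s][j]
        for s in range(len(K))
        for j in range(len(A[s]))
    )

random.seed(1)
K = [100.0, 100.0, 100.0]                                         # scheduled cashflow per duration
A = [[random.choice([0.0, 25.0]) for _ in range(4)] for _ in K]   # delinquent shortfalls
I = [[random.choice([1, 1, 1, 0]) for _ in range(4)] for _ in K]  # 1 = performing, 0 = defaulted

print(f"P(t) = {security_value(K, A, I):.2f}")
```

Even in this toy form, the combinatorics are visible: every extra layer of structure multiplies the cases the summation has to handle, which is where the tractability problem below comes from.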
The issue here is ensuring sufficient computational resources. As P(t) becomes more fully specified, indeterminacies can arise that make the summations involved in computing P(t) intractable. To ensure tractability, technical conditions have to be imposed on the model that may cause it to diverge from reality.
“In simple terms, the Gorton Model evaluated the risk of losses on the super senior portion of the CDO bonds; the Gorton Model did not measure the market value of the super senior portion of the CDO bonds, only the risk or likelihood of a default of each of the underlying reference obligations.
“The default rates in the Gorton Model were based upon severe recessionary market scenarios that were modeled to be worse than the worst post World War II recession.
“From the Gorton Model, AIGFP determined the attachment point for the super senior tranche of the CDO, i.e., the point where the model determined that there was sufficient subordination to adequately protect against credit losses, with an additional cushion built in for more protection. AIGFP would only write protection on deals where the Gorton Model showed that the risk of credit losses above this attachment point would be remote. At the time, the Gorton Model gave us confidence that there was an extremely small risk that any of our positions in the MSCDS portfolio would ever experience any sort of loss.
“…in July 2005, I questioned whether in our modeling we needed to consider additional analysis of deals containing large amounts of interest-only loans, deals that were biased towards low FICO scores, and deals that were heavily concentrated in particular geographic regions. Further, I asked whether we should strengthen the process used to evaluate the CDO managers who were in charge of these deals, particularly in non-static deals where the managers had the ability to add and remove collateral within certain limits.
“… the problem that we at AIGFP [AIG Financial Products] and many of our counterparties faced is the simple fact that by the fall of 2007, there was no longer an existing market, much less a liquid market, for the instruments we were trying to price.”
—Andrew Forster, Executive Vice President, AIG Financial Products,
Testimony before the Financial Crisis Inquiry Commission
Was the Gorton Model wrong? No. The Gorton Model evaluated the default risk embedded in securitizations. As such, it worked in the local intervals between crises when liquidity was sufficient. It wasn’t designed to cope with contingencies outside of that domain.
The quoted testimony above points out both the eagle-eye power of human foresight about what could go wrong (it did) and the fragile foundations that confidence creates. If anything, the testimony shows that adherence to models without a robust view of the technical assumptions, the methods used, and the nature (forget probabilities) of adverse contingencies is disastrous.
The Future: Long Live Securitization
There was a time when banks lived off the spread between deposit/wholesale funding rates and loan rates less default costs. There were good times and bad, and for the bad times, taking on more term risk in government securities was an option. Due diligence was a time-intensive chore, but a manageable one, because banks held onto the loans they originated and monitoring costs were lower.
Banks will not go back to being loan-holding Pleasantville-style institutions. Nor will they continue to leverage up to the gills with collateralized collateral on collateral. They will continue to move loans off book and securitize, but CDO structures will be simpler, so that they remain tractable. Transparent mechanics mean easier pricing.
Securitizations aren’t going away because they are just too good an idea.
The most interesting part of future securitization is conditioned on whether a wholly different type of reasoning emerges, one that incorporates the best elements of the human and the machine: reasoning that is intuitive and geometric and able to usefully integrate techniques from unexpected sources, and at the same time computationally mistake-free and able to crunch through a multitude of special cases with raw, brute-force computation. Sign me up!
A marriage like that is interesting because neither approach works perfectly in all contingencies, but together they would work better across all of them. Some problems don’t have a solution. Some problems have solutions, but one can never derive them. Some have solutions, but getting to them is extremely, extremely hard. Some are tractable, manageable problems. Computers and humans can both solve tractable problems in different ways, but no reasoning of any type can solve truly intractable problems.
Humans manage intractable problems. They fool other humans into thinking that there is a solution. The fooler can then manipulate the resultant herd while acting rationally herself. Foolers can even be overcome by their own self-delusion, a common problem when reputation has strong connections to income and social status.