JP Morgan To Use Algorithms To Predict Which Employees Will Go Rogue

Earlier today we noted that Gregg Berman, the SEC’s HFT “investigator” whose algo investigating hilariously involved the use of a platform created by an HFT firm, will be leaving the regulator later this month. Perhaps he can give Jamie Dimon a ring, because as it turns out, JP Morgan is in the process of creating a new division which, much like the SEC, will deploy machines in an effort ostensibly designed to catch people doing things wrong. In the end, it will likely prove completely ineffectual at rooting out the type of manipulation and general chicanery that has, over time, transformed nearly every market into a farcical, manipulated shadow of its former self. 

The idea at JPM is to identify employees who are exhibiting the early signs of going rogue (we would suggest that placing giant notional bets in off-the-run CDS indices would be one such early warning signal) by using algorithms to “monitor dozens of inputs,” like measures of excessive risk-taking and substituting martini lunches for compliance meetings. Here’s more via Bloomberg:

[The effort is being overseen by] Sally Dewar, head of regulatory affairs for Europe. Dozens of inputs, including whether workers skip compliance classes, violate personal trading rules or breach market-risk limits, will be fed into the software.

“It’s very difficult for a business head to take what could be hundreds of data points and start to draw any themes about a particular desk or trader,” Dewar, 46, said last month in an interview. “The idea is to refine those data points to help predict patterns of behavior.”

JPMorgan’s surveillance program, which is being tested in the trading business and will spread throughout the global investment-banking and asset-management divisions by 2016, offers a glimpse into Wall Street’s future. An industry reeling from billions of dollars in fines for the actions of employees who rigged markets, cheated clients and aided criminals is turning to technology to police itself better. Failure to do so will provide ammunition for those pushing to separate trading operations from retail banks…

The program was hinted at in a report published in December on the bank’s website, “How We Do Business,” signed by CEO Jamie Dimon. It outlines ways the firm is improving compliance, including starting a global communications surveillance program.

“We recognized that enhancing market conduct would require using multiple preventive and detective levers in a coordinated way,” JPMorgan said in the report.
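For the curious, the “dozens of inputs” approach described above can be caricatured as a weighted risk score. To be clear, this is a toy sketch, not JPM’s actual system: every feature name, weight, and threshold below is hypothetical, since the bank has published no details of its model.

```python
# Toy sketch of a "rogue employee" risk score built from compliance
# signals. All feature names, weights, and the flag threshold are
# hypothetical -- JPMorgan has not disclosed how its model works.

# Hypothetical per-employee inputs, loosely following the Bloomberg
# description: skipped compliance classes, personal-trading
# violations, market-risk limit breaches.
WEIGHTS = {
    "skipped_compliance_classes": 1.0,
    "personal_trading_violations": 3.0,
    "market_risk_limit_breaches": 5.0,
}
FLAG_THRESHOLD = 10.0  # hypothetical cutoff

def risk_score(signals: dict) -> float:
    """Collapse many data points into a single number per trader."""
    return sum(WEIGHTS[k] * signals.get(k, 0) for k in WEIGHTS)

def flag(signals: dict) -> bool:
    """Flag the trader for review once the score crosses the cutoff."""
    return risk_score(signals) >= FLAG_THRESHOLD

trader = {
    "skipped_compliance_classes": 2,
    "personal_trading_violations": 1,
    "market_risk_limit_breaches": 1,
}
print(risk_score(trader))  # 2*1 + 1*3 + 1*5 = 10.0
print(flag(trader))        # True
```

A production system would presumably use statistical anomaly detection trained on historical misconduct cases rather than hand-picked weights, but the basic move is the same: many noisy data points in, one score per desk or trader out.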

If this sounds creepily reminiscent of a sci-fi thriller, that’s because it kind of is:

Technology that predicts behavior, as in the 2002 science-fiction movie “Minority Report,” in which Tom Cruise plays a Precrime officer who hunts down murder suspects before they can act, raises ethical questions.

“What they’re trying to do is forecast human behavior,” said Mark Williams, a former Federal Reserve bank examiner who’s now a lecturer at Boston University’s Questrom School of Business. “Policing intentions can be a slippery slope. Do people get a scarlet letter for something they have yet to do?”

The answer, apparently, is “yes.” And if algos’ periodic “mistakes” in the market are any indication of what could happen should a program, through some miscalculation or short circuit, go rogue in the process of identifying human rogues, we suspect there’s potential for rather unfortunate outcomes. If one cannot trust the machines to differentiate between a Tesla April Fools’ joke and the real announcement of a new line of cars, we wonder how they’ll distinguish between humor, sarcasm, and genuine red flags. More generally, appointing the same technology that facilitates flash crashes when it’s supposed to be providing liquidity to the position of head HR enforcer might create unintended consequences. Paradoxically, if the technology is better at screwing up than it is at doing its job, it could actually lead to more lawsuits on balance: an employee who is let go, has their pay docked, or is otherwise aggrieved by a machine that erroneously brands him or her a threat is very likely to sue. 

Of course when it comes to combating the types of behavior which lead governments to sue banks, there are options other than deploying Skynet — like fostering integrity and improving the corporate culture, both of which seem far more intuitive than subjecting the company’s entire base of human capital to surveillance by artificial intelligence:

New technology is half of a two-pronged effort to reduce legal bills. The other part involves a review of the firm’s culture.

Well, at least it was an afterthought.

In the end though, it’s not really clear what all the fuss is about because as Bloomberg reminds us, JPM really hasn’t been involved in too many questionable activities. About the only things that come to mind are “fraudulent mortgage-bond sales, the $6.2 billion London Whale trading loss, services provided to Ponzi-scheme operator Bernard Madoff and the rigging of currency and energy markets.”

Other than that, everything is on the up and up.