
Financial Crisis "Nearly Unavoidable" Without AI Regulation, SEC's Gensler Warns

by Tyler Durden
Wednesday, Oct 18, 2023 - 09:45 AM

US Securities and Exchange Commission (SEC) boss Gary Gensler says a financial crisis brought on by artificial intelligence (AI) will be "nearly unavoidable" without swift and meaningful action by regulators.

In an interview with the Financial Times, Gensler suggested that because more and more Wall Street firms are using AI to detect fraud and conduct market surveillance, multiple institutions might base their decisions on the same data models. The ensuing herd mentality could 'undermine stability' and inadvertently unleash another crisis that would lead to recession, Gensler said.

"I do think we will in the future have a financial crisis, and in the after-action reports, people will say 'Aha! There was either one data aggregator or one model we've relied on," he said, adding "Maybe it's in the mortgage market, maybe it's in some sector of the equity market."

In other words, there's a new scapegoat in town!

Gensler predicts that an AI-driven financial crisis could happen as soon as the end of the current decade or the early 2030s.

As the Epoch Times notes, AI regulation is an uphill battle.

While Mr. Gensler sees more regulation around AI as necessary, he concedes that shaping AI regulation will be a "hard challenge."

"It's a hard financial stability issue to address because most of our regulation is about individual institutions, individual banks, individual money market funds, individual brokers; it's just in the nature of what we do," Mr. Gensler said.

"And this is about a horizontal matter, whereby many institutions might be relying on the same underlying base model or underlying data aggregator."

The SEC has already taken some steps on rules around AI. In July, the regulator proposed "conflict of interest" rules that would prevent investment firms from placing their interests ahead of investors' while using predictive data analytics (PDA) or similar AI technology. The proposal has drawn criticism from firms, chiefly that it could hinder innovation and the use of AI.

Mr. Gensler isn't the only regulator sounding alarm bells over AI's potential to cause harm. The Federal Trade Commission has previously launched a review of OpenAI, the creator of ChatGPT, over concerns about consumer harm and data security.

In September, Senate Majority Leader Chuck Schumer (D-N.Y.) hosted a series of closed-door listening sessions with tech leaders, including Elon Musk, Sam Altman, and Satya Nadella, to discuss the regulation of AI.

Mr. Musk previously called on all AI labs to "immediately pause" training of systems more powerful than GPT-4 for at least six months in an open letter signed by dozens of AI experts and industry executives. However, Mr. Musk also announced plans to use the technology in new products around the same time.

Forum Criticized for No Real Progress

Many of those present at Mr. Schumer's September AI forum later said that the meeting produced little to no real progress on the development of legislation. Sen. John Kennedy (R-La.) noted that the forum heard from 30 speakers, each with only three minutes to talk, leaving little time for meaningful discussion or questions.

“In terms of regulatory suggestions, I didn’t hear much,” he told The Epoch Times at the time.

Sen. Josh Hawley (R-Mo.) said he doubted Mr. Schumer’s sincerity in seeking solutions to protect the U.S. public from AI.

“It’s a little bit like with antitrust the last two years,” Mr. Hawley said at the time.

“He talks about it constantly and does nothing about it. You’ll see. My sense is that a big part of what this is, is a lot of song and dance that covers the fact that actually nothing is advancing.”

The forum came just one day after a Sept. 12 Senate Judiciary subcommittee hearing on AI regulation, at which experts testified that government inaction and weak rules governing AI development are directly harming Americans while profiting major tech corporations.

Woodrow Hartzog, a professor of law at Boston University, told the Sept. 12 hearing that “half measures,” such as audits and controls implemented after AI systems have already been deployed, are putting the safety of American citizens at risk.

Andrew Thornebrooke contributed to this report.
