"The World Is In Peril": Anthropic's Safety Boss Quits

by Tyler Durden

Authored by Kay Rubacek via The Epoch Times,

Most people have never heard of Mrinank Sharma. That is part of the problem.

Earlier this month, Sharma resigned from Anthropic, one of the most influential artificial intelligence companies in the world.

He had led its Safeguards Research Team, the group responsible for ensuring that Anthropic’s AI could not be used to help engineer a biological weapon.

His final project was a study of how AI systems distort the way people perceive reality. It was serious, consequential work for humankind.

His resignation letter was seen more than 14 million times on X.

It opened with the words, “the world is in peril.”

And it ended with a poem and the announcement that he was leaving one of the most consequential jobs in artificial intelligence to pursue a poetry degree. Yes, you read that right: peril and poetry.

The poem he quoted is “The Way It Is,” by the American poet William Stafford.

It speaks of a thread that runs through a life—a thread that goes among things that change, but does not change itself. While you hold it, you cannot get lost. Tragedies happen. People suffer and grow old. Time unfolds, and nothing stops it. And the final line: you don’t ever let go of the thread.

Although he didn’t state it explicitly, I argue that the thread is morality. It is the enduring sense that some things are right and some things are wrong—not because a law says so, and not because it is profitable, but because human beings, at their best, have always known it.

Sharma spent two years watching that thread being let go under pressure, in rooms the public is never shown.

His letter said:

“Throughout my time here, I’ve repeatedly seen how hard it is to truly let our values govern our actions.

“I’ve seen this within myself, within the organization, where we constantly face pressures to set aside what matters most, and throughout broader society, too.”

He wrote that humanity is approaching a threshold where “our wisdom must grow in equal measure to our capacity to affect the world, lest we face the consequences.”

He wanted to contribute in a way that felt fully in his integrity and to devote himself to what he called “the practice of courageous speech.”

A man who built defenses against bioterrorism concluded that the most important thing he could do next was learn to speak with honesty and courage.

That is a major signal about what is happening behind closed doors in AI research and development.

Many experts have compared the development of AI to the development of the atomic bomb. The Manhattan Project was built in total secrecy. The public had no knowledge of it, no voice in how it was used, and no say in what came after. When it was over, some of the scientists who built it spent the rest of their lives in anguish. Several walked away during the project itself.

Sharma was not alone. Numerous safety researchers have walked off AI projects from multiple companies. These departures may be the only signals we, the public, have, because almost everything else about AI development is happening beyond public view. The internal debates, the safety trade-offs, the negotiations over what this technology will and will not be permitted to do—none of it is being shared with the people whose lives it will most profoundly shape. We are not part of this conversation. We are being presented with outcomes and told to adapt.

John Adams wrote that the Constitution was made only for a moral and religious people, and is wholly inadequate for any other. George Washington warned that liberty cannot survive the loss of shared moral principles. The founders studied the collapse of republics throughout history and arrived at the same conclusion: The machinery of freedom requires a moral people to sustain it. Laws and institutions are not enough on their own. They depend on citizens and leaders who hold themselves to something that exists before the law and above it.

That is the thread of human society, and no AI system holds it. If people allow AI to replace the question of right and wrong with the measure of what is legal and permitted, the machine will carry that measure forward at a scale and speed that no previous generation has had to reckon with.

Sharma ended his resignation letter with the line, “You don’t ever let go of the thread.”

We are at a crossroads not unlike the one the atomic scientists faced.

Sharma’s resignation was a signal.

The wave of departures before and after it is a signal.

The reported tensions between AI companies and government over where moral limits should be drawn are also signals.

Together, they are pointing at something the public has not yet been fully invited to consider: that the most important questions about this technology are being worked out without us, and that the thread of morality, which has always required people to hold it by choice, needs to be part of that conversation.

Views expressed in this article are opinions of the author and do not necessarily reflect the views of The Epoch Times or ZeroHedge.
