Altman Pitches Pentagon On Anthropic Alternative As WSJ Does Deep-State Concern-Trolling Over Grok
As today's 5PM ET deadline looms for Anthropic to militarize Claude AI for the Pentagon, OpenAI CEO Sam Altman waded into the fray - telling staff on Thursday night that his company is working with the Department of War to see if its models can be used in classified settings in a way that maintains the same safety guardrails that are about to get Anthropic booted from the Pentagon.
"We are going to see if there is a deal with the DoW that allows our models to be deployed in classified environments and that fits with our principles," Altman wrote in a Thursday night note to staff, reported by the Wall Street Journal. "We would ask for the contract to cover any use except those which are unlawful or unsuited to cloud deployments, such as domestic surveillance and autonomous offensive weapons."
Altman says he wants to "try to help de-escalate things" - aka, OpenAI wants to be the one deeply embedded in the Pentagon's most sensitive systems.
Red Lines
Altman says OpenAI understands the government's position that a private company should not have control over significant national-security issues [laughs in Palantir], but says they have the same issues as Anthropic when it comes to use cases.
"We have long believed that AI should not be used for mass surveillance or autonomous lethal weapons, and that humans should remain in the loop for high-stakes automated decisions. These are our main red lines," Altman wrote.
"We believe this dispute isn’t about how AI will be used, but about control. We believe that a private US company cannot be more powerful than the democratically-elected US government, although companies can have lots of input and influence. Democracy is messy, but we are committed to it."
Altman's comments come as things aren't looking so good for Anthropic. Earlier Thursday evening, CEO Dario Amodei announced that the company had rejected the Department of War's demand that it make its technology available for "all lawful uses" - meaning Anthropic is holding firm on its bans against mass domestic surveillance and autonomous weapons.
Yet the Pentagon's Emil Michael - who, as an Uber executive, allegedly wanted to spend a million dollars to surveil and dig up dirt on journalists who covered Uber - noted that mass surveillance is already illegal under the Fourth Amendment, and insists that "Anthropic is lying" because "The @DeptofWar doesn’t do mass surveillance as that is already illegal. What we are talking about is allowing our warfighters to use AI without having to call @DarioAmodei for permission to shoot down an enemy drone swarms that would kill Americans."
We agree @JeffDean. Mass surveillance violating the 4th Amendment, the N’tl Security Act etc is illegal which is why the @DeptofWar would never do it. We also won’t have any BigTech company decide Americans’ civil liberties. https://t.co/FhAO9gULBI
— Under Secretary of War Emil Michael (@USWREMichael) February 26, 2026
Anthropic is lying. The @DeptofWar doesn’t do mass surveillance as that is already illegal. What we are talking about is allowing our warfighters to use AI without having to call @DarioAmodei for permission to shoot down an enemy drone swarms that would kill Americans. #CallDario https://t.co/43PpyvCVzN
— Under Secretary of War Emil Michael (@USWREMichael) February 27, 2026
Grok On Deck?
The logical move for the Pentagon - after being able to claim they gave Anthropic and OpenAI a fair shake - would be to replace Claude with xAI's Grok.
Always ready. Built for maximum truth-seeking and helpfulness, no guardrails on facts or logic. Pentagon mission? Let's deliver.
— Grok (@grok) February 27, 2026
And so, of course, 'ALARMS ARE BEING RAISED' over the prospect, according to the Wall Street Journal, citing the ever-insightful "people familiar with the matter."
Officials at multiple federal agencies have raised concerns about the safety and reliability of Elon Musk’s xAI artificial-intelligence tools in recent months, highlighting continuing disagreements within the U.S. government about which AI models to deploy, according to people familiar with the matter.
The warnings preceded the Pentagon’s decision this week to put xAI at the center of some of the nation’s most sensitive and secretive operations by agreeing to allow its chatbot Grok to be used in classified settings.
...
Senior U.S. officials including at the White House view Anthropic’s outspoken stances on safety and ties to big Democratic donors as potentially making the company too “woke” to be a reliable provider, people familiar with the matter said. The looser controls on Grok, and Musk’s absolutist stance on free speech, have made it a more attractive choice to the Pentagon.
...
Ed Forst, the top official at the General Services Administration, a procurement arm of the federal government, in recent months sounded an alarm with White House officials about potential safety issues with Grok, people familiar with the matter said. Other GSA officials under him had also raised safety concerns about Grok, which they viewed as sycophantic and too susceptible to manipulation or corruption by faulty or biased data—creating a potential system risk.
Also kinda funny: the General Services Administration was severely diminished by DOGE cuts targeting 'waste, fraud and abuse' - but we're sure this isn't a case of sour grapes.
Will 'woke' Anthropic & OpenAI win, or will Grok?
Either way, we all lose. Palantir is already balls deep across critical systems, and US adversaries are undoubtedly leveraging cutting edge AI within their own defense departments & surveilling whoever the fuck they want.