There’s another question beneath all this: Should it be down to tech companies to prohibit things that are legal but that they find morally objectionable? The government certainly viewed Anthropic’s willingness to play this role as unacceptable. On Friday evening, eight hours before the US launched strikes in Tehran, Defense Secretary Pete Hegseth issued harsh remarks on X. “Anthropic delivered a master class in arrogance and betrayal,” he wrote, and echoed President Trump’s order for the government to cease working with the AI company after Anthropic sought to keep its model Claude from being used for autonomous weapons or mass domestic surveillance. “The Department of War must have full, unrestricted access to Anthropic’s models for every LAWFUL purpose,” Hegseth wrote.
But unless OpenAI’s full contract reveals more, it’s hard not to see the company as sitting on an ideological seesaw: promising that it has leverage it will proudly use to do what it sees as the right thing, while deferring to the law as the main backstop for what the Pentagon can do with its tech.
There are three things to watch here. One is whether this position will be good enough for OpenAI’s most critical employees. With AI companies spending so heavily on talent, it’s possible that some at OpenAI will see Altman’s justification as an unforgivable compromise.
Second, there is the scorched-earth campaign that Hegseth has promised to wage against Anthropic. Going far beyond simply canceling the government’s contract with the company, he announced that it would be classified as a supply chain risk, and that “no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic.” There is significant debate about whether this death blow is legally possible, and Anthropic has said it will sue if the threat is pursued. OpenAI has also come out against the move.
Lastly, how will the Pentagon swap out Claude—the only AI model it actively uses in classified operations, including some in Venezuela—while it escalates strikes against Iran? Hegseth gave the department six months to do so, during which the military will phase in OpenAI’s models as well as those from Elon Musk’s xAI.
But Claude was reportedly used in the strikes on Iran just hours after the ban was issued, suggesting that a phase-out will be anything but simple. Even if the months-long feud between Anthropic and the Pentagon is over (which I doubt), we are now seeing the Pentagon’s AI acceleration plan pressure companies to abandon lines in the sand they had once drawn, with new tensions in the Middle East as the primary testing ground.
If you have information to share about how this is unfolding, reach out to me via Signal (username: jamesodonnell.22).