Pentagon official recalls ‘whoa moment’ when defense leaders realized how much they need Anthropic
The Defense Department’s realization of how deeply it relied on Anthropic’s AI came as a shock, and it ultimately led to the two sides’ dramatic schism, according to a top Pentagon official.
Emil Michael, the department’s under secretary for research and engineering as well as its chief technology officer, detailed the events leading up to the public feud in a Friday episode of the All-In podcast.
After the U.S. military’s raid on Venezuela in early January that captured dictator Nicolas Maduro, Anthropic asked Palantir if its AI was used in the operation. While Anthropic has characterized the inquiry as routine, the Pentagon and Palantir interpreted it as a potential threat to their access.
“I’m like, holy shit, what if this software went down, some guardrail picked up, some refusal happened for the next fight like this one and we left our people at risk?” Michael recalled. “So I went to Secretary Hegseth, I said this would happen and that was like a whoa moment for the whole leadership at the Pentagon that we’re potentially so dependent on a software provider without another alternative.”
Until recently, Anthropic’s Claude was the only AI model authorized in classified settings. The San Francisco-based startup has said it’s patriotic and seeks to defend the U.S., but won’t allow its AI to be used in mass domestic surveillance or autonomous weapons.
The Pentagon insisted it would use the AI in lawful scenarios and refused to abide by any limits from the company that would go beyond those constraints.
After failing to reach a compromise last week, President Donald Trump ordered the federal government to stop using Anthropic while giving the Pentagon six months to phase it out. Defense Secretary Pete Hegseth also designated the company a supply-chain risk, meaning contractors can’t use it for military work.
For now, the military continues to use Anthropic during the U.S. war on Iran, as AI helps warfighters identify potential targets at a rapid pace.
During his podcast appearance, Michael raised the concern that a rogue developer could “poison the model” to render it ineffective for the military, train it to hallucinate on purpose, or instruct it to disregard instructions.
He then contacted OpenAI, which eventually reached a deal similar to Anthropic’s. Elon Musk’s xAI was also brought into the classified fold, and the Pentagon is trying to get Google’s AI approved for classified settings as well.
“I’m not biased,” Michael said. “I just want all of them. I want to give them all the same exact terms because I need redundancy.”
He acknowledged that Anthropic had become “deeply embedded” in the department, in part by providing forward-deployed engineers, while other AI companies hadn’t pursued enterprise customers as aggressively.
The falling-out between the Pentagon and Anthropic highlighted the clash of cultures between the defense establishment and Silicon Valley, which has its roots in military innovations but has since turned squeamish about seeing its technology used for war.
In fact, a top robotics engineer at OpenAI announced her resignation from the company on Saturday, citing the same concerns Anthropic raised.
“This wasn’t an easy call. AI has an important role in national security. But surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more deliberation than they got,” Caitlin Kalinowski posted on X and LinkedIn.