Why replacing Anthropic at the Pentagon could take months

March 6, 2026
How exactly does the Pentagon evict Claude?
Replacing one AI model on a classified network with another takes minutes. Retraining the people who have learned to use it will take much longer.

The Defense Department will gradually remove Anthropic’s Claude from its classified networks within six months, triggering a complex transition for military personnel.
AFP/Stringer/Getty Images
The Pentagon has put Anthropic on notice. On Thursday the Defense Department formally informed the company that it is considered a “supply chain risk,” a label that turns its artificial intelligence systems, including its flagship model Claude, into a liability.
The move intensifies a dispute that has been brewing for weeks between Anthropic’s safety-focused philosophy, its commitment to limiting how its technology is deployed, and the DOD’s demand for unfettered oversight.
The Pentagon will phase out Claude, one of the world’s most advanced AI models, from its classified networks within six months. On paper, swapping one model for another seems quick. “It’s easy to swap models and install new ones,” according to a source close to Palantir, a defense technology giant that has partnered with Anthropic to host Claude within secure military networks.
The hardest part begins after the model disappears: rewiring everything that was built around it.
Claude is what is known as a frontier model, an AI capable of performing complex, multistep tasks on its own. That is not how the DOD currently uses it. Lauren Kahn, a researcher at Georgetown University’s Center for Security and Emerging Technology and a former Pentagon official, describes its deployment as more akin to a chatbot than a free-roaming agent. Claude sits “on top” of existing software, she says, appearing only in certain tightly controlled corners of a classified environment. And it is not connected to “effectors,” she says, meaning it cannot “initiate an effect” — a weapon command, for example — “in the real world.”
In late 2024 Anthropic became the first AI company to clear the security hurdles for deployment on the Pentagon’s classified networks. Until recently, Claude was the only publicly known large language model operating in that environment. Accessible through tools such as Claude Gov — which has become a favored option for some defense personnel, according to Bloomberg — the system draws on massive data pipelines to transform a flood of unstructured information into readable intelligence. In other words, Claude summarizes information for the Department of Defense, but it cannot pull the trigger.
Once people come to rely on a tool, it is difficult to abandon. Each integration must be rebuilt piece by piece. And whatever replaces Claude must pass strict security reviews and approvals before touching a classified system. Software changes within the Pentagon can be “excruciating,” Kahn says. Even something as simple as installing Microsoft Office “takes months and months and months.”
As of press time, Anthropic has not responded to multiple requests for comment from Scientific American. The Defense Department declined to discuss details of the transition.
Unlearning Claude
Each AI model fails in its own way. Operators who have spent months using Claude learn these quirks through trial and error: which prompts land poorly, which outputs require a second look.
Kahn studies automation bias, the tendency of human operators to overdelegate to machines. “I worry about a slightly increased risk of automation bias in the early stages, while they are working out the problems,” she says. People will catch Claude’s familiar mistakes while the replacement model makes new ones. The staff most at risk in the transition will be power users, those who have built the most personalized workflows and learned the model’s weaknesses well enough to exploit its strengths.
As Pentagon personnel prepare for the operational transition, the messy details of the political standoff are spilling into public view. On Thursday evening Anthropic CEO Dario Amodei published a blog post pledging to challenge the government’s “supply chain risk” designation in court, arguing that the label is generally reserved for foreign adversaries. Behind the scenes, the standoff appears to have become a game of chicken. Emil Michael, the Pentagon official who led the department’s negotiations with Anthropic, posted on X that talks with the company were dead. Amodei, for his part, vowed to do his best to resuscitate them.
Meanwhile, the Department of Defense is already moving on. Hours after Anthropic was officially blacklisted, OpenAI announced that it had signed an agreement to deploy its models on classified military networks, thus securing the contract that its rival had just lost.
Anthropic was willing to risk being kicked out of the U.S. government rather than compromise its safety philosophy. Its replacement initially agreed to the Pentagon’s request for unfettered operational flexibility, only to hastily add the oversight guardrails Anthropic had advocated after OpenAI CEO Sam Altman faced enormous internal and public backlash. Maybe the swap isn’t so simple after all.