Deadline looms in AI fight between Anthropic and the Pentagon : NPR

Pages from the Anthropic website and company logos are displayed on a computer screen in New York on Thursday, February 26, 2026.

Patrick Sison/Associated Press

The Pentagon is heading for a showdown with Anthropic, one of the world’s most powerful AI companies, over the military use of its AI model after Anthropic’s CEO rejected the Defense Department’s ultimatum to ease security restrictions or be blacklisted from lucrative military work.

At stake are hundreds of millions of dollars in contracts and access to some of the most advanced AI tools on the planet. Here’s what you need to know about the fight and what the consequences could be.

Pentagon and Anthropic disagree on how AI should be used in warfare

For months, Anthropic CEO Dario Amodei has insisted that Anthropic’s AI model, Claude, should not be used for mass surveillance in the United States or to power fully autonomous weapons, such as a drone that uses AI to kill targets without human approval. He has called these uses “totally illegitimate” and described them as “bright red lines” for the company.

The Pentagon says it has no plans to use Anthropic’s tools for mass surveillance or in autonomous weapons. But it says it’s not up to a contractor like Anthropic to decide how its technology is used, and that AI companies, including Anthropic, must allow the U.S. government to use their tools “for any lawful purpose.”

“Legality is the responsibility of the Pentagon as the end user,” a senior Pentagon official who declined to give his name told NPR this week.

Dario Amodei, CEO and co-founder of Anthropic, at the World Economic Forum in Davos, Switzerland, January 23, 2025.

Markus Schreiber/Associated Press

On Thursday, Amodei said Anthropic could not accept the Pentagon’s latest changes to the terms of its contract.

“I believe deeply in the existential importance of using AI to defend the United States and other democracies, and to defeat our autocratic adversaries,” the CEO wrote in a lengthy statement on the standoff. “Anthropic understands that the War Department, not private companies, makes military decisions. We have never raised objections to particular military operations or attempted to limit the use of our technology in an ad hoc way.”

“However, in a limited number of cases, we believe that AI may undermine, rather than uphold, democratic values,” Amodei continued. He described domestic mass surveillance and fully autonomous weapons as uses “simply outside the bounds of what current technology can do safely and reliably.” These uses “were never included in our contracts with the War Department, and we believe they should not be now,” he added.

Amodei’s rejection comes as Anthropic’s relationship with the Pentagon has become increasingly acrimonious. During a meeting Tuesday between Defense Secretary Pete Hegseth and Amodei, Hegseth threatened to punish the company if it did not comply with the administration’s demands, according to two people with direct knowledge of the meeting who were not authorized to speak publicly.

Defense Secretary Pete Hegseth stands outside the Pentagon in a file photo from January 2026.

Kevin Wolf/Associated Press

A person familiar with the discussion said Hegseth raised the possibility of canceling Anthropic’s $200 million contract with the Defense Department, while a Pentagon official said repercussions could include forcing Anthropic, against its will, to let the federal government use its AI model, and blacklisting the company from working with the U.S. military.

“These threats do not change our position: we cannot, in good conscience, grant their request,” Amodei wrote Thursday. “But given the substantial value that Anthropic technology brings to our armed forces, we hope they will reconsider their decision.”

Pentagon sets strict deadline for Anthropic

In a post on X on Thursday, Pentagon spokesperson Sean Parnell warned that Anthropic had until Friday afternoon before the Pentagon took action.

“They have until Friday 5:01 p.m. ET to decide. Otherwise, we will end our partnership with Anthropic and consider them a risk to DOW’s supply chain,” Parnell wrote, using the Pentagon’s renamed “Department of War” acronym.

Anthropic said Thursday that the Pentagon sent the company new contract clauses overnight that, in the company’s view, “made virtually no progress in preventing the use of Claude for mass surveillance of Americans or in fully autonomous weapons.”

The statement continued: “New language presented as a compromise has been coupled with legalese that would allow these safeguards to be ignored at will. Despite DOW’s recent public statements, these narrow guarantees have been at the heart of our negotiations for months.”

Anthropic said it was ready to continue negotiations and “is committed to ensuring the operational continuity of the Department and American warfighters.”

What is “supply chain risk”?

Viewing Anthropic as a supply chain risk would be unusual, according to Geoffrey Gertz, a senior fellow at the Center for a New American Security. That designation is “traditionally used to refer to the technology of foreign adversaries,” he said, such as Chinese telecommunications company Huawei.

It’s unclear what the scope of the Pentagon’s designation would be. It could mean that other Pentagon contractors would be barred from using Anthropic’s tools in their work for the Pentagon, or it could bar them from using Anthropic’s tools altogether. The second case would be particularly damaging to the company, Gertz said.

At the same time, the Pentagon threatened to invoke the Defense Production Act to force Anthropic to remove its safeguards. That, too, would be an extraordinary measure, Gertz said. The Defense Production Act is designed to give the government control of certain commercial sectors in extraordinary circumstances. It is “traditionally brought up very rarely in true emergency crisis situations,” he said. The goal in this case would likely be to use the law to force Anthropic to ease restrictions on the use of its AI tools.

Gertz noted that these two threats to Anthropic seem somewhat contradictory: “It’s this weird mix where they’re both such a risk that they need to be kicked out of every system, and so essential that they need to be forced into it no matter what,” he said.

Whatever happens at the end of the day, this fight is probably far from over.

The Pentagon’s contract with Anthropic is worth $200 million, a relatively small portion of the company’s $14 billion in revenue. While the Pentagon has similar contracts with other AI companies, including Google, OpenAI and xAI, Anthropic was the first to be cleared for classified use after defense officials deemed it the most advanced and secure model for sensitive military applications.

If the contract were simply canceled, that could be the end of it, Gertz said. But if the Pentagon tries to force Anthropic to remove its guardrails or applies a broader supply chain risk designation, the company will almost certainly fight back, he predicts.

“Certainly, if the Pentagon seeks to escalate the situation,” Gertz said, “I suspect we will see more legal fights.”

NPR’s Bobby Allyn contributed to this report.
