Pentagon labels AI company Anthropic a supply chain risk ‘effective immediately’ : NPR

Pages from the Anthropic website and the company logo are displayed on a computer screen in New York on Thursday, February 26, 2026.


Patrick Sison/AP



The Trump administration is making good on its threat to designate artificial intelligence company Anthropic as a supply chain risk, an unprecedented move that could force other government contractors to stop using AI chatbot Claude.

The Pentagon said in a statement Thursday that it had “formally informed Anthropic management that the company and its products are considered a supply chain risk, effective immediately.”

The move appears to end any possibility of further negotiations with Anthropic, nearly a week after President Donald Trump and Defense Secretary Pete Hegseth accused the company of endangering national security.

Trump and Hegseth announced a series of sanctions threats last Friday, on the eve of the war in Iran, after Anthropic CEO Dario Amodei refused to drop usage restrictions, citing fears that the company’s products could be used for mass surveillance of Americans or for autonomous weapons.

Amodei said in a statement Thursday that “we do not believe this action is legally valid, and we see no other choice but to challenge it in court.”

The Pentagon statement said: “It is a fundamental principle that the military must be able to use technology for any lawful purpose. The Army will not allow a vendor to insert itself into the chain of command by restricting the lawful use of a critical capability and thereby endanger our warfighters.”

Amodei countered that the narrow exceptions Anthropic sought to limit surveillance and autonomous weapons “pertain to high-level areas of use, not operational decision-making.”

He said there had been “productive conversations” with the Pentagon in recent days about whether it could continue to use Claude or establish a “smooth transition” if no agreement was reached. Trump gave the military six months to phase out Claude, which is already deeply integrated into military and national security platforms. Amodei said it was a priority to ensure warfighters were not “deprived of important tools in the midst of major combat operations.”

Some military contractors were already cutting ties with Anthropic, a rising star in the tech industry that sells Claude to various companies and government agencies. Lockheed Martin said it would “follow the direction of the President and the Department of War” and look to other suppliers of large language models.

“We expect minimal impacts because Lockheed Martin is not dependent on any single LLM vendor for any part of our work,” the company said.

It is unclear how the Department of Defense will interpret the scope of the risk designation. Amodei said a notice Anthropic received from the Pentagon on Wednesday shows it only applies to customers’ use of Claude as a “direct part” of their military contracts.

Microsoft said its lawyers had studied the rule and that the company “may continue to work with Anthropic on non-defense projects.”

Pentagon draws criticism for decision

The Pentagon’s decision to invoke a rule designed to respond to supply chain threats posed by foreign adversaries has drawn widespread criticism. Federal code defines supply chain risk as the “risk that an adversary could sabotage, maliciously introduce undesirable functions, or otherwise subvert” a system in order to disrupt, degrade, or spy on it.

U.S. Sen. Kirsten Gillibrand, Democrat of New York and a member of the Senate Armed Services Committee and Senate Intelligence Committee, called it “a dangerous misuse of a tool intended to combat adversary-controlled technology.”

“This reckless action is short-sighted, self-destructive and a gift to our adversaries,” she said in a written statement Thursday.

Neil Chilson, a former Republican chief technologist at the Federal Trade Commission who now directs AI policy at the Abundance Institute, said the decision looks like “a massive overreach that would harm both the U.S. AI sector and the military’s ability to acquire the best technology for the American warfighter.”

Earlier today, a group of former defense and national security officials sent a letter to U.S. lawmakers expressing “serious concerns” about the designation.

“Using this authority against a U.S. domestic enterprise is a profound departure from its purpose and sets a dangerous precedent,” former officials and policy experts, including former CIA Director Michael Hayden and retired leaders of the Air Force, Army and Navy, say in the letter.

They added that such a designation is intended to “protect the United States from infiltration by foreign adversaries – from companies beholden to Beijing or Moscow, not from American innovators operating transparently under the rule of law. Applying this tool to penalize a U.S. company for refusing to remove safeguards against mass domestic surveillance and fully autonomous weapons is a category error whose consequences extend far beyond this dispute.”

Anthropic sees increase in mainstream downloads

Even as it loses key partnerships with defense contractors, Anthropic has seen a surge in consumer downloads over the past week from people embracing its moral stance. More than 1 million people signed up for Claude every day this week, the company said, and it surpassed OpenAI’s ChatGPT and Google’s Gemini to become the top AI app in more than 20 countries on Apple’s App Store.

The dispute with the Pentagon also deepened Anthropic’s bitter rivalry with OpenAI that began when former OpenAI executives, including Amodei, launched Anthropic in 2021.

Hours after the Pentagon punished Anthropic last Friday, OpenAI announced a deal to effectively replace Anthropic with ChatGPT in classified military environments.

OpenAI said it sought similar protections against domestic surveillance and fully autonomous weapons, but then had to change its agreements, leading CEO Sam Altman to say he should not have rushed a deal that “seemed opportunistic and sloppy.”

Amodei also expressed regret for his own role in this “difficult day for the company,” saying Thursday that he wanted to “directly apologize” for an internal memo he sent to Anthropic staff that attacked OpenAI’s behavior and suggested Anthropic was being punished for not giving “dictator praise” to Trump.
