Judge sides with Anthropic to temporarily block the Pentagon’s ban

After weeks of confrontation between Anthropic and the Pentagon, the company scored an important win: a judge granted Anthropic a preliminary injunction in its lawsuit seeking to overturn its government blacklist, blocking the designation while the legal process plays out.
“War Department records show that it designated Anthropic as a supply chain risk because of its ‘hostile manner through the press,’” Judge Rita F. Lin, district judge for the Northern District of California, wrote in the order, which takes effect in seven days. “Punishing Anthropic for calling public attention to the government’s contractual position is an illegal and classic First Amendment retaliation.”
A final ruling in the case could take weeks or months.
Anthropic spokeswoman Danielle Cohen said in a statement Thursday: “We are grateful to the court for acting quickly and pleased that they agree that Anthropic is likely to succeed on the merits. While this case is necessary to protect Anthropic, our customers and our partners, our goal remains to work productively with the government to ensure that all Americans benefit from safe and reliable AI.”
“I think this case touches on an important debate,” Judge Lin said during Tuesday’s hearing. “On the one hand, Anthropic says its AI product, Claude, is not safe to use for lethal autonomous weapons and for domestic mass surveillance. Anthropic’s position is that if the government wants to use its technology, it must agree not to use it for those purposes. On the other hand, the War Department says military commanders must decide what is safe for its AI.”
On Tuesday, Judge Lin added: “It is not my role to decide who is right in this debate…The War Department decides which AI product it wants to use and purchase. And everyone, including Anthropic, agrees that the War Department is free to stop using Claude and seek a more permissive AI vendor.” She added: “I see the issue in this case as being…whether the government violated the law when it went beyond that.”
It all started with a memo from Defense Secretary Pete Hegseth on January 9 calling for an “any lawful use” provision to be written into every contract to purchase AI services within 180 days, which would include existing contracts with companies like Anthropic, OpenAI, xAI and Google. Anthropic’s negotiations with the Pentagon lasted weeks, centering on two “red lines” for which the company did not want the military to use its AI: domestic mass surveillance and lethal autonomous weapons (that is, AI systems with the power to kill targets without a human involved in the decision-making process). The seesawing series of events that followed included a barrage of insults on social media, an official designation of “supply chain risk” that could significantly handicap Anthropic’s business, competing AI companies rushing to make deals, and a subsequent lawsuit.
In its lawsuit, Anthropic claims it was punished for speech protected by the First Amendment and seeks to overturn the supply chain risk designation.
It is rare, if not unheard of until now, for a U.S. company to be designated a supply chain risk, a label typically reserved for non-U.S. companies potentially linked to foreign adversaries. Designating Anthropic as one raised eyebrows across the country and sparked bipartisan concern that disagreeing with a presidential administration could invite outsized retaliation against a company in any industry.
Anthropic’s own operations were significantly affected by the designation, according to its court filings, which say the company “received contact from numerous outside partners…expressing confusion about what was expected of them and concerns about their ability to continue working with Anthropic” and that “dozens of companies contacted Anthropic” for advice or information about their rights to terminate their use. Depending on how broadly the government bars its contractors from working with Anthropic, the company alleges that revenue ranging from hundreds of millions to several billion dollars could be at risk.
During Tuesday’s hearing, both parties had the opportunity to answer questions from Judge Lin, which had been released in a filing the day before and covered issues such as whether Hegseth lacked the authority to issue certain directives and why Anthropic was designated a supply chain risk. The judge also asked in her opening questions about the circumstances under which a government contractor could be terminated for using Anthropic’s technology in its work – for example, “if a department contractor uses Claude Code as a tool to write software for the department’s national security systems, would that contractor be at risk of being fired as a result?”
On Tuesday, the judge also appeared to chastise the War Department over Hegseth’s post.
“You’re here saying, ‘We said it but we didn’t really mean it,’” Judge Lin said during the hearing, later pressing the question of why Hegseth wrote the post barring contractors from working with Anthropic instead of simply designating Anthropic as a supply chain risk.
In a series of questions Tuesday, Judge Lin asked whether the War Department would consider terminating contractors based on their work with Anthropic even if that work was separate from their work for the department, and a War Department representative responded, “That’s my understanding.”
Judge Lin asked: “Let’s say I’m a military contractor. I don’t provide IT to the military. I provide toilet paper to the military. I’m not going to be fired for using Anthropic – is that correct?” The War Department representative responded: “For non-DoW work, that is my understanding.” But when the judge asked whether a military contractor providing IT services to the War Department, but not for national security systems, could be fired for using Anthropic, the War Department representative did not give a concrete answer.
During the hearing, Judge Lin cited one of the amicus briefs, which she said used the term “attempted corporate murder.” She said: “I don’t know if this is ‘murder,’ but it seems like an attempt to cripple Anthropic.”
“We continue to be irreparably harmed by this directive,” a lawyer for Anthropic said during the hearing, citing Hegseth’s nine-paragraph post on X.
In a recent court filing, the Defense Department alleged that Anthropic could ostensibly “attempt to disable its technology or preemptively modify the behavior of its model before or during ongoing war operations” in the event it believes the military is crossing its red lines — a theoretical situation that the Pentagon has said it considers an “unacceptable risk to national security.” The judge’s preliminary questions appear to challenge that claim, or at least to seek more information about it, asking: “What evidence in the record shows that Anthropic had continued access to or control over Claude after turning it over to the government, such that Anthropic could engage in such acts of sabotage or subversion?”


