Anthropic Supply-Chain Risk Label Should Stay In Place, Appeals Court Says

Anthropic “does not meet the strict requirements” to temporarily lift the supply chain risk designation imposed by the Pentagon, a U.S. appeals court in Washington, D.C., ruled Wednesday. The ruling conflicts with one issued last month by a lower-court judge in San Francisco, and it was not immediately clear how the conflicting preliminary rulings would be resolved.

The government sanctioned Anthropic under two different supply chain laws with similar effects, and the courts in San Francisco and Washington, D.C., are each ruling on only one of them. Anthropic said it was the first U.S. company to be designated under the two laws, which are typically used to punish foreign companies that pose a national security risk.

“Granting a stay would require the U.S. military to prolong its relationship with an undesirable provider of critical AI services in the midst of a significant ongoing military conflict,” the three-judge appeals panel wrote Wednesday in what they described as an unprecedented case. The panel said that while Anthropic could suffer financial harm from the pending designation, they did not want to risk “substantial judicial imposition on military operations” or “lightly overriding” the military’s national security judgments.

The San Francisco judge had ruled that the Defense Department likely acted in bad faith against Anthropic, motivated by frustration with the AI company’s proposed limits on how its technology could be used and its public criticism of those restrictions. The judge ordered the supply chain risk label removed last week, and the Trump administration complied by restoring access to Anthropic’s AI tools within the Pentagon and the rest of the federal government.

Anthropic spokeswoman Danielle Cohen said the company is grateful that the Washington, D.C., court “recognized that these issues need to be resolved quickly” and remains confident that “the courts will ultimately agree that these supply chain designations were unlawful.”

The Defense Department did not immediately respond to a request for comment.

These cases test the power of the executive branch over the conduct of technology companies. The battle between Anthropic and the Trump administration also plays out as the Pentagon deploys AI in its war against Iran. The company argued that it was being illegally punished for insisting that its Claude AI tool lacked the precision needed for certain sensitive operations, such as carrying out deadly drone attacks without human supervision.

Several experts on government procurement and corporate rights told WIRED that Anthropic has a strong case against the government, but that courts sometimes refuse to overturn the White House on issues related to national security. Some AI researchers said the Pentagon’s actions against Anthropic “chill the professional debate” about the performance of AI systems.

Anthropic claimed in court that it lost business because of the designation, which government lawyers say prohibits the Pentagon and its contractors from using the company’s Claude AI on military projects. And as long as Trump remains in office, Anthropic may not be able to regain the important place it once held in the federal government.

Final rulings in the two lawsuits filed by the company are likely months away. The court in Washington, D.C., is scheduled to hear oral arguments on May 19.

So far, the parties have revealed few details about exactly how the Defense Department has used Claude or what progress has been made in transitioning personnel to other AI tools from Google DeepMind, OpenAI or others. The military, which under President Donald Trump calls itself the War Department, said it had taken steps to ensure Anthropic could not deliberately try to sabotage its AI tools during the transition.
