OpenAI and Google Workers File Amicus Brief in Support of Anthropic Against the US Government

More than 30 OpenAI and Google employees, including Google DeepMind chief scientist Jeff Dean, filed an amicus brief Monday to support Anthropic in its legal fight against the U.S. government.
“If allowed, this effort to punish one of America’s leading AI companies will undoubtedly have consequences for America’s industrial and scientific competitiveness in artificial intelligence and beyond,” the staffers wrote.
The brief was filed just hours after Anthropic sued the Department of Defense and other federal agencies over the Pentagon’s decision to designate the company a “supply chain risk.” The designation, which significantly limits Anthropic’s ability to work with military contractors, took effect after Anthropic’s negotiations with the Pentagon broke down. The AI startup is seeking a temporary restraining order so that it can continue its work with military partners while the lawsuit proceeds, and the amicus brief specifically supports that motion.
Signatories to the brief include Google DeepMind researchers Zhengdong Wang, Alexander Matt Turner, and Noah Siegel, as well as OpenAI researchers Gabriel Wu, Pamela Mishkin, and Roman Novak, among others. Amicus briefs are legal documents submitted by parties who are not directly involved in a legal case but who have relevant expertise. The employees signed in their personal capacity and do not represent the views of their company, according to the brief.
OpenAI and Google did not immediately respond to WIRED’s request for comment.
The amicus brief says the Pentagon’s decision to blacklist Anthropic “introduces unpredictability into an industry” in a way that “undermines American innovation and competitiveness” and “chills professional debate about the benefits and risks of frontier AI systems.” The brief notes that the Pentagon could simply have walked away from its contract with Anthropic if it no longer wished to be bound by its terms.
The brief also argues that the red lines Anthropic says it requested, including that its AI not be used for mass domestic surveillance or the development of lethal autonomous weapons, reflect legitimate concerns and constitute necessary safeguards. “In the absence of public law, the contractual and technological requirements that AI developers impose regarding the use of their systems represent a vital safeguard against their catastrophic misuse,” the document states.
Several other AI leaders have also publicly questioned the Pentagon’s decision to label Anthropic a supply chain risk. OpenAI CEO Sam Altman said in a social media post that “applying an SCR [supply-chain risk] designation to Anthropic would be very bad for our industry and our country.” He added that “this is a very bad decision on the part of the DoW and I hope they reverse it.” As Anthropic’s relationship with the Pentagon deteriorated, OpenAI quickly signed its own contract with the US military, a move some critics called opportunistic.