Employees across OpenAI and Google support Anthropic’s lawsuit against the Pentagon

On Monday, Anthropic filed a lawsuit against the Department of Defense over its designation as a supply chain risk. Hours later, nearly 40 OpenAI and Google employees, including Jeff Dean, Google’s chief scientist and head of Gemini, filed an amicus brief in support of the suit, detailing their concerns about the Trump administration’s decision and the risks and implications of the technology.

The news follows a dramatic few weeks for Anthropic. The Trump administration labeled the company a supply chain risk, a designation typically reserved for foreign companies the government considers a potential national security threat, after Anthropic held firm on two red lines for military use of its technology: domestic mass surveillance and fully autonomous weapons, meaning AI systems empowered to kill without human involvement. Negotiations collapsed, followed by public insults and by other AI companies stepping in to sign contracts permitting “any lawful use” of their technology.

The supply chain risk designation not only bars Anthropic from military contracts; it also blacklists other companies that use Anthropic products in their work for the Pentagon, forcing them to rip out Claude if they want to keep their lucrative contracts. But Anthropic’s tools, the first authorized for classified intelligence work, are already deeply embedded in the Pentagon’s operations. Just hours after Defense Secretary Pete Hegseth announced the designation, the U.S. military reportedly used Claude in the campaign that killed Iranian leader Ayatollah Ali Khamenei.

The amicus brief argues that Anthropic’s supply chain risk designation “constitutes an inappropriate retaliation that harms the public interest” and that the concerns behind Anthropic’s red lines “are real and require a response.” The brief also contends that both red lines merit serious attention, stating that “AI-powered mass domestic surveillance poses profound risks to democratic governance – even in responsible hands” and that “fully autonomous lethal weapons systems present risks that must also be addressed.”

The group behind the amicus brief describes itself as “engineers, researchers, scientists, and other professionals employed in U.S. artificial intelligence laboratories.”

“We build, train, and study large-scale AI systems that serve a wide range of users and deployments, including in the consequential areas of national security, law enforcement, and military operations,” the group wrote. “We submit this brief not as spokespeople for a single company, but in our individual capacities as professionals with first-hand knowledge of what these systems can and cannot do, and what is at stake when their deployment exceeds the legal and ethical frameworks designed to govern them.”

On the domestic mass surveillance front, the group said that while data on U.S. citizens exists everywhere in the form of surveillance cameras, geolocation data, social media posts, financial transactions, and more, “what doesn’t yet exist is the AI layer that transforms this sprawling, fragmented data landscape into a unified, real-time surveillance apparatus.” Currently, they write, these data streams are siloed, but if AI were used to connect them, it could combine “facial recognition data with the location history, transaction records, social graphs, and behavior patterns of hundreds of millions of people simultaneously.”
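To make that concern concrete, the sketch below shows, in purely illustrative Python, how trivially independent data streams can be merged once they share a linking key. Everything here is hypothetical: the streams, field names, and the shared `person_id` stand in for the identity-resolution step (face matching, device fingerprints, and the like) that an AI layer would actually perform.

```python
# Illustrative sketch only: a toy version of the cross-stream fusion the
# brief warns about. All data and field names are hypothetical; no real
# system or API is implied.
from collections import defaultdict

# Three "siloed" streams. In practice each would be keyed differently;
# here a shared person_id stands in for the AI-driven linkage step.
camera_sightings = [{"person_id": "p1", "location": "5th & Main", "t": 1001}]
transactions = [{"person_id": "p1", "merchant": "cafe", "t": 1004}]
social_posts = [{"person_id": "p1", "text": "heading downtown", "t": 998}]

def fuse(*streams):
    """Merge per-person events from independent streams into one timeline."""
    profiles = defaultdict(list)
    for stream in streams:
        for event in stream:
            profiles[event["person_id"]].append(event)
    # Sorting by timestamp yields a unified, chronological activity trace.
    for events in profiles.values():
        events.sort(key=lambda e: e["t"])
    return profiles

unified = fuse(camera_sightings, transactions, social_posts)
print(unified["p1"])  # one person's merged, time-ordered footprint
```

The hard part in practice is the linkage itself; once records resolve to the same identity, the fusion step is, as the sketch suggests, nearly free, which is precisely the group’s point about what an AI layer would change.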

Turning to lethal autonomous weapons, the group said such systems may be unreliable in novel or ambiguous conditions that don’t match the environments in which they were trained: they cannot be trusted to identify targets with perfect accuracy, and they cannot make the subtle contextual tradeoffs, weighing a goal against collateral effects, that a human can. Additionally, the group writes, the potential for these systems to hallucinate makes it important for humans to remain in the decision-making loop “before a lethal munition is launched at a human target,” especially since a system’s chain of reasoning is often inaccessible to operators and unclear even to its developers.
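The human-in-the-loop safeguard the group describes can be sketched as a simple gate: nothing proceeds unless both a confidence check passes and a human operator explicitly approves. This is a toy illustration of the pattern only; the threshold, function names, and interface below are invented and correspond to no real system.

```python
# Illustrative sketch only: the "human in the loop" safeguard pattern.
# The classifier confidence, threshold, and operator prompt are all
# hypothetical placeholders, not any real weapons-system interface.

CONFIDENCE_FLOOR = 0.99  # even very high model confidence is not sufficient

def authorize_engagement(model_confidence: float, operator_confirms) -> bool:
    """Return True only if the model is confident AND a human approves."""
    if model_confidence < CONFIDENCE_FLOOR:
        return False  # uncertain or out-of-distribution: never proceed
    # The human stays in the loop: no action without explicit confirmation.
    return bool(operator_confirms())

# Example: the model is confident, but the human reviews and declines.
approved = authorize_engagement(0.995, operator_confirms=lambda: False)
print(approved)  # False: the human veto overrides model confidence
```

The design point is that the human check is conjunctive, not advisory: model confidence alone can never trigger the action.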

The group behind the amicus brief wrote: “We are diverse in our politics and philosophies, but we are united in the belief that today’s AI systems pose risks when deployed to enable national mass surveillance or to operate lethal autonomous weapons systems without human oversight, and that these risks require safeguards, whether technical measures or restrictions on use.”
