Military AI Governance: Who Sets the Rules?

A simmering dispute between the U.S. Department of Defense (DOD) and Anthropic has escalated into a full-blown confrontation, raising an uncomfortable but important question: Who gets to set the guardrails for the military use of artificial intelligence – the executive branch, private companies, or Congress and the broader democratic process?

The conflict began when Defense Secretary Pete Hegseth reportedly gave Anthropic CEO Dario Amodei a deadline to grant the Department of Defense unrestricted use of its AI systems. When the company refused, the administration moved to designate Anthropic as a supply chain risk and ordered federal agencies to phase out its technology, deepening the standoff considerably.

Anthropic refused to cross two lines: allowing the use of its models for domestic surveillance of American citizens and allowing fully autonomous military targeting. Hegseth opposed what he described as “ideological constraints” built into commercial AI systems, arguing that determining lawful military use should be the responsibility of the government, not the vendor. As he said in a speech at Elon Musk’s SpaceX last month: “We will not use AI models that will not allow you to wage war.”

Stripped of all rhetoric, this dispute looks like something relatively simple: a disagreement over a government contract.

Procurement Policies

In a market economy, the U.S. military decides what products and services it wants to purchase. Companies decide what they are willing to sell and on what terms. Neither side is fundamentally right or wrong in taking a position. If a product does not meet operational needs, the government can purchase it from another supplier. If a company believes that certain uses of its technology are unsafe, premature, or inconsistent with its values or risk tolerance, it may refuse to provide them. For example, a coalition of companies signed an open letter pledging not to weaponize general-purpose robots. This fundamental symmetry is a characteristic of the free market.

Where the situation becomes more complicated – and more troubling – is in the decision to designate Anthropic as a “supply chain risk.” This tool exists to address real national security vulnerabilities, such as infiltration by foreign adversaries. It was not designed to blacklist an American company for rejecting contract terms the government favors.

Using the designation this way marks a significant shift: from a disagreement over procurement to the exercise of coercive leverage. Hegseth said that “effective immediately, no contractor, supplier or partner that does business with the U.S. military may conduct any commercial activity with Anthropic.” The action will almost certainly face legal challenges, but it raises the stakes well beyond the loss of a single DOD contract.

AI Governance

It is also important to distinguish between the two substantive issues Anthropic reportedly raised.

The first, opposition to domestic surveillance of American citizens, touches on well-established civil liberties concerns. The U.S. government operates under constitutional constraints and statutory limits when it comes to surveillance of Americans. A company that says it doesn’t want its tools used to facilitate domestic surveillance isn’t inventing a new principle; it is aligning itself with long-standing democratic safeguards.

To be clear, DOD is not affirmatively stating that it intends to use this technology to illegally surveil Americans. Its position is that it does not wish to acquire models with built-in restrictions that would prevent otherwise legal government use. In other words, the Department of Defense argues that compliance with the law is the government’s responsibility – not an obligation that must be baked into the vendor’s code.

Anthropic, for its part, has invested heavily in training its systems to refuse certain categories of harmful or high-risk tasks, including surveillance assistance. The disagreement is therefore less about current intention than about institutional control over constraints: whether they should be imposed by the state through law and oversight, or by the developer through technical design.

The second question, opposition to fully autonomous military targeting, is more complex.

The Department of Defense already maintains policies requiring human judgment in the use of force, and debates about the autonomy of weapons systems are ongoing within military and international forums. A private company may reasonably determine that its current technology is not sufficiently reliable or controllable for certain battlefield applications. At the same time, the military may conclude that such capabilities are necessary for deterrence and operational effectiveness.

Reasonable people can disagree about where these lines should be drawn.

But this disagreement underscores a deeper point: The limits of military use of AI should not be set through ad hoc negotiations between a cabinet secretary and a CEO. Nor should they be determined by which party can exercise the greatest contractual leverage.

If the U.S. government believes that certain AI capabilities are essential to national defense, that position should be expressed openly. This issue should be debated in Congress and reflected in doctrine, oversight mechanisms, and statutory frameworks. The rules must be clear, not only for businesses, but also for the public.

The United States often distinguishes itself from authoritarian regimes by emphasizing that power operates within transparent democratic institutions and legal constraints. This distinction carries less weight if AI governance is determined primarily by executive ultimatums issued behind closed doors.

There is also a strategic dimension. If companies conclude that participating in federal markets requires abandoning all deployment safeguards, some may exit those markets. Others might respond by weakening or removing their models’ safeguards to remain eligible for federal contracts. Neither outcome strengthens American technological leadership.

The Defense Department is correct that it cannot allow potential “ideological constraints” to undermine lawful military operations. But there is a difference between rejecting arbitrary restrictions and rejecting any role for a vendor’s risk management in shaping deployment conditions. In high-risk fields – from aerospace to cybersecurity – contractors routinely impose safety standards, testing requirements, and operational limits as part of selling their products responsibly. AI should not be treated as uniquely exempt from this practice.

Furthermore, built-in safeguards should not be seen as obstacles to military effectiveness. In many high-risk industries, multi-level oversight is common practice: internal controls, technical safeguards, audit mechanisms and legal review work together. Technical constraints can serve as an additional safety net, reducing the risk of misuse, error, or unintended escalation.

Congress Is Absent

DOD should retain ultimate authority over lawful use. But it should not dismiss the possibility that safeguards built into a system’s design could complement its own oversight structures rather than weaken them. In some contexts, redundancy in safety systems enhances, rather than undermines, operational integrity.

At the same time, a company’s unilateral ethical commitments cannot replace public policy. When technologies have national security implications, private governance has inherent limits. Ultimately, decisions about surveillance authorities, autonomous weapons, and rules of engagement belong to democratic institutions.

This episode illustrates a pivotal moment in AI governance. AI systems at the technology frontier are now powerful enough to influence intelligence analysis, logistics, cyber operations, and potentially decision-making on the battlefield. This makes them too consequential to be governed solely by company policy – and too consequential to be governed solely by executive discretion.

The solution is not to hand power to one side at the expense of the other; it is to strengthen the institutions that mediate between them.

Congress should clarify statutory limits on the use of military AI and determine whether sufficient oversight exists. DOD should articulate a detailed doctrine for human oversight, audit, and accountability. Civil society and industry should participate in structured consultation processes rather than episodic standoffs, and procurement policy should reflect these publicly established standards.

If AI guardrails can be removed through contractual pressure, they will be treated as negotiable. However, if grounded in law, they can become stable expectations.

Democratic constraints on military AI are a matter of statute and doctrine – not private contractual negotiations.

This article is adapted by the author with permission from Technology Policy Press. Read the original article.
