The Pentagon Is Going “AI First”

The US military places technology at the center of its mission, and the human costs promise to be staggering.

As President Donald Trump’s administration has rushed into military conflict with Iran, the Pentagon has gone all-in on artificial intelligence, both as a military tool in this and other possible conflicts, and as a public relations tool in the quest for an ever-larger share of your tax dollars.
The Pentagon is accelerating the use of artificial intelligence across its mission areas, touting it as a revolutionary element of America’s new military posture. The desire to apply AI as quickly as possible is behind the Trump War Department’s campaign to eliminate virtually all controls that would normally govern the introduction of new technology. This approach is presented as absolutely necessary to maintain the United States’ technological advantage over China and consolidate American military dominance, but the haste with which regulations are shelved will almost certainly lead to faulty weapons systems, exorbitant prices, reduced accountability, and an accelerated AI arms race.
For the Pentagon, 2026 is the year of AI. On January 9, Secretary of War Pete Hegseth issued a memorandum ordering the Pentagon to become an “AI-driven” warfighting institution. Three days later, Hegseth launched an “AI Acceleration Strategy,” then announced a radical overhaul of the department’s systems for researching, developing, and procuring new weapons, including those incorporating AI. These steps will formalize a system intended to produce next-generation technology at “war speed.”
At the center of the strategy are seven “pioneering projects” or PSPs, designed to push AI into combat, intelligence, business practices and data processing functions in months rather than years. Initiatives range from AI-based decision support and battlefield simulation tools to systems aimed at converting intelligence into military action as quickly as possible. Delays, risk aversion and procedural safeguards are considered liabilities; speed is all that matters.
The new AI Acceleration Strategy will give even more power and influence to private companies by increasing the use of AI funding from venture capital firms, forming new partnerships with emerging military technology companies, and establishing open-ended contracts to ensure that military systems can integrate the latest technologies within weeks.
The shift in approach is already underway: The Army just awarded Salesforce a 10-year, $5.6 billion contract to provide AI-based systems for the so-called Department of War, which the company says will “increase mission readiness” by consolidating fragmented data sources into “one interoperable platform,” allowing warfighters to make “faster, more effective decisions.”
Taken together, the steps outlined above will further centralize decision-making within the Pentagon and remove traditional controls against shoddy work and price gouging, however inadequate our current restrictions may be. It will be speed first and other concerns be damned.
But for all its emphasis on speed, Hegseth’s January 9 memo offers no real guidance on how to achieve crucial goals, such as ensuring that the laws of armed conflict are respected, or allowing time for adequate congressional oversight and coordination with allies.
By positioning AI as the foundation of future U.S. military dominance, the new approach reflects a tired myth that has dominated U.S. planning since World War II: the equation of technological progress with security. But technology alone does not win wars. And the technological “miracles” of the past, from the electronic battlefield in Vietnam to networked warfare and precision-guided strike capabilities in Iraq and Afghanistan, have failed to achieve U.S. military objectives, while causing immense harm to civilians in the targeted countries and to U.S. combat personnel.
For example, the so-called technological miracle of the Vietnam era was described by The New York Times as follows: “General William C. Westmoreland, Army Chief of Staff, believes that new electronic technology has brought the Army to the threshold of a new battlefield concept that could be as revolutionary in warfare as the introduction of the helicopter or the tank.” In the real world, the Vietcong developed a series of relatively simple countermeasures, and the new surveillance and targeting systems did not turn the tide of the war.
Even in the 1991 Gulf War, when the use of precision-guided munitions was credited with playing a central role in expelling Saddam Hussein’s invading forces from Kuwait, the story was more complicated. The coalition’s victory against Hussein’s forces had more to do with the volume of munitions dropped and the relative weakness of Iraqi air defenses than with networked warfare or precision strikes. An in-depth analysis of the 1991 air war by what was then known as the General Accounting Office (now the Government Accountability Office) noted that the claim by “[the Department of Defense] and contractors” of a one-target, one-bomb capability for laser-guided munitions “was not demonstrated during the air campaign where, on average, 11 tons of guided munitions and 44 tons of unguided munitions were delivered to each successfully destroyed target.”
Without firm policy guardrails, AI can amplify risk rather than reduce it, placing more emphasis on quickly achieving goals rather than why those goals were chosen in the first place. The result could be more failed wars and more needless suffering, not the vaunted revolution in American capabilities promised by Hegseth and Trump.
The Pentagon has clearly expressed its need to deploy AI for all purposes, as quickly as possible. It remains to be seen whether common-sense controls on its deployment or a realistic strategy governing its use will be part of the mix. Without a new approach to defining American interests and a better understanding of the limits of military force, rushing new technologies to the battlefield will only make the world more dangerous and less stable.
Before going all-in on AI, the U.S. government should think more carefully about the human consequences of the current, deeply counterproductive strategies and actions for which this new technology is being deployed.