Pentagon inks deal with Google for AI services

The Pentagon and Google have reached a deal for the Defense Department to use the tech company’s powerful Gemini AI systems on classified networks, according to a U.S. official familiar with the matter.
The official spoke on condition of anonymity because he was not authorized to disclose details of the deal. The exact terms of the new contract remain unclear.
The deal follows similar agreements with other major AI companies, including OpenAI and xAI. Defense Secretary Pete Hegseth has made the adoption of AI a top priority for the armed forces, pledging to transform the military into “a premier warfighting force.”
A Google spokesperson did not respond to specific questions about the deal, which was first reported by tech media outlet The Information.
“We are proud to be part of a broad consortium of leading AI labs and technology and cloud companies providing AI services and infrastructure in support of national security,” Google spokesperson Kate Dreyer said in an email to NBC News. “We remain committed to the private and public sector consensus that AI should not be used for domestic mass surveillance or autonomous weaponry without appropriate human oversight.”
The Defense Department has embraced AI over the past decade, using automated systems for everything from analyzing drone imagery in the fight against the Islamic State group to streamlining logistics and eliminating soldier pay gaps. It currently uses AI to analyze intelligence and provide targeting support in the war against Iran.
Michael Horowitz, a former senior defense official and current professor at the University of Pennsylvania, said the deal “to use Google’s AI models for classified purposes illustrates the growing importance of AI to U.S. national security.”
However, Horowitz noted that Google’s AI systems were already being used on unclassified systems, so it’s “not surprising that they came to an agreement on classified uses.”
In recent months, the Pentagon has sought to negotiate new contracts with the four largest U.S. AI companies to include provisions allowing “any lawful use” of their AI systems. The Pentagon announced the first exploratory contracts with Google, OpenAI, Anthropic and xAI in July.
These moves have sparked some controversy, particularly with Anthropic. The company, led by CEO Dario Amodei, has asked the Pentagon for stronger assurances that the department would not use Anthropic’s AI models for domestic mass surveillance or the direct control of lethal autonomous weapons.
It’s unclear whether Google sought such guarantees. The U.S. official who spoke to NBC News said the agreement with Google covered lawful use by the Department of Defense.
Although Google avoided a public spat with the Pentagon, it faced some resistance from its own employees. On Monday, Bloomberg News reported that about 600 Google employees sent a letter to CEO Sundar Pichai urging him to turn down new AI partnerships with the Pentagon.
This isn’t the first time Google has faced unrest among its employees over its work with the military. In 2018, thousands of Google employees protested the company’s role in a secret Pentagon program called Project Maven. Operated in partnership with data analytics company Palantir, Maven remains one of the Department of Defense’s premier AI programs.
Google decided not to renew its Project Maven contract following the employee opposition. Pichai said at the time that the company would not pursue any application of AI “for surveillance that violates internationally accepted standards” or for weapons whose primary purpose “is to directly cause or facilitate injury to people.”
The government’s use of AI in domestic surveillance and the direct control of lethal autonomous weapons has received significant attention, both within the AI industry and among civil society groups, although that has not slowed government adoption or moves by tech giants to sign deals.
These concerns became a public controversy earlier this year. In a late February blog post outlining these two red lines, Amodei, Anthropic’s CEO, wrote that “in a limited number of cases, we believe AI can undermine, rather than uphold, democratic values. Some uses are also simply outside the bounds of what current technology can do safely and reliably.”
After issuing an ultimatum demanding that Anthropic comply with the Pentagon’s wish to allow the use of its AI for any lawful purpose, terms that could exceed Anthropic’s accepted scope of use, Hegseth declared Anthropic a “national security supply chain risk,” a designation usually reserved for foreign adversaries. The Defense Department said it would look to reduce its use of Anthropic models in the coming months.
President Donald Trump also announced in late February that he would ban all federal agencies from using Anthropic’s products, calling Anthropic a group of “left-wing weirdos.”
Anthropic is suing the Department of Defense and relevant federal agencies to overturn the executive orders. The case is split between California, where a judge ordered a preliminary halt to the removal of Anthropic systems, and Washington, D.C., where the court declined to issue a similar injunction.
Shortly after Anthropic was labeled a national security threat, OpenAI announced it had reached a similar agreement with the Pentagon to integrate its AI models into the Defense Department’s classified networks. However, the announcement sparked public outcry over the perceived lack of safeguards around the Pentagon’s potential use of OpenAI’s systems, particularly as it relates to surveillance of Americans.
As a result, OpenAI and CEO Sam Altman reworked the agreement’s language a few days later, with the updated agreement specifying that no OpenAI service “shall be used intentionally for domestic surveillance of U.S. persons or nationals.”
Brian McGrail, senior counsel at the Center for AI Safety, said in March that intelligence and national security agencies often adopt very liberal interpretations of contract provisions regarding surveillance. McGrail said because these contracts remain private, it is often difficult to judge the strength of domestic surveillance bans.