U.S. military is using AI to help plan Iran air attacks, sources say, as lawmakers call for oversight


As the U.S. military increasingly uses AI tools to identify targets for airstrikes in Iran, members of Congress are calling for safeguards and greater oversight of the technology’s use in war.
Two people with knowledge of the matter, who requested anonymity to discuss sensitive information, confirmed that the military is using AI systems from the data analytics company Palantir to identify potential targets in the ongoing attacks. The use of Palantir’s software, which draws in part on Anthropic’s Claude AI models, comes as Defense Secretary Pete Hegseth aims to put artificial intelligence at the heart of U.S. combat operations — and as he has clashed with Anthropic executives over limits on how its AI can be used.
Yet as AI plays a larger role on the battlefield, lawmakers are demanding greater attention to the protections that should govern its use and increased transparency about the degree of control ceded to the technology.
“We need a comprehensive and impartial review to determine whether AI has ever harmed or endangered lives in the war with Iran,” Rep. Jill Tokuda, D-Hawaii, a member of the House Armed Services Committee, told NBC News in response to questions about the use and reliability of AI in military contexts. “Human judgment must remain at the center of life-and-death decisions.”
The Department of Defense and major AI companies such as OpenAI and Anthropic have publicly stated that current AI systems should not be able to kill without human approval. But concerns remain that relying on AI for parts of analysis or decision-making can introduce errors into military operations.
Pentagon chief spokesperson Sean Parnell said in a post on X on February 26 that the military does not “want to use AI to develop autonomous weapons that operate without human intervention.”
The Defense Department did not respond to questions about how the military balances its use of AI to reduce human workload while verifying the accuracy of analytics and targeting suggestions.
Lawmakers and independent experts who spoke to NBC News expressed concern about the military’s use of such tools, calling for clear safeguards to ensure humans remain involved in life-and-death decisions on the battlefield.
“AI tools are not 100% reliable — they can fail in subtle ways and yet operators continue to trust them too much,” said Rep. Sara Jacobs, D-Calif., a member of the House Armed Services Committee.
“We have a responsibility to enforce strict safeguards on the military’s use of AI and to ensure that a human being is involved in every decision to use lethal force, because the cost of an error could be devastating for civilians and for the military personnel carrying out these missions,” she said.
Anthropic’s Claude has become a crucial part of Palantir’s Maven intelligence-analysis program, which was also used in the U.S. operation to capture Venezuelan President Nicolás Maduro. News of Claude’s role in the recent military actions was first reported by The Wall Street Journal and The Washington Post.
But that role was complicated by conflict between Anthropic and Hegseth after the company sought to block the military from using its AI for domestic surveillance and lethal autonomous weapons. Last week, the Defense Department labeled Anthropic a national security threat, a move that threatens to remove it from military use in the coming months. Anthropic filed a lawsuit to fight the designation.
Anthropic declined to comment. Palantir did not respond to a request for comment.
In a video posted to X on Wednesday, Adm. Brad Cooper, the head of U.S. Central Command, acknowledged that AI has become a key tool in helping the U.S. choose targets in Iran.
“Our warfighters leverage a variety of advanced AI tools. These systems help us sift through large amounts of data in seconds so our leaders can make smarter decisions faster than the enemy can react,” he said.
“Humans will still make the final decisions about what to strike, what not to strike and when to strike, but advanced AI tools can turn processes that used to take hours, or even days, into seconds.”
The Trump administration has publicly embraced this technology, both for the military and across government.
Rep. Pat Harrigan, R-N.C., said AI has already become crucial to the rapid processing of military intelligence, including in Iran.
“AI is a tool that helps our warfighters process enormous amounts of data faster than any human could alone, and what we saw during Operation Epic Fury, more than 2,000 targets struck with remarkable precision, is a testament to how these capabilities can be used responsibly and effectively,” Harrigan, who also serves on the House Armed Services Committee, said in a statement to NBC News.
“But no AI system replaces the judgment, training and experience of the American warfighter. Human participation is not a formality; it is a requirement, and nothing in the way our military operates suggests otherwise,” he said.
Although no lawmakers contacted by NBC News said AI should be completely removed from military use, some said increased oversight is needed.
Sen. Elissa Slotkin, D-Mich., a member of the Senate Armed Services Committee, said the Defense Department has not done enough to clarify the extent to which humans control AI-assisted or AI-generated military intelligence.
“It’s really up to humans, and in this case the secretary of defense, to ensure that there is human redundancy in the near term, and that’s what we just don’t have confidence in,” she said.
Sen. Mark Warner, D-Va., the top Democrat on the Senate Intelligence Committee, said he was concerned about the military’s use of AI to help identify targets and that questions remained unanswered about how the new technology was used. “This problem needs to be resolved,” he told NBC News.
OpenAI and Anthropic, both of which have worked with the U.S. military, have said that even their most advanced systems are error-prone, and the world’s top AI researchers admit they don’t fully understand how major AI systems work.
In an interview with NBC News last month, Anthropic CEO Dario Amodei said, “I can’t tell you that there’s a 100 percent chance that even the systems we build will be completely reliable.”
A major OpenAI study published in September found that all major AI chatbots, which rely on systems called large language models, periodically “hallucinate” or fabricate responses.
Sen. Kirsten Gillibrand, D-N.Y., called for clearer rules on how the military can use AI.
“The Trump administration has already proven that it is willing to bend U.S. law to fight an unpopular war,” she told NBC News. “There is little reason to believe that DOD will be more responsible in its use of AI without explicit safeguards.”
Mark Beall, head of government affairs at the AI Policy Network, a Washington, D.C., think tank, and the director of AI strategy and policy at the Pentagon from 2018 to 2020, said that while AI could streamline the process of deciding where to strike, it was clear that humans still needed to scrutinize targets.
“There are many steps before the trigger is pulled. AI systems are being deployed very effectively to accelerate existing workflows and enable better and faster decision-making capabilities for commanders, analysts and planners,” he added. “But when it comes to actually deploying weapon systems, this technology is not ready yet.”
“These systems will become really capable, and as other adversaries begin to use them, there will be more and more pressure to shortcut the review of AI outputs in order to operate at useful speeds,” Beall said. “We need to figure out how to solve this reliability problem before we get there. No matter what you think about lethal autonomous weapons, making them safe and effective is in the entire world’s interest.”
Heidy Khlaaf, chief scientist at the AI Now Institute, a nonprofit that advocates for the ethical use of technology, said she worries that relying on AI to quickly process information needed for life-or-death decisions could be a way for the military to avoid accountability for its mistakes.
“It’s very dangerous that ‘speed’ is presented to us as strategic here, when in reality it serves as a cover for indiscriminate targeting, considering the inaccuracy of these models,” Khlaaf said.