US military used Anthropic’s AI model Claude in Venezuela raid, report says

Claude, the AI model developed by Anthropic, was used by the US military during its operation to capture Nicolás Maduro in Venezuela, the Wall Street Journal reported on Saturday, a high-profile example of the US Department of Defense deploying artificial intelligence in its operations.
The US raid on Venezuela involved bombings of the capital, Caracas, that killed 83 people, according to the Venezuelan Defense Ministry. Anthropic’s terms of service prohibit the use of Claude for violent purposes, weapons development, or surveillance.
Anthropic is the first AI developer known to have had its model used in a classified US Department of Defense operation. It is unclear exactly how the tool, whose capabilities range from processing PDF files to piloting autonomous drones, was deployed.
An Anthropic spokesperson declined to say whether Claude was used in the operation, but said any use of the AI tool would need to comply with its usage policies. The US Department of Defense has not commented on the claims.
The WSJ cited anonymous sources who said Claude was used as part of a partnership between Anthropic and Palantir Technologies, a contractor for the US Department of Defense and federal law enforcement. Palantir declined to comment on the claims.
The US and other militaries are increasingly incorporating AI into their arsenals. The Israeli military has used drones with autonomous capabilities in Gaza and has relied extensively on AI to generate targets there. The US military has used AI to target strikes in Iraq and Syria in recent years.
Critics have warned against the use of AI in weapons technologies and the deployment of autonomous weapons systems, pointing to targeting errors made by computer systems that determine who should or should not be killed.
AI companies are grappling with how their technologies should interact with the defense sector, with Anthropic CEO Dario Amodei calling for regulation to prevent harm from AI deployment. Amodei has also expressed unease about the use of AI in autonomous lethal operations and in surveillance within the United States.
This more cautious stance has reportedly angered the US Department of Defense, with Secretary of War Pete Hegseth saying in January that the department would not use AI models that it cannot use to wage war.
The Pentagon announced in January that it would work with xAI, owned by Elon Musk. The Department of Defense also uses customized versions of Google’s Gemini and of OpenAI’s systems to support research.