OpenAI will amend Defense Department deal to prevent mass surveillance in the US

OpenAI CEO Sam Altman said the company will amend its agreement with the Department of Defense (recently rebranded the Department of War) to explicitly prohibit the use of its AI systems for mass surveillance of Americans. Altman posted on X an internal memo previously sent to employees, telling them the company would add language to the agreement to make this point explicit. Specifically, it says:
“Consistent with applicable laws, including the Fourth Amendment to the United States Constitution, the National Security Act of 1947, and the Foreign Intelligence Surveillance Act (FISA) of 1978, the AI system may not be used intentionally for domestic surveillance of U.S. persons or nationals.
For the avoidance of doubt, the Department understands this limitation to prohibit the deliberate tracking, surveillance, or targeting of U.S. persons, including through obtaining or using commercially acquired personal or identifiable information.”
Altman also said in the memo that the Department had assured him that OpenAI’s services would not be used by intelligence agencies, including the NSA, without changes to the contract. He added that if he received what he believed to be an unconstitutional order, he would rather go to prison than comply.
Additionally, OpenAI’s CEO conceded in the note that the company should not have rushed to close the deal on Friday, February 27, as the issues were “extremely complex and required clear communication.” Altman said the company was “trying to de-escalate the situation and avoid a much worse outcome” but ultimately “came across as opportunistic.” If you recall, OpenAI announced the partnership shortly after President Trump ordered all US government agencies to stop using Claude and other Anthropic services. Anthropic had been working with the US government since 2024.
The Department of Defense and Defense Secretary Pete Hegseth had been pressuring Anthropic to remove guardrails from its AI so it could be used for any “lawful” purpose, including mass surveillance and the development of fully autonomous weapons. Anthropic refused to comply with Hegseth’s demands, saying in a statement that “no amount of intimidation or punishment” would change its “stance on mass domestic surveillance or fully autonomous weapons.” Trump issued his order in response. The Defense Department had also taken initial steps to designate Anthropic a “supply chain risk,” a label typically reserved for Chinese companies suspected of working with the Chinese government.
Altman said that in his conversations with U.S. officials, he argued that Anthropic should not be designated a supply chain risk and that he hoped the Department of Defense would offer Anthropic the same deal that OpenAI had agreed to. During an AMA session, he added that if the terms had been the same, he thought Anthropic should have accepted.
After the OpenAI deal was announced, Anthropic’s Claude rose to the top of the App Store’s free apps chart, overtaking ChatGPT and Google Gemini. Capitalizing on Claude’s sudden popularity, Anthropic launched a memory import tool to make it easier to switch to its chatbot from a competitor’s. Meanwhile, ChatGPT uninstalls jumped 295% day-over-day, according to Sensor Tower.