OpenAI got ‘sloppy’ about the wrong thing


You would think that OpenAI would exercise caution when drafting a deal with the Pentagon, one that would see its AI models used in life-or-death scenarios such as those we are currently seeing unfold in Iran.
But as we learned, the initial deal OpenAI reached with the Department of Defense on Friday night was a rushed job. Even CEO Sam Altman agrees.
“We shouldn’t have rushed to release this on Friday,” Altman wrote on X Monday evening, detailing recent contract changes that specifically prohibit the use of its models for surveillance of American citizens.
“The issues are extremely complex and require clear communication,” Altman continued. “We were genuinely trying to de-escalate the situation and avoid a much worse outcome, but I think it just seemed opportunistic and sloppy. It’s a good learning experience for me as we face higher-stakes decisions in the future.”
OpenAI’s rushed deal with the military naturally sparked a massive backlash against the company and ChatGPT, along with renewed interest in Anthropic and its competing Claude models (Anthropic has since been labeled a “supply chain risk” by Defense Secretary Pete Hegseth). Anthropic had itself been embroiled in tense exchanges with the Defense Department over the military’s demand for nearly unlimited use of its AI technology.
I completely agree with Altman that the issues surrounding contracts between AI vendors and the military are, as he puts it, “extremely complex,” and yes, Friday night’s OpenAI deal did indeed seem “opportunistic and sloppy.”
And yes, people make mistakes and learn from them. But an AI deal with the Pentagon is as high stakes as it gets, and it’s absolutely not something to do carelessly.
I’ve reached out to OpenAI for comment and will update this story once they respond.
OpenAI’s rushed deal with the Pentagon also raises the question of what else it may have handled carelessly, and that brings the discussion back to us: everyday ChatGPT users (or, increasingly, former users).
When we use AI, whether it’s ChatGPT’s models or someone else’s, we have to trust it to one degree or another. We trust it with our names, locations, job titles, family details, and maybe even our finances. It may come to know who our friends are and what interests us.
This bond of trust is something AI providers need to take seriously, perhaps even at the cost of a quick deal.
Those of us who use AI every day need to take a hard look at the vendors we deal with, what they promise us, and how they behave.



