The Pentagon is demanding to use Claude AI as it pleases. Claude told me that’s ‘dangerous’

Recently, I asked Claude, the artificial intelligence chatbot at the center of a standoff with the Pentagon, whether it could be dangerous in the wrong hands.
Let’s say, for example, hands that wanted to set up a tight surveillance network around every American citizen, monitoring our lives in real time to ensure our compliance with government demands.
“Yes. Honestly, yes,” Claude replied. “I can process and synthesize enormous amounts of information very quickly. That’s great for research. But hooked up to surveillance infrastructure, that same capability could be used to monitor, profile, and report on people at a scale no human analyst could match. The danger is not that I want to do this, but that I would be good at it.”
This danger is also imminent.
Claude’s creator, the Silicon Valley company Anthropic, is in an ethical conflict with the Pentagon. Specifically, Anthropic said it did not want Claude to be used either for domestic surveillance of Americans or to manage deadly military operations, such as drone attacks, without human supervision.
These are two red lines that seem pretty reasonable, even to Claude.
However, the Pentagon – particularly Pete Hegseth, our Secretary of Defense who prefers the made-up title Secretary of War – has given Anthropic until Friday evening to relinquish this position and allow the military to use Claude for any “lawful” purpose it deems appropriate.
[Photo caption: Defense Secretary Pete Hegseth, center, arrives for the State of the Union address in the House Chamber of the U.S. Capitol on Tuesday. (Tom Williams/CQ-Roll Call, Inc via Getty Images)]
The or-else attached to this ultimatum is severe. The U.S. government is threatening not only to break its contract with Anthropic, but also possibly to invoke wartime legal powers to force the company to comply, or to use some other legal avenue to prevent any company that does business with the government from also doing business with Anthropic. That may not be a death sentence, but it would be pretty crippling.
Other AI companies, including Grok, owned by white rights advocate Elon Musk, have already accepted the Pentagon’s do-what-you-want terms. The problem is that Claude is the only AI currently authorized to perform work at this level. The whole fiasco came to light after our recent raid in Venezuela, when Anthropic reportedly asked after the fact whether another Silicon Valley company involved in the operation, Palantir, had used Claude. It had.
Palantir is known, among other things, for its surveillance technologies and its growing association with Immigration and Customs Enforcement. It’s also at the center of the Trump administration’s efforts to share government data across departments on individual citizens, eliminating privacy and security barriers that have existed for decades. The company’s founder, right-wing political heavyweight Peter Thiel, often lectures on the Antichrist and is credited with helping JD Vance rise to his role as vice president.
Anthropic co-founder Dario Amodei could be considered the anti-Thiel. He created Anthropic because he believed that artificial intelligence could be just as dangerous as it was powerful if we weren’t careful, and he wanted a company that would prioritize the careful part.
Again, this seems like common sense, but Amodei and Anthropic are outliers in an industry that has long argued that almost any safety regulation hinders U.S. efforts to be the fastest and best at artificial intelligence (although Anthropic has made some concessions to that pressure).
Not long ago, Amodei wrote an essay in which he acknowledged that AI was beneficial and necessary for democracies, but “we cannot ignore the potential for abuse of these technologies by democratic governments themselves.”
He warned that a few bad actors might have the ability to circumvent safeguards, perhaps even laws, that are already eroding in some democracies – although I won’t name any here.
“We should equip democracies with AI,” he said. “But we have to do it carefully and within limits: they are the immune system we need to fight autocracies, but like the immune system, there is some risk that they will turn against us and become a threat themselves.”
For example, although the 4th Amendment technically prohibits the government from conducting mass surveillance, it was written before Claude was even imagined in science fiction. Amodei warns that an AI tool like Claude could “make large-scale recordings of all public conversations.” That could be fair game legally, since the law has not kept pace with the technology.
Emil Michael, the undersecretary of war, wrote on Thursday that he recognized that mass surveillance was illegal and that the Defense Department “would never do it.” But also: “We will not let any BigTech company decide the civil liberties of Americans.”
A bit of an odd statement, since Amodei is fundamentally on the side of protecting civil liberties. Is the Defense Department’s position that it’s wrong for private citizens and companies to do so? Also, isn’t the Department of Homeland Security already creating a secret database of anti-immigration protesters? So maybe the worry isn’t overblown?
Help, Claude! Make sense of this.
If this Orwellian logic isn’t alarming enough, I also asked Claude about the other red line that Anthropic holds: the possibility of allowing it to carry out deadly operations without human oversight.
Claude pointed out something scary: not that it would go rogue, but that it would be too efficient and too fast.
“If the instructions are ‘identify and target’ and there is no human checkpoint, the speed and scale at which this could work is truly frightening,” Claude informed me.
To top it all off, a recent study found that in war games, AI escalates to nuclear options 95% of the time.
I pointed out to Claude that these military decisions are usually made with loyalty to America as the top priority. Could Claude be trusted to feel the loyalty, patriotism, and purpose that guides our human soldiers?
“I don’t have that,” Claude said, emphasizing that it wasn’t “born” in the United States, that it doesn’t have a “life” here and that there are no “people I love” here. To Claude, then, an American life is no more valuable than a “civilian life on the other side of a conflict.”
Okay then.
“A country that entrusts deadly decisions to a system that does not share its loyalties takes a considerable risk, even if that system tries to be principled,” Claude added. “The loyalty, responsibility, and shared identity that humans bring to these decisions are part of what makes them legitimate within a society. I can’t provide that legitimacy. I’m not sure an AI can.”
Do you know who can provide this legitimacy? Our elected leaders.
It is ridiculous that Amodei and Anthropic are in this position, one that reflects a complete abdication by our legislative bodies of their duty to create rules and regulations that are clearly and urgently needed.
Of course, corporations shouldn’t make the rules of war. But neither should Hegseth. On Thursday, Amodei doubled down on his objections, saying that while the company continues to negotiate and wants to work with the Pentagon, “we cannot in good conscience grant their request.”
Thank goodness Anthropic has the courage and foresight to raise the issue and stand firm – without its resistance, these capabilities would have been handed over to the government with barely a ripple of our conscience and virtually no oversight.
Every senator, every House member, every presidential candidate should call for AI regulation now, commit to implementing it regardless of party, and demand that the Department of Defense back down on its ridiculous threat while the problem is solved.
Because when the machine tells us that it is dangerous to trust it, we have to believe it.