Anthropic and Donald Trump’s Dangerous Alignment Problem


In the new year, Musk welcomed Hegseth to a meeting at SpaceX headquarters, where Hegseth unveiled a new partnership with Grok, which had recently spent much of its time removing clothes from women and children in photographs. The Pentagon, Hegseth said, “will not employ AI models that will not allow you to fight wars.” Semafor reported that this was a pointed jab at Anthropic. Shortly afterward, according to the government’s account, an administration official received a phone call from a contact at Palantir. An Anthropic employee, the official claimed, had been asking curious questions about Claude’s alleged role in the recent military raid that captured Venezuelan President Nicolás Maduro. This inquiry was seen not as idle curiosity but as an act of insubordination. (Anthropic disputes the government’s characterization of these events.)

If the Pentagon wasn’t going to tolerate questions, it certainly wasn’t in the business of being told what to do. According to a senior administration official close to the negotiations, Michael asked Amodei what would happen if an upgraded version of Claude and its (currently theoretical) missile-defense capabilities — identifying, acquiring, and neutralizing incoming attacks — were the only thing standing between the country and a barrage of Chinese hypersonic missiles. The plausibility of this hypothetical left something to be desired: precision missile-defense systems were probably a safer bet than a large language model with erratic capabilities. (LLMs have historically proven incapable of counting the number of “R’s” in the word “strawberry.”) In the government’s account, which Anthropic strenuously denies, Amodei assured Pentagon officials that in such a scenario he would personally be available to field a customer-service call. The senior official said to me, “What do you mean? We’ve got about ninety seconds!”

Any remaining goodwill between the Pentagon and Anthropic quickly deteriorated. On February 14, Anthropic learned that refusing the government’s requests could result in cancellation of the contract. The next day, Laura Loomer, a right-wing activist, tweeted a scoop: according to an anonymous War Department source, “many senior DoW officials are starting to view them as a supply chain risk and we may require all of our suppliers and contractors to certify that they are not using any Anthropic models.” Such a designation had previously been applied only to infrastructure companies, such as Huawei or Kaspersky Labs, with ties to adversarial foreign governments; there was no domestic precedent. It was also unclear whether the government’s threat to designate Anthropic a supply chain risk was meant narrowly or broadly. The former, which would prohibit defense contractors from using Claude in their government workflows, would be irritating for Anthropic but bearable. The latter, which would prohibit any company doing business with the government from using Claude, would cripple the company.

The Pentagon set a deadline of 5:01 p.m. on Friday, February 27, for Anthropic to get in line. The consequences of demurral remained unclear. The government could declare the company a supply chain risk, or it could invoke the Defense Production Act, leading to partial or full nationalization of the company. The position was patently inconsistent: Claude was both a vital national asset and so dangerous that it merited quarantine. On Thursday, the day before the deadline, Amodei issued a statement refusing to cross the remaining red lines. Hours later, Michael tweeted that Amodei was a “liar” with a “god complex.”

The two parties nevertheless moved closer to an agreement. On Friday morning, the Pentagon agreed to remove what Anthropic’s negotiators considered weasel words from a clause on autonomous weapons — lawyerly phrases like “where appropriate” that can effectively hollow out the surrounding contract language. The final point of contention was surveillance. Anthropic was willing to allow Claude to monitor individuals under the jurisdiction of a FISA court, the secret tribunal that oversees requests for surveillance warrants involving foreign powers or their agents on domestic soil. This deployment of Claude would be governed by national security law rather than ordinary commercial or civil law. What mattered to Anthropic was a guarantee that Claude would have nothing to do with the analysis of bulk data collected domestically, an issue of particular importance to its employees in the context of ICE raids. The Pentagon’s position was that all this petty horse-trading was moot. Domestic mass surveillance was illegal, officials said, and the Defense Department didn’t even engage in it.
