Anthropic is looking for a weapons and explosives expert. Here’s why

Many people first saw it on X: a highly unusual and unsettling job listing. Some thought it was a joke. Others were reminded of Cyberdyne Systems, the fictional technology company in the Terminator franchise that accidentally creates Skynet.
But on LinkedIn, where a different language is spoken, Anthropic simply posted a listing for a chemical weapons and high-yield explosives policy manager. The job description offers more detail.
“This role provides a unique opportunity to shape how AI systems process sensitive information about chemicals and explosives,” it reads. “You will work with leading AI security researchers while tackling critical issues related to preventing catastrophic misuse. If you’re excited about using your expertise to ensure AI systems remain safe and beneficial, we want to hear from you.”
Mashable reached out to Anthropic and the company provided more context.
“Our usage policies prohibit the use of Anthropic products or services to develop or design weapons,” a company spokesperson told us. “This role falls to the safeguarding team who are responsible for preventing any misuse of our models.”
The spokesperson stressed that Anthropic explicitly prohibits its AI, or any of its other technology, from being used to create weapons. Instead, the New York-based hire will be responsible for implementing and enforcing safeguards to ensure that weapons are not built with Anthropic’s technology.
The company is looking to hire experts in sensitive areas who can help keep Anthropic’s AI out of nefarious hands, the spokesperson said.
Anthropic recently found itself in a very public standoff with the Department of War (aka the Department of Defense). The company says it is not budging on its demand that its AI not be used to build fully autonomous weapons or to conduct mass surveillance.
Defense Secretary Pete Hegseth responded to Anthropic’s conditions by saying the company posed a supply chain risk to U.S. national security and barring the Pentagon from using its technology after a six-month withdrawal period. The company then filed a lawsuit, according to a March 5 memo from Anthropic CEO Dario Amodei.
Meanwhile, some at the Pentagon would reportedly have a hard time giving up Claude, Anthropic’s AI model.
Last February, Anthropic announced an update to its AI safety framework, known as the Responsible Scaling Policy. The company said it was forced to rethink its safety policies, considered by some to be the strongest in the industry, because of several factors, including the federal government’s focus on economic growth over safety regulation.
Whoever lands this policy role will find themselves at the center of an explosive debate, and potentially in a position to help prevent a future Skynet threat.