How AI safety took a backseat to military money

Hey there, and welcome to Decoder! I'm Hayden Field, a senior reporter at The Verge and your guest host of the Thursday episode. I have another couple of shows for you while Nilay is on parental leave, and we'll spend more time diving into some of the unexpected consequences of the AI boom.
Today, I'm talking with Heidy Khlaaf, chief AI scientist at the AI Now Institute and one of the leading experts on AI safety in autonomous weapons systems. Heidy actually worked at OpenAI in the past; from late 2020 to mid-2021, she was a senior safety engineer at the company during a critical period, when she developed safety and risk assessment frameworks for its Codex coding tool.
Now, the same companies that once championed safety and ethics in their mission statements are actively developing and selling technologies for military applications.
In 2024, OpenAI removed its ban on "military and warfare" use cases from its terms of service. Since then, the company has signed a deal with autonomous weapons maker Anduril and, last June, landed a $200 million contract with the Department of Defense.
OpenAI isn't alone. Anthropic, which has a reputation as one of the most safety-focused AI labs, has teamed up with Palantir to make its models available for US defense and intelligence purposes, and it also won its own $200 million DOD contract. And major tech players like Amazon, Google, and Microsoft, which have long worked with the government, are also pushing AI products for defense and intelligence, despite growing outcry from critics and employee activists.
So I wanted to have Heidy on the show to walk me through this major shift in the AI industry, what's driving it, and why she thinks some of the leading AI companies are far too eager to deploy generative AI in high-risk scenarios. I also wanted to know what this push toward military-grade deployment means for bad actors who might want to use AI systems to develop chemical, biological, radiological, and nuclear weapons, a risk that AI companies themselves say they're increasingly worried about.
Okay: here's Heidy Khlaaf on AI in the military. Here we go.
If you'd like to read more about what we discussed in this episode, check out the links below:
Questions or comments about this episode? Hit us up at decoder@theverge.com. We really do read every email!


