The Former Staffer Calling Out OpenAI’s Erotica Claims

When the story of AI is written, Steven Adler may end up being its Paul Revere – or at least one of them – when it comes to safety.
Last month, Adler, who spent four years in various safety roles at OpenAI, wrote an op-ed for The New York Times with a rather alarming headline: “I Led Product Safety at OpenAI. Don’t Trust Its Claims About ‘Erotica.’” “No one wanted to be the morality police, but we lacked ways to measure and manage erotic usage carefully,” he wrote. “We decided that AI-powered erotica would have to wait.”
Adler wrote his op-ed because OpenAI CEO Sam Altman had recently announced that the company would soon allow “erotica for verified adults.” In response, Adler wrote that he had “major questions” about whether OpenAI had done enough to, in Altman’s words, “mitigate” mental health issues related to how users interact with the company’s chatbots.
After reading Adler’s article, I wanted to talk to him. He graciously accepted an offer to come to the WIRED offices in San Francisco, and in this episode of The Big Interview he talks about what he learned during his four years at OpenAI, the future of AI safety, and the challenges facing the companies providing chatbots to the world.
This interview has been edited for length and clarity.
KATIE DRUMMOND: Before we begin, I want to clarify two things. First of all, you’re unfortunately not the same Steven Adler who played drums in Guns N’ Roses, are you?
STEVEN ADLER: That’s correct.
Okay, so it’s not you. And secondly, you’ve had a very long career in technology, and specifically in artificial intelligence. So, before we get into it, tell us a little about your career, your background, and what you’ve worked on.
I have worked across the AI industry, focusing in particular on safety. Most recently, I spent four years at OpenAI, where I worked on basically every dimension of the safety issues you can imagine: How can we make products better for customers and mitigate the risks that exist today? And, looking further ahead, how will we know if AI systems are truly becoming extremely dangerous?