A New AI Documentary Puts CEOs in the Hot Seat—but Goes Too Easy on Them


It’s not easy to get an interview with Sam Altman: just ask Adam Bhala Lough, the filmmaker behind the recent documentary Deepfaking Sam Altman.

Lough originally planned a feature exploring the potential and dangers of AI that would center on a conversation with the OpenAI CEO. But after his requests were ignored for months, he instead commissioned a chatbot that mimicked Altman’s speech patterns and approximated his facial expressions through a digital avatar.

The real Altman, however, did sit down for the new feature The AI Doc: Or How I Became an Apocaloptimist, which will be released in theaters on March 27. So did Dario Amodei, CEO of Anthropic, and Demis Hassabis, co-founder and CEO of Google’s DeepMind. (Though the filmmakers say they requested interviews with Meta’s Mark Zuckerberg and X’s Elon Musk, neither appears.)

This is an impressive level of access for co-director and on-screen protagonist Daniel Roher, whose 2022 documentary Navalny, about Russian opposition leader Alexei Navalny, won an Oscar. The problem is that once on camera, Altman and company say little we haven’t heard before, and they settle for glib answers about their responsibilities to the rest of the species. When Roher asks Altman why anyone should trust him to guide the rapid acceleration of AI, given its extreme ramifications, Altman responds: “You shouldn’t.” The line of questioning ends there.

The AI Doc is framed by Roher’s anxiety over the imminent arrival of his first child, a son, with his wife, filmmaker Caroline Lindy. He wonders what kind of world his boy will inherit and whether the rise of artificial intelligence will foreclose the experiences that make us autonomous adults. In Roher’s early interviews, all of his worst fears seem to be confirmed. Tristan Harris, co-founder of the nonprofit Center for Humane Technology, lands one of the hardest punches: “I know people who are working on AI risks who don’t expect their kids to make it through high school,” he says, invoking a scenario in which the technology demolishes the very infrastructure of traditional education.

Despite the growing sense of panic, Roher and co-director Charlie Tyrell present an admirably solid crash course in AI and the biggest questions it poses, aided by Roher’s insistence on defining terms in plain language rather than startup buzzwords. Visually, the film is delightfully human, with colorful drawings and paintings by Roher, while whimsical stop-motion sequences hint at the influence of producer Daniel Kwan, Oscar-winning co-director of Everything Everywhere All at Once. Vibrant creativity amid portents of doom provides some of the hope Roher desperately seeks.

Yet subsequent interviews with Silicon Valley techno-optimists promising AI capable of defeating disease and climate change, followed by CEOs striking their usual balance between hype and a tone of sober caution, pass without much interrogation of their grandiose claims. The film doesn’t spend a moment asking why or how we should expect the current crop of large, fallible language models to give rise to the mythical “artificial general intelligence” (AGI) that would surpass human cognition. There are, at best, vague concessions (from venture capitalist Reid Hoffman, for example) that any benefit will be accompanied by unspecified harms.

Even when major players claim that the near-term implications of AI are as significant as the advent of nuclear weapons, they default to a familiar playbook, presenting their products as singularly consequential one way or another, and implying that they alone can be trusted to steward the progress.
