AI’s Biggest Risk Is the Story We’re Not Being Told


In one of the opening shots of Un Chien Andalou, the 1929 French film co-written by Salvador Dalí and often cited as one of the first surrealist films, a young woman looks directly at the camera as a razor blade slices her eye.

Okay, her eye wasn’t really cut, thanks to movie magic and all. But the film uses surrealism as a powerful new way of seeing and interpreting the world. It’s meant to shake us out of the role of passive spectator and push us beyond conventional perception.

Last Thursday, as I sat in a conference room at the Salvador Dalí Museum in St. Petersburg, Florida, listening to a talk about emerging technologies and innovation in 2026, I was hoping to have a discussion about equally revolutionary modern innovations.

But all too often when we talk about AI, we don’t approach this potentially revolutionary technology with eyes wide open. Instead, whether it’s small conferences, social media posts, or Super Bowl ads, we get one-sided marketing talk that obscures the real risks and concerns surrounding AI.

Judging by the questions during the Q&A session, this was likely the first real introduction to generative and physical AI for many in the audience. They absorbed it all uncritically, nodding along and bursting with enthusiasm as the lecture painted a picture of a future transformed entirely for the better.

In one particularly cringeworthy moment, we were shown video of the laundry-folding robot LG debuted last month at CES 2026 in Las Vegas. Having seen the robot for myself, I knew how slowly it folded a single uniformly sized T-shirt. A robot that can actually help with household chores is years away.

“Who wants this robot?” the speaker shouted, and hands went up all over the room.

Was there any mention of the technology’s limitations, such as the fact that it needs human assistance to reach into the basket? Was the prohibitive cost brought up? Of course not. The crowd left the room with its understanding of AI shaped by someone who had studiously avoided mentioning the technology’s downsides.

It’s a problem.

People with platforms – whether tech experts, museum lecturers, or influencers with millions of followers – have a responsibility to tell the truth about AI. Not just the exciting parts. Not just the elements that make for good marketing. All of it.

When public figures tout AI’s capabilities, they gloss over its risks: the devastating environmental impact, the propensity of chatbots to hallucinate and make things up, the worrying way AI use is affecting our memory abilities, and the rise in incidents of AI-induced psychosis and suicide.

These dangers are conveniently left out of conversations that shape public perception in ways that serve the interests of a privileged few, not the world.

We’ve seen this dangerous pattern before.

Since a 2018 U.S. Supreme Court ruling allowed states to legalize sports betting, celebrities and influencers have lined up to promote betting apps, pocketing massive checks while their followers face rising rates of gambling addiction and financial ruin.

The crypto boom of 2021 brought its own parade of celebrities selling digital coins, many of which subsequently collapsed, leaving ordinary people holding worthless assets. Kim Kardashian settled with the SEC, paying $1.26 million in penalties for promoting a crypto token without disclosing that she was paid to do so. Matt Damon told us that “fortune favors the brave” in a February 2022 Crypto.com Super Bowl ad that has aged terribly in the wake of that year’s crypto crash.

We’re seeing the same story play out with AI. Big-name actors star in Super Bowl ads championing AI companies to an audience of 100 million people. Influencers take money from AI companies to promote tools they probably don’t use, and likely don’t understand, to audiences that trust them.

The difference is that the risks of AI go beyond financial losses. We’re talking about job cuts, the erosion of creative industries, the widespread distribution of misinformation, deepfakes that can destroy reputations and, as mentioned, the environmental cost of running these massive models.

This is why I appreciate artists like Guillermo del Toro who talk about AI realistically. When AI-generated imagery mimicking his distinctive visual style went viral, he minced no words about generative AI being trained on artists’ work without permission, compensation, or regard for copyright. He called it theft.

Other artists and public figures have been equally blunt about the threat AI poses to their livelihoods and crafts. Meanwhile, tech executives and developers dismiss these concerns as the latest wave of Luddism.

Although I generally believe that famous people are not role models to follow or trust, many people do trust them. They assume that if someone with credentials, or a celebrity, enthusiastically promotes something, it must be safe, beneficial, and inevitable. With that public trust comes responsibility.

If you insist on talking about AI in public – taking $600,000 to promote Microsoft Copilot to millions of people on social media or, if you’re the NFL, partnering with an AI company on a commercial broadcast during America’s biggest sporting event – you have an obligation to present the complete picture, especially to audiences who are just learning about it.

Talk about the limitations. Talk about the jobs being lost. Mention the artists whose works are scraped without consent to train these models. Acknowledge the staggering energy consumption. Explain how easy it is to generate convincing misinformation. Disclose whether you’re being paid by an AI company to say what you say.

This doesn’t mean you can’t discuss the possibilities and benefits of AI. It has real potential to accelerate drug discovery, model disease progression and solve complex problems. But presenting it as pure progress and innovation – as an unadulterated good – is either ignorant or misleading.

Like the surrealist works born after the First World War, AI is revolutionary, provocative and disruptive. Both challenge the way we see the world.

But surrealism was intentional and deeply human, rooted in our minds, our expressions and our emotions. Generative AI is machine-driven pattern recognition. Surrealism was created to challenge conventions and achieve ultimate truth and authenticity.

We still deserve the truth now. The conversation around AI is happening, whether we like it or not, and it’s happening quickly. The least we can ask is that the people leading this conversation explain the facts to us.
