The AI industry has a big Chicken Little problem

Entrepreneur Matt Shumer’s essay, “Something Big Is Happening,” is going mega-viral on X, where it has been viewed 42 million times and counting.
The article warns that the rapid progress made in the AI industry in recent weeks threatens to change the world as we know it. Shumer specifically compares the present moment to the weeks and months before the COVID-19 pandemic, and says most people won’t hear the warning “until it’s too late.”
We’ve heard warnings like this before from AI doomsayers, but Shumer wants us to believe that this time the ground really is shifting beneath our feet.
“But it’s time now,” he wrote. “Not in an ‘at some point we should talk about this’ way. In a ‘this is happening right now and I need you to understand it’ way.”
Unfortunately for Shumer, we’ve heard warnings like this before. We heard it again, and again, and again, and again, and again, and again, and again. In the long run, some of these predictions will surely come true – many people much smarter than me certainly believe that – but I’m not changing my weekend plans to build a bunker.
The AI industry now faces a massive Chicken Little problem, which makes it difficult to take dire warnings like this seriously. Because, as I’ve written before, when an AI entrepreneur tells you that AI is a world-changing technology on the order of COVID-19 or the agricultural revolution, you need to take that message for what it really is: a sales pitch.
Don’t make me tap the sign.
Why people are so worried about AI right now
Shumer’s essay claims that the latest generative AI models from OpenAI and Anthropic are already capable of doing much of his own work.
“Here’s what no one outside of the tech sector really understands yet: The reason so many people in the industry are sounding the alarm right now is because this has already happened to us. We’re not making predictions. We’re telling you what has already happened in our own work, and warning you that you’re next.”
The message clearly resonated on X. Across the political spectrum, high-profile accounts with millions of followers are sharing it as an urgent warning.
To understand Shumer’s message, you need to understand big concepts like AGI and the singularity. AGI, or artificial general intelligence, is a hypothetical AI program that “possesses human-like intelligence and can perform any intellectual task that a human can.” The singularity refers to the threshold at which technology begins improving itself, allowing it to advance exponentially.
Shumer is correct that there is good reason to believe progress has been made toward both AGI and the singularity.
OpenAI’s latest coding model, GPT-5.3-Codex, helped create itself. Anthropic has made similar statements about recent product launches. And there’s no denying that generative AI is now so good at writing code that it has decimated the job market for entry-level coders.
It is absolutely true that generative AI is advancing rapidly and will surely have a significant impact on daily life, the job market and the future.
Despite this, it’s hard to believe a weather report from Chicken Little. And it’s even harder to believe everything a car salesman tells you about the amazing new convertible that just pulled into the parking lot.
Indeed, as Shumer’s post went viral, AI skeptics joined the fray.
It’s not time to panic yet
There are many reasons to be skeptical of Shumer’s claims. In the essay, he provides two specific examples of the capabilities of generative AI: its ability to conduct legal reasoning comparable to that of top lawyers and its ability to create, test, and debug applications.
Let’s look at the application argument first:
I’ll say to the AI, “I want to build this app. Here’s what it should do, here’s roughly what it should look like. Figure out the user flow, the design, all that.” And it does. It writes tens of thousands of lines of code. Then, and this is the part that would have been unthinkable a year ago, it opens the app itself. It clicks the buttons. It tests the features. It uses the app like a person would. If it doesn’t like how something looks or feels, it goes back and changes it on its own. It iterates, like a developer would, fixing and refining until it’s satisfied. Only once it has decided that the app meets its own standards does it come back to me and say, “This is ready for you to test.” And when I test it, it’s usually perfect.
I’m not exaggerating. This is what my Monday looked like this week.
Is it impressive? Absolutely!
At the same time, it’s a running joke in the tech world that there’s already an app for everything. (“There’s an app for that.”) That means coding models can pattern their work on tens of thousands of existing apps. Will the world really change irrevocably because we can now build new apps faster?
Let’s look at the legal claim, in which Shumer says that AI is “like having a team of [lawyers] available instantly.” There’s just one problem: Lawyers across the country are being sanctioned for actually using AI. One lawyer tracking AI hallucinations in the legal profession has documented 912 cases so far.
It’s hard to swallow warnings about AGI when even the most advanced LLMs are still completely incapable of fact-checking. According to OpenAI’s own documentation, its latest model, GPT-5.2, has a 10.9% hallucination rate. Even when it has access to the internet to check its work, it still hallucinates 5.8% of the time. Would you trust a person who hallucinates only six percent of the time?
Yes, it is possible that a rapid leap forward is imminent. But it’s also possible that the AI industry will quickly reach a point of diminishing returns. And there are good reasons to believe the latter is likely. This week, OpenAI introduced ads to ChatGPT, a tactic it previously described as a “last resort.” OpenAI is also rolling out a new “Adult ChatGPT” mode that lets people engage in erotic role-play with the chatbot. This is not the behavior of a company preparing to unleash artificial superintelligence on an unsuspecting world.
This article reflects the opinion of the author.
Disclosure: Ziff Davis, the parent company of Mashable, filed a lawsuit in April 2025 against OpenAI, alleging that it had violated Ziff Davis’ copyrights in the training and operation of its AI systems.