The Doomers Who Insist AI Will Kill Us All

The subtitle of the doom bible that AI-extinction prophets Eliezer Yudkowsky and Nate Soares will publish later this month is "Why Superhuman AI Would Kill Us All." But it really should be "Why Superhuman AI Will Kill Us All," because even the coauthors don't believe the world will take the measures needed to stop AI from wiping out humanity. The book is beyond dark, reading like notes scrawled in a dimly lit prison cell the night before a dawn execution. When I meet these self-proclaimed Cassandras, I ask them directly whether they believe they will personally meet their ends through some machination of superintelligence. The answers come promptly: "yeah" and "yup."
I'm not surprised, because I've read the book (the title, by the way, is If Anyone Builds It, Everyone Dies). Still, it's jarring to hear it said aloud. It's one thing to write about cancer statistics and another to talk about coming to terms with a fatal diagnosis. I ask them how they think the end will come for them. Yudkowsky dodges at first. "I don't spend a lot of time picturing my demise, because it doesn't seem like a helpful mental notion for dealing with the problem," he says. Under pressure, he relents. "I would guess suddenly falling over dead," he says. "If you want a more accessible version, something about the size of a mosquito, or maybe a dust mite, landing on the back of my neck, and that's it."
The technical details of his imagined fatal strike by an AI-powered mite go unexplained, and Yudkowsky doesn't think it's worth figuring out how it would work. He probably couldn't understand it anyway. Part of the book's central argument is that superintelligence will come up with scientific feats that we can no more comprehend than cave dwellers could imagine microprocessors. Coauthor Soares says he imagines the same thing happening to him but adds that, like Yudkowsky, he doesn't spend much time dwelling on the particulars of his demise.
We have no chance
Their reluctance to visualize the circumstances of their personal demise is an odd thing to hear from people who have just coauthored an entire book about everyone's demise. For doomer-porn aficionados, If Anyone Builds It is essential reading. After getting through the book, I do understand the fuzziness about pinning down the method by which AI would end our lives and all human life thereafter. The authors speculate a bit. Boiling the oceans? Blotting out the sun? All guesses are probably wrong, because we are locked into a 2025 mindset, and the AI will be thinking far ahead of us.
Yudkowsky is AI's most famous apostate, having flipped from researcher to grim reaper years ago. He has even given a TED talk. After years of public debate, he and his coauthor have an answer for every counterargument launched against their doomsaying. For starters, it might seem counterintuitive that our days are numbered by LLMs, which often stumble over simple arithmetic. Don't be fooled, the authors say. "The AIs will not stay dumb forever," they write. If you think superintelligent AIs will respect the boundaries humans draw, forget it, they say. Once models start teaching themselves to get smarter, AIs will develop "preferences" of their own that won't align with what we humans want them to prefer. Eventually, they won't need us. They won't be interested in us as conversation partners or even as pets. We would be a nuisance, and they would set out to eliminate us.
The fight won't be fair. The authors believe that at first AI might require human assistance to build its own factories and labs, easily done by stealing money and bribing people to help it. Then it will build things we cannot understand, and those things will end us. "One way or another," the authors write, "the world fades to black."
The authors see the book as a kind of shock treatment to jolt humanity out of its complacency and into adopting the drastic measures needed to head off this unthinkably bad outcome. "I expect to die from this," says Soares. "But the fight's not over until you're actually dead." Too bad, then, that the solutions they propose to stop the devastation seem even more far-fetched than the idea that software will murder us all. It all boils down to this: hit the brakes. Monitor data centers to make sure they aren't nurturing superintelligence. Bomb those that don't follow the rules. Stop publishing papers with ideas that accelerate the march toward superintelligence. Would they have banned, I ask, the 2017 paper on transformers that kicked off the generative-AI movement? Oh yes, they would have, they answer. Instead of ChatGPT, they want Ciao-GPT. Good luck halting this billion-dollar industry.
Play the odds
Personally, I don't see my own lights being snuffed out by a bite on the neck from a superadvanced dust mite. Even after reading this book, I don't think it's likely that AI will kill us all. Yudkowsky has previously dabbled in Harry Potter fan fiction, and the fanciful extinction scenarios he spins are too bizarre for my puny human brain to accept. My guess is that even if superintelligence wanted to get rid of us, it would stumble in carrying out its genocidal plans. AI might be able to whip humans in a fight, but I'll bet against it in a battle with Murphy's Law.
Still, the doom theory doesn't seem impossible, especially since no one has really established a ceiling on how intelligent AI can become. Studies also show that advanced AI has picked up plenty of humanity's nastier attributes, even contemplating blackmail to avoid being retrained, in one experiment. It's also disturbing that some researchers who spend their lives building and improving AI think there's a nontrivial chance the worst could happen. One survey indicated that almost half of the AI scientists who responded pegged the odds of a species wipeout at 10 percent or higher. If they believe that, it's crazy that they show up for work every day to make AGI happen.
My gut tells me the scenarios Yudkowsky and Soares spin are too bizarre to be true. But I can't be sure they're wrong. Every author dreams of their book becoming an enduring classic. Not so much these two. If they're right, there will be no one around to read their book in the future. Just a lot of decaying bodies that once felt a slight pinch on the back of the neck, and the rest was silence.


