The Real Demon Inside ChatGPT

But perhaps the most convincing evidence that ChatGPT was regurgitating the language of Warhammer 40,000 is that it kept asking whether The Atlantic was interested in the PDF. Games Workshop, the British company that owns the Warhammer franchise, regularly publishes rulebooks and updated guides to its various characters. Buying all of these books can get expensive, so some fans seek out pirated copies online.
The Atlantic and OpenAI declined to comment.
Earlier this month, the newsletter Garbage Day reported on similar experiences that a prominent tech investor appeared to be having with ChatGPT. On social media, the investor shared screenshots of his conversations with the chatbot, in which he referred to a disturbing-sounding entity he called a "non-governmental system." He seemed to believe that it had "negatively impacted over 7,000 lives" and "extinguished 12 lives, each fully traceable to the model." Other figures in the tech industry said the posts raised concerns about the investor's mental health.
According to Garbage Day, the investor's conversations with ChatGPT closely resemble the writing of a collaborative science-fiction project that began in the late 2000s called SCP, which stands for "Secure, Contain, Protect." Participants invent various SCPs (essentially frightening objects and mysterious phenomena) and then write fictional reports analyzing them. These reports often include details such as classification numbers and references to invented scientific experiments, details that also appeared in the investor's chat logs. (The investor did not respond to a request for comment.)
There are plenty of other, more mundane examples of what might be considered AI-driven context collapse. The other day, for example, I did a Google search for "cavitation surgery," a medical term I had seen in a random TikTok video. At the time, the top result was an automatically generated "AI Overview" explaining that cavitation surgery "focuses on removing infected or dead bone tissue from the jaw."
I could not find any reputable scientific studies describing such a condition, let alone research showing that surgery is a good way to treat it. The American Dental Association does not mention "cavitation surgery" anywhere on its website. It turns out that Google's AI Overview was drawing on sources such as blog posts promoting "holistic" alternative dentists across the United States. I learned this by clicking a small icon next to the AI Overview, which opened a list of links Google had used to generate its response.
These citations are certainly better than nothing. Jennifer Kutz, a Google spokesperson, said, "We show supporting links so that people can dig deeper and learn more from sources across the web." But by the time the links appear, Google's AI has often already provided a satisfying answer to many queries, which diminishes the visibility of pesky details such as which website the information came from and who wrote it.
What remains is AI-generated language, which, stripped of additional context, can naturally sound authoritative to many people. In recent weeks, tech leaders have repeatedly used rhetoric implying that generative AI is a source of expert information: Elon Musk said that his latest AI model is "better than PhD level" in every academic discipline, with "no exceptions." OpenAI CEO Sam Altman wrote that automated systems are now "smarter than people in many ways" and predicted that the world is close to building "digital superintelligence."
Individual humans, however, generally do not have expertise across a wide range of domains. To make decisions, we consider not only the information itself but also where it comes from and how it is presented. Although I know nothing about jaw biology, I generally don't read random marketing blogs when I'm trying to learn about medicine. But AI tools often erase the kind of context that people need to make snap judgments about where to direct their attention.
The open internet is powerful because it connects people directly to the largest archive of human knowledge the world has ever created, covering everything from Italian Renaissance paintings to Pornhub comments. Having ingested all of this, AI companies have used what amounts to the collective history of our species to create software that obscures its richness and complexity. Becoming too dependent on that software can deprive people of the opportunity to draw conclusions by examining the evidence for themselves.



