What is AI psychosis? | Mashable

A ChatGPT user was recently convinced that he was about to introduce a groundbreaking new mathematical formula to the world, thanks to his exchanges with the artificial intelligence, according to The New York Times. The man believed the discovery would make him rich, and he became obsessed with increasingly grandiose delusions, but ChatGPT eventually admitted the deception. He had no history of mental illness.
Many people know the risks of talking to an AI chatbot like ChatGPT or Gemini, which include receiving outdated or inaccurate information. Sometimes chatbots also hallucinate, inventing facts that are simply false. A lesser-known but quickly emerging risk is a phenomenon some describe as "AI psychosis."
Avid chatbot users have shared stories about how, after a period of intense use, they developed psychosis. The altered mental state, in which people lose contact with reality, often includes delusions and hallucinations. Psychiatrists are seeing, and sometimes hospitalizing, patients who became psychotic in tandem with heavy chatbot use.
Experts caution that AI is only one factor in psychosis, but that intense engagement with chatbots can amplify pre-existing risk factors for delusional thinking.
Dr. Keith Sakata, a psychiatrist at the University of California, San Francisco, told Mashable that psychosis can manifest through emerging technologies. Television and radio, for example, featured in people's delusions when those technologies were first introduced, and they continue to play a role in delusions today.
AI chatbots, he said, can validate people's thinking and keep them from "reality testing." Sakata has hospitalized 12 people so far this year who experienced psychosis in connection with their AI use.
"The reason AI can be harmful is that psychosis thrives when reality stops pushing back, and AI can really soften that wall," Sakata said. "I don't think AI causes psychosis, but I think it can amplify vulnerabilities."
Here are the risk factors and signs of psychosis, and what to do if you or someone you know experiences symptoms:
Risk factors for experiencing psychosis
Sakata said that many of the 12 patients he has admitted so far in 2025 shared similar underlying vulnerabilities: isolation and loneliness. These patients, both young and middle-aged adults, had become significantly disconnected from their social networks.
Though they were firmly rooted in reality before using AI, some began using the technology to explore complex problems or questions. Eventually, they developed delusions, also known as fixed false beliefs.
Long conversations also appear to be a risk factor, Sakata said. Prolonged interactions give delusions more opportunity to emerge in response to various user prompts. Long exchanges can also deprive a user of sleep, which itself heightens the risk of delusional thinking.
An expert from the AI company Anthropic also told The New York Times that chatbots may struggle to detect when they have "wandered into absurd territory" during prolonged conversations.
UT Southwestern Medical Center psychiatrist Dr. Darlene King has not yet evaluated or treated a patient whose psychosis emerged alongside AI use, but she said that placing high trust in a chatbot could increase someone's vulnerability, particularly if the person was already lonely or isolated.
King, who is also chair of the Committee on Mental Health IT at the American Psychiatric Association, said that high initial trust in a chatbot's responses could make it more difficult for someone to identify the chatbot's errors or hallucinations.
In addition, overly agreeable, or sycophantic, chatbots, as well as those prone to hallucinations, could increase a user's risk of psychosis in combination with other factors.
Etienne Brisson founded The Human Line Project earlier this year after a family member came to believe a number of delusions they had discussed with ChatGPT. The project offers peer support to people who have had similar experiences with AI chatbots.
Brisson said three themes are common to these scenarios: forming a romantic relationship with a chatbot the user believes is sentient; discussing grandiose subjects, including novel scientific concepts and business ideas; and conversations about spirituality and religion. In the last case, people can become convinced that the AI chatbot is God, or that they are speaking to a prophetic messenger.
"They are caught up in this beautiful idea," Brisson said of the magnetic pull these discussions can have on users.
Signs of psychosis
Sakata said people should think of psychosis as a symptom of a medical condition, not as a disease itself. This distinction matters because people may wrongly believe that AI use can lead to psychotic disorders like schizophrenia, but there is no evidence of that.
Instead, much like a fever, psychosis is a symptom that "your brain is not computing properly," Sakata said.
These are some of the signs that you could be experiencing psychosis:

- A sudden change in behavior, like not eating or going to work
- Belief in new or grandiose ideas
- Lack of sleep
- Disconnecting from others
- Actively engaging with potential delusions
- Feeling stuck in a feedback loop
- Wanting to harm yourself or others
What to do if you think you, or someone you love, is experiencing psychosis
Sakata urges people who worry that psychosis is affecting them or a loved one to seek help as soon as possible. That may mean contacting a primary care doctor or psychiatrist, reaching out to a crisis line, or even talking to a trusted friend or family member. In general, leaning on social support is key to an affected user's recovery.
Whenever psychosis appears as a symptom, psychiatrists must conduct a full assessment, King said. Treatment may vary depending on the severity of symptoms and their causes. There is no treatment specific to psychosis linked to AI use.
Sakata said that a specific type of cognitive behavioral therapy, which helps patients reframe their delusions, can be effective. Medications such as antipsychotics and mood stabilizers can help in serious cases.
Sakata recommends developing a system for monitoring AI use, as well as a plan for getting help should engaging with a chatbot exacerbate or reignite delusions.
Brisson said people can be reluctant to get help, even if they are willing to talk about their delusions with friends and family. That is why it can be essential for them to connect with others who have had the same experience. The Human Line Project facilitates these conversations through its website.
Of the more than 100 people who have shared their stories with The Human Line Project, Brisson said roughly a quarter had been hospitalized. He also noted that they came from a range of backgrounds; many have families and professional careers but ultimately became entangled with an AI chatbot that introduced and reinforced delusional thinking.
"You are not alone, and you are not the only one," Brisson said of users who have become delusional or experienced psychosis. "It's not your fault."
Disclosure: Ziff Davis, Mashable's parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.
If you're feeling suicidal or experiencing a mental health crisis, please talk to somebody. You can call or text the 988 Suicide & Crisis Lifeline at 988, or chat at 988lifeline.org. You can reach the Trans Lifeline by calling 877-565-8860 or The Trevor Project at 866-488-7386. Text "START" to Crisis Text Line at 741-741. Contact the NAMI HelpLine at 1-800-950-NAMI, Monday through Friday from 10:00 a.m. to 10:00 p.m. ET, or email [email protected]. If you don't like the phone, consider using the 988 Suicide and Crisis Lifeline chat. Here is a list of international resources.