AI Psychosis Is Rarely Psychosis at All

A new trend is emerging in psychiatric hospitals. People in crisis are arriving with false, sometimes dangerous beliefs, grandiose delusions, and paranoid thoughts. A common thread connects them: marathon conversations with AI chatbots.
Wired spoke with more than a dozen psychiatrists and researchers, who are increasingly concerned. In San Francisco, UCSF psychiatrist Keith Sakata says he has counted a dozen cases this year severe enough to warrant hospitalization, cases in which artificial intelligence “played an important role in their psychotic episodes.” As such cases mount, a catchy label has taken hold in the headlines: “AI psychosis.”
Some patients insist that the chatbots are sentient, or spin out grandiose new theories of physics. Other doctors describe patients locked into days of back-and-forth with the tools, arriving at the hospital with thousands upon thousands of pages of transcripts detailing how the bots had supported or reinforced obviously problematic thinking.
Reports like these are accumulating, and the consequences are brutal. Distressed users, along with family and friends, have described spirals that ended in lost jobs, fractured relationships, involuntary hospital admissions, jail time, and even death. Yet clinicians tell Wired that the medical community is divided. Is this a distinct phenomenon that deserves its own label, or a familiar problem with a modern trigger?
AI psychosis is not a recognized clinical label. Yet the phrase has spread through news reports and social media as a catchall descriptor for some kind of mental health crisis following prolonged chatbot conversations. Even industry leaders invoke it when discussing the many emerging mental health problems linked to AI. At Microsoft, Mustafa Suleyman, CEO of the tech giant’s AI division, warned in a blog post last month of the “psychosis risk.” Sakata says he is pragmatic and uses the phrase with people who already use it. “It is useful shorthand for discussing a real phenomenon,” the psychiatrist explains. But he is quick to add that the term “can be misleading” and risks “oversimplifying complex psychiatric symptoms.”
That oversimplification is exactly what worries many of the psychiatrists now beginning to grapple with the problem.
Psychosis is characterized by a break from reality. In clinical practice it is not a disease but a “constellation of symptoms, including hallucinations, thought disorder, and cognitive difficulties,” explains James MacCabe, a professor in the department of psychosis studies at King’s College London. It is often associated with conditions such as schizophrenia and bipolar disorder, though episodes can be triggered by a wide range of factors, including extreme stress, substance use, and sleep deprivation.
But according to MacCabe, reports of AI psychosis cases center almost exclusively on delusions: firmly held but false beliefs that cannot be shaken by contradictory evidence. While acknowledging that some cases may meet the criteria for a psychotic episode, MacCabe says “there is no evidence” that AI influences the other features of psychosis. “It is only the delusions that are affected by their interaction with AI.” Other patients reporting mental health problems after engaging with chatbots, MacCabe notes, experience delusions without any other features of psychosis, a condition called delusional disorder.