OpenAI shares data on ChatGPT users with suicidal thoughts, psychosis

OpenAI has released new estimates of the number of ChatGPT users who are experiencing possible signs of mental health emergencies, including mania, psychosis, or suicidal thoughts.
The company said that around 0.07% of active ChatGPT users in a given week showed such signs, adding that its artificial intelligence (AI) chatbot recognizes and responds to these sensitive conversations.
Although OpenAI maintains that these cases are “extremely rare”, critics say that even a small percentage can represent hundreds of thousands of people, since ChatGPT recently reached 800 million weekly active users, according to boss Sam Altman.
As scrutiny intensifies, the company said it has built a network of experts around the world to advise it.
These experts include more than 170 psychiatrists, psychologists and primary care physicians who have practiced in 60 countries, the company said.
They designed a series of responses in ChatGPT to encourage users to seek help in the real world, according to OpenAI.
But the company’s data overview raised eyebrows among some mental health professionals.
“Even though 0.07% seems like a small percentage, at the level of a population with hundreds of millions of users, it can actually represent quite a lot of people,” said Dr. Jason Nagata, a professor who studies technology use among young adults at the University of California, San Francisco.
“AI can expand access to mental health support and in some ways support mental health, but we need to be aware of its limitations,” added Dr. Nagata.
The company also estimates that 0.15% of ChatGPT users have conversations that include “explicit indicators of planning or potential suicidal intent.”
OpenAI said recent updates to its chatbot are designed to “respond safely and empathetically to potential signs of delusion or mania” and note “indirect signals of self-harm or suicide risk.”
ChatGPT has also been trained to reroute sensitive conversations “from other models to safer models.”
In response to questions from the BBC about criticism of the number of people potentially affected, OpenAI said that even this small percentage of users represents a significant number of people and noted that it is taking the changes seriously.
The changes come as OpenAI faces growing legal scrutiny over how ChatGPT interacts with users.
In one of the most high-profile lawsuits recently filed against OpenAI, a California couple sued the company over the death of their teenage son in April, alleging that ChatGPT encouraged him to take his own life.
The lawsuit was filed by the parents of 16-year-old Adam Raine and was the first legal action accusing OpenAI of wrongful death.
In another case, the suspect in an August murder-suicide in Greenwich, Connecticut, posted hours of his conversations with ChatGPT, which appeared to fuel the alleged perpetrator’s delusions.
A growing number of users are suffering from AI psychosis because “chatbots create the illusion of reality,” said Professor Robin Feldman, director of the AI Law & Innovation Institute at the University of California Law School. “It’s a powerful illusion.”
She said OpenAI deserved credit for “sharing statistics and for its efforts to improve the problem,” but added: “the company may display all kinds of warnings on the screen, but a mentally at-risk person may not be able to heed those warnings.”