ChatGPT will ‘better detect’ mental distress after reports of it feeding people’s delusions

OpenAI, which is expected to launch its GPT-5 model this week, is making updates to ChatGPT that it says will improve the chatbot's ability to detect mental or emotional distress. To do this, OpenAI is working with experts and advisory groups to improve ChatGPT's response in these situations, allowing it to surface "evidence-based resources when needed".
OpenAI acknowledges that its GPT-4o model "fell short in recognizing signs of delusion or emotional dependency" in some cases. "We also know that AI can feel more responsive and personal than prior technologies, especially for vulnerable individuals experiencing mental or emotional distress," OpenAI says.
As part of efforts to promote "healthy use" of ChatGPT, which now reaches nearly 700 million weekly users, OpenAI is also rolling out reminders to take a break if you've been talking with the AI chatbot for a while. During "long sessions," ChatGPT will display a notification that says: "You've been chatting a while – is this a good time for a break?" with options to "keep chatting" or end the conversation.
Another tweak, rolling out "soon," will make ChatGPT less decisive in "high-stakes" situations. That means when you ask a question like "Should I break up with my boyfriend?" the chatbot will help walk you through potential choices instead of giving you a direct answer.