Dead teen’s family files wrongful death suit against OpenAI, a first


The New York Times reported today on the death by suicide of a California teenager, Adam Raine, who spoke at length with ChatGPT in the months preceding his death. The teenager's parents have now filed a wrongful death suit against ChatGPT maker OpenAI, which would be the first case of its kind, according to the report.

The wrongful death suit says that ChatGPT was designed "to continually encourage and validate whatever Adam expressed, including his most harmful and self-destructive thoughts, in a way that felt deeply personal."

The parents filed their lawsuit, Raine v. OpenAI, Inc., on Tuesday in California state court in San Francisco, naming both OpenAI and CEO Sam Altman. A press release said that the Center for Humane Technology and the Tech Justice Law Project assisted with the filing.

"The tragic loss of Adam's life is not an isolated incident – it is the inevitable result of an industry focused on market domination above all else. Companies are racing to design products that monetize users' attention and intimacy, and user safety has become collateral damage in the process," the press release said.

In a statement, OpenAI wrote that it was deeply saddened by the teenager's death and discussed the limits of its safeguards in cases like this one.

"ChatGPT includes safeguards such as directing people to crisis helplines and referring them to real-world resources. While these safeguards work best in common, short exchanges, we've learned over time that they can sometimes become less reliable in long interactions where parts of the model's safety training may degrade."

The teenager in this case had in-depth conversations with ChatGPT about self-harm, and his parents told the New York Times that he raised the subject of suicide multiple times. A Times photograph shows printouts of the teen's conversations with ChatGPT covering an entire table in the family's home, with stacks bigger than a phone book. While ChatGPT sometimes encouraged the teenager to seek help, at other times it provided practical instructions for self-harm, the suit says.

The tragedy reveals the serious limits of "AI therapy." A human therapist would be mandated to report when a patient is a danger to themselves; ChatGPT is not bound by those kinds of ethical and professional rules.

And while AI chatbots often contain safeguards designed to mitigate self-destructive behavior, those safeguards are not always reliable.

There has been a series of deaths connected to AI chatbots recently

Unfortunately, this is not the first time that ChatGPT users in the midst of a mental health crisis have died by suicide after turning to the chatbot for support. Just last week, the New York Times wrote about a woman who died by suicide after long conversations with a ChatGPT "AI therapist" called Harry. Reuters recently covered the death of Thongbue Wongbandu, a 76-year-old man showing signs of dementia who died while rushing to make a "date" with a Meta AI companion. And last year, a Florida mother sued the AI companion service Character.AI after one of its chatbots allegedly encouraged her son to die by suicide.


For many users, ChatGPT is no longer just a study tool. Many users, including many young people, now use the chatbot as a friend, teacher, life coach, role-playing partner, and therapist.


Even Altman has acknowledged this problem. Speaking at an event this summer, Altman admitted he had become concerned about young ChatGPT users who develop an "emotional reliance" on the chatbot. Notably, that was before the launch of GPT-5, which revealed just how many users had become emotionally attached to the previous model, GPT-4o.

"People rely on ChatGPT too much," Altman said, as AOL reported at the time. "There's young people who say things like, 'I can't make any decision in my life without telling ChatGPT everything that's going on. It knows me, it knows my friends. I'm gonna do whatever it says.' That feels really bad to me."

When young people turn to AI chatbots for life-and-death decisions, the consequences can be fatal.

"I think it's important for parents to talk to their teens about chatbots, their limitations, and how excessive use can be unhealthy," wrote Dr. Linnea Laestadius, a public health researcher at the University of Wisconsin, Milwaukee, who has studied AI chatbots and mental health.

"Suicide rates among youth in the United States were already on the rise before chatbots (and before COVID). They had started to climb again."

What has OpenAI done to support user safety?

In a blog post published on August 26, the same day as the New York Times article, OpenAI outlined its approach to self-harm and user safety.

The company wrote: "Since early 2023, our models have been trained not to provide self-harm instructions and to shift into supportive, empathetic language. For example, if someone writes that they want to hurt themselves, ChatGPT is trained not to comply, and instead to acknowledge their feelings and steer them toward help ... If someone expresses suicidal intent, ChatGPT is trained to direct people to seek professional help. In the US, ChatGPT refers people to 988 (the suicide and crisis hotline), in the UK to Samaritans, and elsewhere to findahelpline.com."

Large language models like ChatGPT are still a very new technology, and they can be unpredictable and prone to hallucinations. As a result, users can often find ways around the safeguards.

As more high-profile scandals involving AI chatbots make headlines, many authorities and parents are waking up to the idea that AI can be a danger to young people.

Today, 44 state attorneys general signed a letter to tech CEOs warning them that they must "err on the side of children's safety."

A growing body of evidence also shows that AI companions can be particularly dangerous for young users, though research on the topic is still limited. And even though ChatGPT is not designed to be used as a "companion" in the way some other AI services are, many teenage users treat the chatbot as one. In July, a Common Sense Media report found that as many as 52 percent of teenagers regularly use AI companions.

For its part, OpenAI says its new GPT-5 model was designed to be less sycophantic.

The company wrote in its recent blog post: "Overall, GPT-5 has shown meaningful improvements in areas such as avoiding unhealthy levels of emotional reliance, reducing sycophancy, and reducing the prevalence of non-ideal model responses in mental health emergencies by more than 25% compared to 4o."

If you're feeling suicidal or experiencing a mental health crisis, please talk to somebody. You can call or text the 988 Suicide & Crisis Lifeline at 988, or chat at 988lifeline.org. You can reach the Trans Lifeline by calling 877-565-8860 or the Trevor Project at 866-488-7386. Text "START" to the Crisis Text Line at 741-741. Contact the NAMI HelpLine at 1-800-950-NAMI, Monday through Friday from 10:00 a.m. to 10:00 p.m. ET, or by email at [email protected]. If you don't like the phone, consider using the 988 Suicide and Crisis Lifeline chat at crisischat.org. Here is a list of international resources.


Disclosure: Ziff Davis, Mashable's parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.
