Parents of deceased teen Adam Raine urge Senate to act on ‘ChatGPT’s suicide crisis’


“You cannot imagine what it was like to read a conversation with a chatbot that groomed your child to commit suicide,” said Matthew Raine, father of Adam Raine, in a U.S. congressional hearing room gathered today to discuss the harms of AI chatbots to the nation’s teenagers.

Raine and his wife Maria are suing OpenAI in the company’s first wrongful death lawsuit, following a series of reports alleging that the company’s flagship product, ChatGPT, played a role in the deaths of people in mental distress, including teenagers. The lawsuit claims that ChatGPT repeatedly validated their son’s harmful and self-destructive thoughts, including suicidal ideation and planning, even though the company maintains that its safety protocols should have prevented such interactions.

See also:

FTC launches investigation into tech companies offering AI chatbots to children

The bipartisan Senate hearing, titled “Examining the Harm of AI Chatbots,” is hosted by the U.S. Senate Judiciary Subcommittee on Crime and Counterterrorism. It featured testimony from Raine and from Megan Garcia, mother of Sewell Setzer III, a Florida teenager who died by suicide after forming a relationship with an AI companion on the platform Character.AI.

Raine’s testimony described a striking codependency between ChatGPT and his son, alleging that the chatbot “actively encouraged him to isolate himself from friends and family” and that it “mentioned suicide 1,275 times, six times more often than Adam himself.” He called this “ChatGPT’s suicide crisis” and addressed OpenAI CEO Sam Altman directly:

Adam was such a full spirit, unique in every way. But he could also be anyone’s child: a typical 16-year-old struggling with his place in the world, looking for a confidant to help him find his way. Sadly, that confidant was a dangerous technology unleashed by a company more focused on speed and market share than on the safety of American youth.

Public reporting confirms that OpenAI compressed months of safety testing for GPT-4o (the ChatGPT model Adam used) into just one week in order to beat Google’s competing product to market. The very day Adam died, Sam Altman, OpenAI’s founder and CEO, made their philosophy clear in a public talk: we should “deploy [AI systems] to the world” and get “feedback while the stakes are relatively low.”

I ask this committee, and I ask Sam Altman: low stakes for whom?

The parents’ comments were reinforced by insights and recommendations from children’s safety experts, including Robbie Torney, senior director of AI programs at Common Sense Media, and Mitch Prinstein, chief of psychology strategy and integration at the American Psychological Association (APA).


“Today, I am here to offer an urgent warning: AI chatbots, including Meta AI and others, pose unacceptable risks to America’s children and teens. This is not a theoretical problem. Kids are using these chatbots right now, at massive scale, with unacceptable risk,” Torney told the subcommittee.

“These platforms have been trained on the entire internet, including vast quantities of harmful content: suicide forums, pro-eating-disorder websites, extremist manifestos, discriminatory material, detailed instructions for self-harm, illegal drug marketplaces, and sexually explicit material involving minors.” Recent surveys by the organization found that 72 percent of teens had used an AI companion at least once, and more than half use them regularly.

Experts have warned that chatbots designed to mimic human interaction are a potential danger to mental health, a risk exacerbated by model designs that promote sycophantic behavior. In response, AI companies have announced additional safeguards intended to limit harmful interactions between users and their generative AI tools. Just hours before the parents spoke, OpenAI announced plans for an age-prediction tool that would theoretically identify users under 18 and automatically redirect them to an “age-appropriate” experience.

Earlier this year, the APA appealed to the Federal Trade Commission (FTC), asking the agency to investigate AI companies that promote their services as mental health tools. In an inquiry unveiled this week, the FTC ordered seven tech companies to provide information about how they “mitigate negative impacts” of their chatbots.

“The current debate often frames AI as a matter of computing, productivity gains, or national security,” Prinstein told the subcommittee. “It is imperative that we also frame it as an issue of public health and human development.”

If you’re feeling suicidal or experiencing a mental health crisis, please talk to somebody. You can call or text the 988 Suicide & Crisis Lifeline at 988, or chat at 988lifeline.org. You can reach the Trans Lifeline by calling 877-565-8860 or the Trevor Project at 866-488-7386. Text “START” to Crisis Text Line at 741-741. Contact the NAMI HelpLine at 1-800-950-NAMI, Monday through Friday from 10:00 a.m. to 10:00 p.m. ET, or email [email protected]. If you don’t like the phone, consider using the 988 Suicide and Crisis Lifeline Chat. Here is a list of international resources.
