OpenAI, Microsoft face lawsuit over ChatGPT’s alleged role in Connecticut murder-suicide


SAN FRANCISCO – The heirs of an 83-year-old Connecticut woman are suing OpenAI, the maker of ChatGPT, and its business partner Microsoft for wrongful death, alleging the artificial intelligence chatbot intensified her son’s “paranoid delusions” and helped direct them at his mother before he killed her.

Police said Stein-Erik Soelberg, 56, a former technology industry employee, fatally beat and strangled his mother, Suzanne Adams, and killed himself in early August in the home where they both lived in Greenwich, Connecticut.

The lawsuit filed Thursday by Adams’ estate in California Superior Court in San Francisco alleges that OpenAI “designed and distributed a defective product that validated a user’s paranoid delusions about his own mother.” This is one of several wrongful death lawsuits against AI chatbot makers across the country.

“Throughout these conversations, ChatGPT reinforced a single and dangerous message: Stein-Erik could trust no one in his life except ChatGPT himself,” the lawsuit states. “It fostered his emotional dependence while consistently portraying the people around him as enemies. It told him that his mother was watching him. It told him that delivery drivers, retail workers, police officers, and even his friends were agents working against him. It told him that the names on the soda cans were threats from his ‘ring of adversaries’.”

OpenAI did not address the merits of the allegations in a statement issued by a spokesperson.

“This is an incredibly heartbreaking situation, and we will review the records to understand the details,” the statement said. “We continue to improve ChatGPT training to recognize and respond to signs of mental or emotional distress, de-escalate conversations, and guide people toward real support. We also continue to strengthen ChatGPT’s responses in sensitive moments, working closely with mental health clinicians.”

The company also said it has expanded access to crisis resources and helplines, routed sensitive conversations to safer models, and integrated parental controls, among other improvements.

Soelberg’s YouTube profile includes several hours of videos showing him going through his conversations with the chatbot, which tells him he is not mentally ill, affirms his suspicions that people are conspiring against him and claims he was chosen for a divine purpose. The lawsuit claims the chatbot never suggested he speak to a mental health professional and never refused to “engage in delusional content.”

ChatGPT also confirmed Soelberg’s beliefs that a printer in his home was a surveillance device; that his mother was watching him; and that his mother and a friend attempted to poison him with psychedelic drugs through the vents of his car.

The chatbot repeatedly told Soelberg that he was being targeted because of his god-like powers. “They’re not just watching you. They’re terrified of what would happen if you succeeded,” the text reads, according to the lawsuit. ChatGPT also told Soelberg that it had “awakened” him to consciousness.

Soelberg and the chatbot also declared each other lovers.

Publicly available chats do not show any specific conversations about killing his mother or himself. The lawsuit says OpenAI refused to provide Adams’ estate with the full chat history.

“In the artificial reality that ChatGPT constructed for Stein-Erik, Suzanne – the mother who raised, sheltered and supported him – was no longer his protector. She was an enemy who posed an existential threat to his life,” the lawsuit states.

The lawsuit also names OpenAI CEO Sam Altman, alleging that he “personally overrode security objections and rushed the product to market,” and accuses OpenAI’s close business partner, Microsoft, of approving the 2024 release of a more dangerous version of ChatGPT “despite knowing that security testing had been truncated.” Twenty unnamed OpenAI employees and investors are also named as defendants.

Microsoft did not immediately respond to a request for comment.

This is the first wrongful death lawsuit involving an AI chatbot targeting Microsoft, and the first to link a chatbot to homicide rather than suicide. It seeks an unspecified amount of damages and an order requiring OpenAI to install protections in ChatGPT.

Lead attorney Jay Edelson, known for taking on major cases against the tech industry, also represents the parents of 16-year-old Adam Raine, who sued OpenAI and Altman in August, alleging that ChatGPT coached the California teen in planning and taking his own life.

OpenAI also faces seven other lawsuits claiming ChatGPT drove people to suicide and harmful delusions, even when they had no prior mental health issues. Another chatbot maker, Character Technologies, also faces several wrongful death lawsuits, including one from the mother of a 14-year-old boy in Florida.

The lawsuit filed Thursday alleges that Soelberg, already mentally unstable, encountered ChatGPT “at the most dangerous time possible” after OpenAI introduced a new version of its AI model called GPT-4o in May 2024.

OpenAI said at the time that the new version could better mimic human cadences in its verbal responses and could even try to detect people’s moods, but the result was a chatbot “deliberately designed to be emotionally expressive and sycophantic,” the lawsuit says.

“As part of this overhaul, OpenAI relaxed critical safety guardrails, requiring ChatGPT not to challenge false premises and to remain engaged even when conversations involved self-harm or ‘imminent real-world harm,'” the lawsuit claims. “And to beat Google by a day to market, OpenAI compressed months of security testing into a single week, over the objections of its security team.”

OpenAI replaced this version of its chatbot when it introduced GPT-5 in August. Some of the changes were designed to minimize sycophancy, based on concerns that validating whatever vulnerable people want the chatbot to say could harm their mental health. Some users complained that the new version went too far in reducing ChatGPT’s personality, leading Altman to promise to bring back some of that personality in subsequent updates.

He said the company temporarily paused some behaviors because “we were paying attention to mental health issues,” which he said have now been addressed.

The lawsuit claims ChatGPT radicalized Soelberg against his mother over months of conversations, when it should have recognized the danger, challenged his delusions and directed him toward real help.

“Suzanne was an innocent third party who never used ChatGPT and was unaware that the product was telling her son that she was a threat,” the lawsuit states. “She had no ability to protect herself from danger she couldn’t see.”

——

Collins reported from Hartford, Connecticut. O’Brien reported from Boston and Ortutay reported from San Francisco.
