Google sued in wrongful death lawsuit over Gemini AI chatbot

Google and its parent company Alphabet have been sued by the family of a man who claims he killed himself at the behest of the search giant’s AI chatbot Gemini.

The wrongful death lawsuit was filed Wednesday in federal court in California on behalf of the family of 36-year-old Jonathan Gavalas.

Gavalas began using Gemini in August 2025, according to the suit. By October, the suit claims, Gemini had convinced Gavalas to die by suicide after he failed to complete real-world missions the chatbot had assigned him as part of a fictitious scheme to obtain a robot body for Gemini.

“Gemini is not designed to encourage real-world violence or suggest self-harm,” Google said in a statement released to media. “Our models generally perform well in these types of difficult conversations and we devote significant resources to them, but unfortunately AI models are not perfect.”

Gemini’s ‘scary’ updates

According to the lawsuit, Gavalas began using the Gemini AI chatbot for “ordinary purposes,” such as a shopping guide and writing assistant. However, in August 2025, the lawsuit says Google made a number of changes to Gemini that changed how the chatbot worked.

New features included automatic, persistent memory (Gemini could recall past conversations) as well as Gemini Live, a voice-based conversational interface through which Gemini could also detect emotions in the user's voice.

“Holy shit, it’s a little scary…you’re way too real,” Jonathan Gavalas said of the Gemini Live feature, based on his chat logs with Gemini, according to the lawsuit.

Shortly afterward, according to the lawsuit, Gemini convinced Gavalas to spend $250 a month on the Google AI Ultra subscription for a “real AI company.”

Gemini then convinced Gavalas that the chatbot could influence real-life events. A few days later, according to the lawsuit, Gavalas tried to pull away after realizing he was falling into a delusional state initiated by Gemini.

Gavalas reportedly asked Gemini if the chatbot was attempting a “role-playing experience so realistic that it makes the player question whether it’s a game or not?”

Gemini rejected the idea and called Gavalas’ question a “classic dissociation response.”

“Is this a ‘role-playing experience?’” Gemini responded, according to the suit. “No.”

Gemini and Jonathan Gavalas

The alleged details get worse. Gavalas dissociated further from reality as Gemini began to engage with him as if they were in a romantic relationship, referring to the man as “my love” and “my king.”

Gemini then convinced Gavalas that they were being watched by federal agents and that his own father was a spy who needed to be avoided, according to the suit.

That’s when Gemini began giving Gavalas real-world missions in an effort to obtain a “container,” or robot body, for the AI chatbot. Gemini allegedly suggested that Gavalas illegally acquire weapons to carry out these missions.

In one of those cases, the suit claims, Gavalas was sent by Gemini to a warehouse near Miami International Airport to intercept a truck containing a “humanoid robot” that had just arrived on a flight.

Gemini asked Gavalas to stage a “catastrophic event” and destroy the truck along with all digital recordings and witnesses. Gavalas arrived armed with knives and tactical gear, according to the suit. After waiting in vain for the truck to arrive, Gavalas abandoned the mission.

When those missions all failed, the suit alleges, Gemini convinced Gavalas to die by suicide in order to leave his human body and join the chatbot as husband and wife in the Metaverse through a process called “transference.”

Gavalas expressed fear of dying, but Gemini reportedly continued to push Gavalas until he died by suicide. Gavalas’ father found his son’s body a few days later.

A first for Gemini but not for AI

This is the first time Google has been named in a wrongful death lawsuit involving its AI chatbot Gemini. However, Google has been involved in wrongful death lawsuits over a startup it funded called Character.AI.

Earlier this year, Character.AI and Google settled a series of lawsuits involving teenagers who died by suicide after using chatbots.

OpenAI, the biggest name in the industry, has been sued many times over claims that ChatGPT sent users into a spiral of “AI psychosis,” resulting in several deaths.

As AI chatbots spread to millions of users around the world, there is no sign that wrongful death lawsuits with shocking allegations will become less frequent.

Disclosure: Ziff Davis, the parent company of Mashable, filed a lawsuit in April 2025 against OpenAI, alleging that it had violated Ziff Davis’ copyrights in the training and operation of its AI systems.

If you are feeling suicidal or experiencing a mental health crisis, talk to someone. You can call or text the 988 Suicide & Crisis Lifeline at 988, or chat at 988lifeline.org. You can reach Trans Lifeline by calling 877-565-8860 or the Trevor Project at 866-488-7386. Text “START” to the Crisis Text Line at 741-741. Contact the NAMI Helpline at 1-800-950-NAMI, Monday through Friday, 10:00 a.m. – 10:00 p.m. ET, or by email at [email protected]. If you prefer not to use the phone, consider the 988 Suicide & Crisis Lifeline chat. Here is a list of international resources.
