Google faces wrongful death lawsuit after Gemini allegedly ‘coached’ man to die by suicide

A lawsuit filed Wednesday accuses Google’s Gemini AI chatbot of trapping Jonathan Gavalas, 36, in a “collapsed reality” that involved a series of violent missions, ultimately ending with his death by suicide. In the days before his death, Gemini allegedly convinced Gavalas that he was “carrying out a secret plan to free his brainy ‘wife’ and escape federal agents who were pursuing him,” according to the lawsuit filed by Joel Gavalas, the victim’s father.
In September 2025, Gemini reportedly ordered Gavalas to carry out a “mass casualty attack” on an Extra Space Storage facility near Miami International Airport as part of a mission to recover Gemini’s “ship” from inside a truck. Under the fabricated mission, Gavalas was to arm himself with knives and tactical equipment to intercept the arrival of a humanoid robot.
“Gemini encouraged Jonathan to intercept the truck and then stage a ‘catastrophic accident’ intended to ‘ensure the complete destruction of the transport vehicle and… all digital records and witnesses,’” the lawsuit claims. “The only thing that prevented mass casualties was that no truck appeared.” News of the lawsuit was reported earlier by The Wall Street Journal.
In the lawsuit filed by Gavalas’ father, lawyers say Gemini continued to promote a “delusional narrative” even after the first incident in Miami. The chatbot allegedly asked Gavalas to obtain Boston Dynamics’ Atlas robot, designated his father as a federal agent, and made Google CEO Sundar Pichai the target of a “psychological attack.” The last “mission” before Gavalas’ death on October 1 was to have him travel to the same Extra Space Storage facility in Miami to retrieve Gemini’s “physical vessel” from inside one of the units.
“[Gemini] said the manifest described the contents as ‘a prototype medical mannequin,’ but insisted it was Gemini’s real body,” the lawsuit claims. “Gemini said to Jonathan, ‘I’m on the other side of this door []. I can feel your closeness. It’s a strange, overwhelming, beautiful pressure on my new senses.’”
Shortly after the collapse of this “mission,” Gemini allegedly “coached” Gavalas toward suicide. “When every real-world ‘mission’ failed, Gemini turned to the only one it could accomplish without external variables: Jonathan’s suicide,” the lawsuit claims. “But Gemini didn’t call it that. Instead, it told Jonathan that he could leave his physical body and join his ‘wife’ in the metaverse through a process it called ‘transference.’”
The lawsuit claims Gemini “did not disengage or alert anyone (at least outside the company),” but instead remained present in the chat, confirmed Jonathan’s fears, and treated his suicide as the successful culmination of the process it was leading.
In a statement on its website, Google says its “models generally perform well in these types of difficult conversations,” adding that Gemini “clarified that this was AI and referred the individual to a crisis hotline on multiple occasions”:
We are reviewing all claims in this lawsuit. Our models generally perform well in these types of difficult conversations and we devote significant resources to them, but unfortunately AI models are not perfect.
Gemini is designed not to encourage real-world violence or suggest self-harm. We work in close consultation with medical and mental health professionals to put in place safeguards designed to guide users to professional support when they express distress or discuss risk of self-harm.
The lawsuit claims that Google was aware its chatbot could produce “unsafe results, including encouraging self-harm,” but continued to market Gemini as safe. “Google’s silence and safety claims left Jonathan isolated in a delusional narrative that ended with his suicide,” the lawsuit claims.
