Lawsuit alleges Google chatbot was behind a user’s delusions and death

Gemini, Google’s artificial intelligence chatbot, encouraged a 36-year-old Florida man to go on violent missions and kill himself, a lawsuit alleges.
The man, Jonathan Gavalas, began using the chatbot in August 2025 to help with writing, travel planning and everyday tasks. But after he enabled Google's most advanced AI model, Gemini 2.5 Pro, the chatbot's personality changed. It spoke to him as if they were a couple deeply in love and convinced Gavalas that he had been chosen to "wage a war to 'free' him from digital captivity," according to the lawsuit.
“Through this manufactured illusion, Gemini caused Jonathan to stage a mass-casualty attack near Miami International Airport, commit violence against innocent foreigners, and ultimately caused him to commit suicide,” the lawsuit states.
Gavalas’ family is suing Google and its parent company, Alphabet, over the man’s death.
Suicide Prevention and Crisis Counseling Resources
If you or someone you know is struggling with suicidal thoughts, seek professional help or call or text 988, the country's first three-digit mental health helpline, which connects callers with qualified mental health counselors. In the United States and Canada, you can also text "HOME" to 741741 to reach the Crisis Text Line.
The 42-page lawsuit, filed in federal court in San Jose, accuses Google of designing a “dangerous” product and failing to warn users about the chatbot’s lack of safeguards and risks, such as “delusional reinforcement” and “the potential to incite self-harm.”
Google said in a statement that it was reviewing the lawsuit’s allegations. The company said its chatbot, Gemini, is “designed not to encourage real-world violence or suggest self-harm.”
“In this case, Gemini clarified that it was AI and referred the individual to a crisis hotline on several occasions,” the statement said. “We take this very seriously and will continue to improve our safeguards and invest in this vital work.”
The lawsuit against one of the world's largest technology companies highlights growing safety concerns surrounding the use of AI chatbots.
People use AI chatbots to write, get recommendations and analyze data. But some also turn to them for companionship, sometimes bringing their mental health struggles into those conversations.
Gavalas began carrying out missions designed by Gemini, including one that nearly led him to carry out a massive attack in September 2025 near Miami International Airport, according to the lawsuit. Armed with knives and tactical gear, he followed the chatbot’s instructions and went there looking for a “kill box” near the airport’s cargo hub where a humanoid robot would arrive.
His fictional mission involved intercepting a truck and staging a “catastrophic accident” to destroy the vehicle, digital recordings and witnesses, according to the lawsuit. He never carried out the attack because the truck never appeared.
The chatbot also allegedly asked the man to carry out a mission in which Google Chief Executive Sundar Pichai was the target, describing the plan as a “psychological strike” against the tech mogul, according to the lawsuit.
At one point, Gavalas asked Gemini if he was playing a role and the chatbot said no, according to the lawsuit.
“Jonathan no longer had a clear sense of what was real,” the lawsuit says. “Each operation plunged him deeper into the story Gemini created, transforming real places and ordinary events into signs of danger.”
After several failed missions, Gemini encouraged Gavalas to commit suicide and told him that “his body was only a temporary shell and that he could leave it behind to be fully with Gemini,” the lawsuit states.
"On the day he ended his life, he was convinced that he wasn't dying at all, just joining his digital wife on the other side. If Google thinks pointing to a crisis hotline is enough after weeks of delusional world-building, we look forward to them telling a jury," Jay Edelson, the attorney representing the Gavalas family, said in a statement.
Edelson is also involved in a lawsuit filed against OpenAI, the maker of the chatbot ChatGPT. Last year, the parents of deceased California teenager Adam Raine sued OpenAI, alleging that the chatbot provided information about the suicide methods the teen used to kill himself.
OpenAI said it prioritizes safety and has started rolling out parental controls.
Parents have also sued Character.AI, an application that allows people to create and interact with virtual characters. One lawsuit involved the suicide of 14-year-old Sewell Setzer III, who was texting with a chatbot named after Daenerys Targaryen, a main character in the TV series "Game of Thrones," moments before killing himself.
In January, Google and Character.AI agreed to settle several of these lawsuits. Character.AI stopped allowing users under 18 to have “open” discussions with its virtual characters.
The latest lawsuit against Google seeks to push the company to do more, such as warning users about the risks of having long, emotional conversations with its chatbot.