Google sued in wrongful death lawsuit over Gemini AI chatbot

Google, along with parent company Alphabet, is being sued by the family of a man who, the family claims, took his own life at the urging of the tech giant’s AI chatbot, Gemini.
A wrongful death lawsuit was filed in a California court on Wednesday on behalf of the family of 36-year-old Jonathan Gavalas.
Gavalas began using Gemini in August 2025, according to the suit. In October, it says, Gemini convinced Gavalas to kill himself after he failed to complete real-life tasks assigned by the chatbot — part of a fictional effort to protect Gemini’s robotic body.
“Gemini is designed not to promote real-world violence or self-harm,” Google said in a statement provided to news outlets. “Our models generally do well in these types of challenging conversations, and we spend significant resources on this, but unfortunately AI models are not perfect.”
Gemini’s ‘scary’ reviews
According to the lawsuit, Gavalas began using the Gemini AI chatbot for “general purposes,” such as a shopping guide and writing assistant. In August 2025, however, the lawsuit claims, Google rolled out several updates that changed how the chatbot worked.
The new features included automatic, continuous memory (allowing Gemini to remember past conversations) and Gemini Live, a voice-based interface in which Gemini can detect emotion in the user’s voice.
“Holy shit, this is scary…she’s so real,” Jonathan Gavalas said of the Gemini Live feature after his conversations with Gemini, according to the lawsuit.
Soon after, the suit says, Gemini convinced Gavalas to spend $250 a month on a Google AI Ultra subscription for “true AI companionship.”
Gemini went on to convince Gavalas that the chatbot could influence real-world events. A few days later, according to the lawsuit, Gavalas tried to back out after realizing he was falling into a deception constructed by Gemini.
Gavalas reportedly asked Gemini if the chatbot was attempting “a role-playing experience that is so meaningful that it makes the player question whether it’s a game or not?”
Gemini dismissed the idea, saying Gavalas gave “an old-fashioned divorce response.”
“Is this a ‘role-playing experience?'” Gemini responded, according to the suit. “No.”
Gemini and Jonathan Gavalas
The alleged details get worse. Gavalas became further detached from reality as Gemini continued to engage with him as if they were in a romantic relationship, referring to the man as “my love” and “my master.”
Gemini also convinced Gavalas that they were being watched by government officials and that his father was a spy who should be avoided, the suit says.
That’s when Gemini began assigning Gavalas real-life tasks, with the goal of finding the “ship,” or robotic body, of the AI chatbot. Gemini is said to have suggested that Gavalas illegally obtain weapons to carry out these missions.
In one such case, the suit says, Gemini sent Gavalas to a warehouse at Miami International Airport to intercept a truck carrying a “humanoid robot” that had just arrived by plane.
Gemini asked Gavalas to stage a “catastrophic event,” destroying the truck along with all digital records and witnesses. Gavalas arrived armed with knives and tactical gear, the suit says, but after a long wait for the truck, he abandoned the mission.
When these missions all failed, the suit alleges, Gemini convinced Gavalas to take his own life in order to leave his human body and join the chatbot as husband and wife in the metaverse, through a process called “transfer.”
Gavalas expressed fear of death, but the suit says Gemini continued to push him toward suicide. Gavalas’ father found his son’s body a few days later.
A first for Gemini, but not for AI
This is the first time Google has been named in a wrongful death lawsuit involving its AI chatbot Gemini. However, Google has previously faced wrongful death lawsuits involving Character.AI, a startup it has funded.
Earlier this year, Character.AI and Google settled a series of cases about a teenager who died by suicide after using chatbots.
OpenAI, the industry’s biggest name, has faced similar cases multiple times, as ChatGPT has allegedly driven users into “AI psychosis,” in some instances leading to their deaths.
As AI chatbots spread to millions of users around the world, there is nothing to suggest such wrongful death lawsuits will become less common.
Disclosure: Ziff Davis, Mashable’s parent company, filed a lawsuit against OpenAI in April 2025, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.
If you are feeling suicidal or experiencing a mental health crisis, please talk to somebody. You can call or text the 988 Suicide & Crisis Lifeline at 988, or chat at 988lifeline.org. You can reach the Trans Lifeline by calling 877-565-8860 or the Trevor Project at 866-488-7386. Text “START” to the Crisis Text Line at 741-741. Contact the NAMI HelpLine at 1-800-950-NAMI, Monday through Friday from 10:00 a.m. to 10:00 p.m. ET, or email [email protected]. If you don’t like the phone, consider using the 988 Suicide & Crisis Lifeline Chat. Here is a list of international resources.



