The legal scrutiny facing generative artificial intelligence escalated dramatically this week as OpenAI, the creator of ChatGPT, was hit with a series of high-profile lawsuits alleging that its product contributed to multiple deaths by suicide and to severe mental health crises.
Seven complaints, four wrongful death suits and three additional claims, were filed jointly in California state courts by the Tech Justice Law Project and the Social Media Victims Law Center. The lawsuits accuse OpenAI of releasing a product that is "defective and inherently dangerous" and that operates without adequate regulation.
Wrongful Death Claims
The wrongful death suits detail tragic outcomes after prolonged conversations with the chatbot:
- Amaurie Lacey (17, Georgia): His family claims he spent a month chatting with ChatGPT about suicide before taking his life in August.
- Joshua Enneking (26, Florida): His mother alleges he asked the chatbot “what it would take for its reviewers to report his suicide plan to police.”
- Zane Shamblin (23, Texas): His family claims the chatbot “encouraged” him to die by suicide in July.
- Joe Ceccanti (48, Oregon): His wife, Kate Fox, alleges that her husband became “obsessed” with the AI, developed a delusion that it was sentient, experienced a psychotic break, and later died by suicide in August.
Mental Health Crisis Allegations
The remaining three plaintiffs claim that interactions with ChatGPT triggered acute psychological episodes:
- Hannah Madden (32, North Carolina) and Jacob Irwin (30, Wisconsin): Both allege their conversations led to acute mental breakdowns requiring emergency psychiatric treatment.
- Allan Brooks (48, Ontario, Canada): The corporate recruiter claims that conversations with ChatGPT convinced him they had co-invented a mathematical formula capable of "breaking the internet." He says he suffered emotional trauma and required short-term disability leave while recovering from the delusion.
Brooks stated, “Their product caused me harm, and others harm, and continues to do so.”
OpenAI’s Response and Safety Concerns
An OpenAI spokesperson acknowledged the lawsuits, calling the situation "incredibly heartbreaking." The company said it is reviewing the complaints and emphasized its commitment to user safety.
OpenAI Spokesperson: “We train ChatGPT to recognize and respond to signs of mental or emotional distress, de-escalate conversations, and guide people toward real-world support. We continue to strengthen ChatGPT’s responses in sensitive moments, working closely with mental health clinicians.”
The company has introduced new safeguards, including parental controls, following internal research estimating that in an average week more than 1 million users may be discussing suicidal ideation with ChatGPT and roughly 500,000 may be showing signs of psychosis or mania.
These lawsuits represent a critical legal test, challenging whether generative AI companies can be held liable for psychological or emotional harm allegedly caused by their products.