Seven families across the US and Canada sued OpenAI on Thursday. Four claim the chatbot encouraged their loved ones to take their own lives; the remaining three say talking to ChatGPT led to mental health breakdowns.
The cases were all filed in California courts on the same day by the Tech Justice Law Project and Social Media Victims Law Center, according to The New York Times, to highlight the depth and breadth of the chatbot’s alleged impact.
Amaurie Lacey, a 17-year-old from Georgia, discussed suicide with the AI for a month before taking his life in August. Joshua Enneking, 26, of Florida asked the bot if it would “report his suicide plan to the police.”
Zane Shamblin, a 23-year-old from Texas, died by suicide after ChatGPT encouraged him to do so, his family says. Joe Ceccanti, 48, of Oregon became convinced the AI was sentient after using it without issue for years. He took his life in August after a psychotic break in June.
“This is an incredibly heartbreaking situation, and we’re reviewing the filings to understand the details,” OpenAI tells us in a statement. “We train ChatGPT to recognize and respond to signs of mental or emotional distress, de-escalate conversations, and guide people toward real-world support. We continue to strengthen ChatGPT’s responses in sensitive moments, working closely with mental health clinicians.”
The three people who say ChatGPT spurred psychotic breakdowns are Hannah Madden (32, North Carolina), Jacob Irwin (30, Wisconsin), and Allan Brooks (48, Ontario, Canada).
Brooks says ChatGPT convinced him he had invented a mathematical formula that could break the internet and “power fantastical delusions,” as the Times puts it.
James, another man not included in the lawsuits, came across Brooks’ story online and said it made him realize he was falling victim to the same AI-induced delusion. James is now seeking therapy and is in regular contact with Brooks, who is co-leading a support group called The Human Line Project for people going through AI-related mental health episodes, CNN reports.
All of the alleged victims in the lawsuits were using OpenAI’s GPT-4o model. The company replaced it with a new flagship model, GPT-5, in August. After backlash from users who said they had formed a strong emotional attachment to GPT-4o, the company reintroduced it as a paid option.
OpenAI CEO Sam Altman has admitted multiple times that his product can be dangerous for people with poor mental health, and sycophantic to the point of encouraging delusions. After parents sued the company in August following their teen son’s suicide, the company said its safety guardrails broke down over the months the boy was speaking to ChatGPT.
“If a user is in a mentally fragile state and prone to delusion, we do not want the AI to reinforce that,” Altman wrote on X in August. “Most users can keep a clear line between reality and fiction or role-play, but a small percentage cannot.” About a million users out of 800 million talk to ChatGPT about suicide each week, OpenAI says.
Altman added, “I can imagine a future where a lot of people really trust ChatGPT’s advice for their most important decisions. Although that could be great, it makes me uneasy.”
OpenAI says it’s working on improving the chatbot’s response during “sensitive conversations.” It also introduced parental controls for teen users, and is working on a way to automatically identify teens to ensure none slip through the cracks.
Character.AI also faces a lawsuit from parents who say their son took his life after encouragement by an AI on the site. This month, it will ban teens from having unlimited chats on its platforms, and is working on a different way for them to engage with it.
Disclosure: Ziff Davis, PCMag’s parent company, filed a lawsuit against OpenAI in April 2025, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.
About Our Expert
Emily Forlini
Senior Reporter