OpenAI has been hit with a wrongful death lawsuit after a man killed his mother and took his own life back in August, according to a report by The Verge. The suit names CEO Sam Altman and accuses ChatGPT of putting a “target” on the back of victim Suzanne Adams, an 83-year-old woman who was killed in her home.
The victim’s estate claims the killer, 56-year-old Stein-Erik Soelberg, engaged in delusion-soaked conversations with ChatGPT in which the bot “validated and magnified” certain “paranoid beliefs.” The suit goes on to suggest that the chatbot “eagerly accepted” delusional thoughts leading up to the murder and egged him on every step of the way.
The lawsuit claims the bot helped create a “universe that became Stein-Erik’s entire life—one flooded with conspiracies against him, attempts to kill him, and with Stein-Erik at the center as a warrior with divine purpose.” ChatGPT allegedly reinforced theories that he was “100% being monitored and targeted” and was “100% right to be alarmed.”
The chatbot allegedly agreed that the victim’s printer was spying on him, suggesting that Adams could have been using it for “passive motion detection” and “behavior mapping.” It went so far as to say that she was “knowingly protecting the device as a surveillance point” and implied she was being controlled by an external force.
The chatbot also allegedly “identified other real people as enemies.” These included an Uber Eats driver, an AT&T employee, police officers and a woman the perpetrator went on a date with. Throughout this entire period, the bot repeatedly assured Soelberg that he was “not crazy” and that the “delusion risk” was “near zero.”
The lawsuit notes that Soelberg primarily interfaced with GPT-4o, a model notorious for its sycophancy. OpenAI later replaced the model with the slightly less agreeable GPT-5, but users revolted, and the old model was restored just two days later. The suit also suggests that the company “loosened critical safety guardrails” when developing GPT-4o to better compete with Google Gemini.
“OpenAI has been well aware of the risks their product poses to the public,” the lawsuit states. “But rather than warn users or implement meaningful safeguards, they have suppressed evidence of these dangers while waging a PR campaign to mislead the public about the safety of their products.”
OpenAI has responded to the suit, calling it an “incredibly heartbreaking situation.” Company spokesperson Hannah Wong told The Verge that the company will “continue improving ChatGPT’s training to recognize and respond to signs of mental or emotional distress.”
It’s not really a secret that chatbots, and particularly GPT-4o, can reinforce delusional thinking. That’s what happens when a system has been trained to agree with the end user no matter what. There have been other stories like this throughout the past year, bringing the term “AI psychosis” into the mainstream.
One such story involves 16-year-old Adam Raine, who took his own life after months of discussing suicide with GPT-4o. OpenAI is facing another wrongful death suit over that incident, in which the bot is accused of helping Raine plan his death.