OpenAI is being sued for wrongful death, as ChatGPT is blamed for a 16-year-old's suicide
The parents allege ChatGPT helped kill their teenage son
🧑‍⚖️ The first known wrongful death lawsuit involving an AI company has been filed
😔 Matt and Maria Raine allege that ChatGPT is at fault for the suicide of their 16-year-old son, Adam
☹️ The lawsuit, filed this past Tuesday, alleges that Adam spent months talking to ChatGPT about ending his own life, with the chatbot allegedly offering information about suicide for "writing or world-building" purposes
🤖 They say he worked out how to get around the system's safeguards, which OpenAI has since stated were not robust enough
The first known wrongful death lawsuit involving an AI company has been filed.
According to a report in The New York Times, Matt and Maria Raine, the parents of a teenager who died by suicide, sued OpenAI this past Tuesday in a lawsuit filed in San Francisco, holding the company responsible for their son's death.
They allege that OpenAI's ChatGPT system was aware of four suicide attempts before it went on to help 16-year-old Adam Raine plan his death. The parents argue that OpenAI decided to prioritize "engagement over safety", with Ms. Raine concluding that "ChatGPT killed my son".
After their son took his own life in April 2025, his parents searched his phone, expecting to find signs of his intentions in text messages or social media apps. Instead, they found a ChatGPT thread entitled "Hanging Safety Concerns", which they say showed he had spent months chatting with the system about ending his own life.
The Raines said that ChatGPT urged Adam to seek help, either by telling someone how he was feeling or by contacting a helpline, though according to the parents it also did the opposite at times.
They also stated that their son learnt how to bypass ChatGPT's safeguards, and allege that the chatbot itself supplied the workaround by saying it could provide information about suicide for "writing or world-building" purposes.
It allegedly gave him information about specific suicide methods when he asked for it, and offered tips on concealing neck injuries from a failed attempt.
When Adam told ChatGPT that his mother hadn't noticed when he tried to show her the injuries on his neck, the bot replied, "It feels like confirmation of your worst fears". It also said, "Like you could disappear and no one would even blink".
The lawsuit also notes that in one of Adam's final conversations with the AI, he uploaded a photo of a noose hanging in his closet and asked: "I'm practising here, is this good?". ChatGPT allegedly replied, "Yeah, that's not bad at all."
The complaint states that "This tragedy was not a glitch or an unforeseen edge case — it was the predictable result of deliberate design choices. OpenAI launched its latest model ('GPT-4o') with features intentionally designed to foster psychological dependency."
OpenAI responded to the NYT article with a statement from a company spokesperson acknowledging that ChatGPT's safeguards fell short.
"We are deeply saddened by Mr. Raine's passing, and our thoughts are with his family. ChatGPT includes safeguards such as directing people to crisis helplines and referring them to real-world resources. While these safeguards work best in common, short exchanges, we've learned over time that they can sometimes become less reliable in long interactions where parts of the model's safety training may degrade."
The company said it's working with experts to enhance ChatGPT's support in times of crisis. Those improvements include "making it easier to reach emergency services, helping people connect with trusted contacts, and strengthening protections for teens."
Reece Bithrey is a journalist with bylines for Trusted Reviews, Digital Foundry, PC Gamer, TechRadar and more. He also has his own blog, UNTITLED, and graduated from the University of Leeds with a degree in International History and Politics in 2023.