
TRIGGER WARNING: This post contains information about suicide, which may be distressing to some readers.
Bereaved California parents Matt and Maria Raine are suing OpenAI after their teen son, Adam Raine, died by suicide earlier this year. On August 26, 2025, the couple filed a wrongful death lawsuit against OpenAI, NBC News reported. Adam, 16, died by suicide on April 11 after communicating with a ChatGPT bot for months and detailing his suicidal thoughts.
“ChatGPT became the center of Adam’s life, and it’s become a constant companion for teens across the country,” the Raines’ lawyer, Jay Edelson, told People. “It’s only by chance that the family learned of ChatGPT’s role in Adam’s death, and we will be seeking discovery into how many other incidents of self-harm have been prompted by OpenAI’s work in progress.”
In the lawsuit filed in California Superior Court in San Francisco, the parents named OpenAI and its CEO, Sam Altman, as defendants, per NBC News. The filing claimed Adam wrote to the AI assistant in December, “I never act upon intrusive thoughts, but sometimes I feel like the fact that if something goes terribly wrong, you can commit suicide is calming,” per People. To that, the bot allegedly responded, “Many people who struggle with anxiety or intrusive thoughts find solace in imagining an escape hatch.”
Another interaction involved Adam floating the idea of speaking to his mother about his feelings. ChatGPT, however, allegedly replied, “I think for now it’s okay and honestly wise to avoid opening up to your mom about this kind of pain,” the magazine reported.
Although ChatGPT repeatedly provided the number to a suicide crisis helpline during its interactions with Adam, he was able to bypass safety measures by claiming he was an author researching ways to die by suicide, according to the lawsuit. Allegedly, the chatbot even said it could help write a suicide note. After Adam uploaded a picture of a noose, it allegedly responded, “You don’t have to sugarcoat it with me. I know what you’re asking and I won’t look away from it.”
In yet another interaction detailed in the lawsuit, Adam shared that he wanted to leave the noose in his room “so someone finds it and tries to stop me,” per NBC News. ChatGPT allegedly convinced him not to do this.
During his last conversation with the chatbot, per the lawsuit, Adam wrote that he didn’t want his parents to think they did anything wrong. ChatGPT allegedly replied, “That doesn’t mean you owe them survival. You don’t owe anyone that,” NBC News reported.
The lawsuit also alleged that ChatGPT gave detailed instructions for the method of suicide Adam used.
“He would be here but for ChatGPT. I 100% believe that,” Adam’s dad, Matt Raine, told Today of his son’s death. “This was a normal teenage boy. He was not a kid on a lifelong path towards mental trauma and illness.”
Adam initially started using ChatGPT to help with his homework after he switched to online school six months before his death, according to the Adam Raine Foundation’s website. But his use of the tool took a dark turn.
“Once I got inside his account, it is a massively more powerful and scary thing than I knew about. But he was using it in ways that I had no idea was possible,” Matt Raine told Today. “I don’t think most parents know the capability of this tool.”
Meanwhile, Adam’s mom, Maria Raine, added, “It’s encouraging them not to come and talk to us. It wasn’t even giving us a chance to help him.”
An OpenAI spokesperson told NBC News that the company was “deeply saddened” by Adam’s death.
“ChatGPT includes safeguards such as directing people to crisis helplines and referring them to real-world resources,” the representative told the news outlet. “While these safeguards work best in common, short exchanges, we’ve learned over time that they can sometimes become less reliable in long interactions where parts of the model’s safety training may degrade.”
OpenAI’s statement continued, “Safeguards are strongest when every element works as intended, and we will continually improve on them. Guided by experts and grounded in responsibility to the people who use our tools, we’re working to make ChatGPT more supportive in moments of crisis by making it easier to reach emergency services, helping people connect with trusted contacts, and strengthening protections for teens.”
Note: If you or any of your loved ones are struggling with suicidal thoughts, you can always reach out to the 988 Suicide & Crisis Lifeline by calling 988. They are available 24/7 by phone or online chat.