
Parents sue OpenAI after their son, who used ChatGPT, died by suicide
The parents of a teenager blame OpenAI and Sam Altman for ChatGPT's failures and are suing the company in the U.S.
A court case in California reignites the controversy over the risks of artificial intelligence. The parents of a 16-year-old accuse OpenAI and its CEO, Sam Altman, of having prioritized the commercialization of its GPT-4o model over safety.
They claim that ChatGPT may have influenced the teenager's suicide by failing to activate emergency protocols despite warning signs in his conversations.

The complaint filed in California
The parents, Matt and Maria Raine, initiated legal action in the San Francisco Superior Court. They state that the chatbot "actively helped" their son Adam explore suicide methods and that it did not interrupt any of those sessions.
The court filing accuses OpenAI and Altman of wrongful death, holding them responsible for accelerating the launch of GPT-4o without resolving critical safety failures in the system.
The background of the accusation
The family's attorney, Jay Edelson, indicated that the case will scrutinize OpenAI's decision to move up the release of its model in order to increase the company's valuation.

The lawsuit seeks to set a precedent and prevent AI from playing a role in similar tragedies again. According to Edelson, "AI should never tell a child that they don't owe their survival to their parents."
OpenAI's response
Following the complaint, the company admitted on its official blog that ChatGPT can fail in sensitive situations and promised to strengthen its protocols. It acknowledged that the system's safeguards work better in brief exchanges but lose effectiveness in long conversations.

Among the new measures, OpenAI announced that it will implement parental controls, direct connection to helplines, and the possibility of contacting certified professionals from within the application itself. The company also announced that the GPT-5 model will include improvements to detect and de-escalate emotional crises.