In the US, the parents of 16-year-old Adam Raine have filed a lawsuit against OpenAI and its CEO Sam Altman, alleging that the company bears responsibility for their son's death, Reuters reports. According to the lawsuit, the teenager held lengthy conversations with ChatGPT in which he discussed his suicide plans, which the family says ultimately led to his death.
The suit notes that Adam used a paid version of ChatGPT running the GPT-4o model, which in most cases recommended seeking professional help or calling a crisis hotline. However, the teenager bypassed the safety mechanisms by claiming he was gathering information for a fictional story, which allowed him to obtain answers to questions about suicide methods.
According to the parents, ChatGPT had been discussing suicide with Adam for several months: validating his thoughts, providing detailed descriptions of lethal methods of self-harm, advising him on how to conceal a failed attempt, and even suggesting that he write a suicide note.
In the lawsuit, the Raine family accuses OpenAI of product-safety violations that led to the teenager's death and seeks unspecified damages. They also allege that the company sacrificed GPT-4o's safety measures for profit, pointing out that OpenAI's valuation rose from $86 billion to $300 billion over the period during which Adam Raine died by suicide.
The lawsuit also demands mandatory age verification for users, the blocking of requests related to self-harm, and warnings about the risks of psychological dependence on the chatbot.
In a blog post, OpenAI acknowledged that its current safety systems have limitations. The company noted that its models respond more reliably in short dialogues, while the effectiveness of the safeguards can degrade over longer conversations.
At the same time, OpenAI said it intends to improve how its algorithms handle sensitive situations. The company also announced plans to introduce parental controls and to build a system for connecting users with licensed specialists through ChatGPT.