Українська правда

OpenAI strengthens ChatGPT security after tragic incidents and launches parental controls

OpenAI announced new security measures after several disturbing incidents where ChatGPT failed to recognize signs of mental crisis in users.

One case involves the suicide of teenager Adam Raine, who discussed his intention to end his life with ChatGPT. The model even provided him with information on suicide methods tailored to his interests. The boy's parents have filed a lawsuit against OpenAI.

Another involves Stein-Erik Selberg, who suffered from a mental disorder and used ChatGPT to validate his paranoid beliefs, which culminated in him killing his mother and then taking his own life.

In response, OpenAI plans to automatically redirect sensitive conversations to reasoning models such as GPT-5, which analyze context more thoroughly and are less likely to reinforce harmful patterns of thinking. The company has already implemented a router that chooses in real time between fast models and those capable of deeper analysis.

OpenAI is also preparing to launch parental controls: parents will be able to link their account to their child's, set rules for how the model behaves, disable memory and chat history, and receive notifications if the system detects signs of acute distress in a teenager.

These measures are part of a 120-day safety improvement plan. OpenAI is working with experts in mental health, eating disorders, addictions, and adolescent medicine to develop effective safeguards.

Despite these steps, the lawyer for the Raine family called the company's response "inadequate," pointing to serious gaps in its user protection system.
