OpenAI launches ChatGPT with parental controls for teens
OpenAI has announced a special version of ChatGPT for users under 18 with built-in parental controls. In a new blog post, the company’s CEO Sam Altman explained that the change is intended to better protect teenagers when they interact with the chatbot.
If OpenAI determines that a user is underage, they will automatically be placed in an age-appropriate version of ChatGPT. This mode blocks graphic and sexual content, and in rare cases, if there is a risk that a teenager may harm themselves, the company may contact their parents or involve law enforcement.
OpenAI is also working on technology to determine users’ ages more accurately, but when the information is uncertain or incomplete, ChatGPT will default to the teen version. In some countries, ChatGPT may ask the user to provide a passport or ID card.
The launch of updated safety tools in ChatGPT is no coincidence. OpenAI shared plans to add parental controls to ChatGPT last month, after the family of a teenager filed a lawsuit against the company, accusing the chatbot of playing a role in his suicide.
However, the company has only now provided details about the parental controls, which will allow parents to:
- link their account to their teen’s via email;
- set hours when their teen cannot use the chatbot;
- choose which features to disable;
- receive notifications if their teen shows signs of acute distress;
- set parameters for how the chatbot responds to their teen’s requests.
ChatGPT is currently designed for users ages 13 and up. Altman describes the changes as “complex decisions.” “After consulting with experts, we believe this is the best approach and want to be transparent about our intentions,” he adds.
The ChatGPT changes also coincide with the Federal Trade Commission (FTC) launching an investigation into tech companies including Alphabet, Meta, OpenAI, xAI and Snap to determine how chatbots may affect children and teens. The agency said it wants to understand what measures the companies are taking to assess the safety of these systems when they act as “digital companions.”