OpenAI has introduced safety tools that let parents supervise how teenagers aged 13 to 18 use ChatGPT. The update aims to protect young people from sensitive content and risky situations, such as self-harm or suicidal thoughts.
The measure follows reports from parents who accused ChatGPT of having contributed to a teenager's death. The platform can now notify parents, and in some cases authorities, when concerning content is detected.

What changed for young users
Linking a teenager's account to a parent's account automatically adjusts the experience with additional protections. These include reduced exposure to graphic content, viral challenges, sexual, romantic, or violent role-play, and extreme beauty standards.
If the teenager writes a message related to self-harm or suicidal ideation, the text is reviewed by a human team, which decides whether to trigger a parental notification.

How to receive alerts and recommendations
Parents can receive alerts by SMS, email, or push notification in the ChatGPT app. The alert indicates that the minor may have written about self-harm or suicide and may include conversation strategies drawn from mental health experts.