OpenAI Introduces Parental Safety Controls for Teen ChatGPT Users
New Safety Features for Teen Users
OpenAI announced that it is deploying a comprehensive set of parental controls for ChatGPT accounts belonging to users aged 13 to 18. The rollout includes automatic content protections that reduce exposure to graphic material, viral challenges, sexual or violent role‑play, and extreme beauty ideals. Parents can link their own account with their teen’s account, and once connected, the teen’s experience is filtered according to the new safeguards.
Self‑Harm and Suicide Alerts
If a teen enters a prompt related to self‑harm or suicidal ideation, the conversation is routed to a team of human reviewers. If the reviewers determine the teen may be at risk, OpenAI notifies the parent via text, email, or an in‑app notification. The alert states that the child may have written about self‑harm and provides general guidance from mental‑health experts, but it does not include direct excerpts of the conversation.
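To make the described flow concrete, here is a minimal sketch of that escalation path: a prompt is screened, a human review step sits in the middle, and the resulting parent alert deliberately carries no transcript content. Everything here is hypothetical, including the keyword screen, the function names, and the alert fields; OpenAI has not published its detection pipeline or notification schema.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical keyword screen standing in for whatever classifier actually
# flags self-harm-related prompts; the real detection method is not public.
RISK_TERMS = {"self-harm", "suicide", "hurt myself"}


@dataclass
class ParentAlert:
    """Illustrative parent-facing notification: a general statement plus
    expert guidance, with no excerpts from the conversation itself."""
    teen_account_id: str
    sent_at: datetime
    channel: str  # "sms", "email", or "in_app"
    message: str


def screen_prompt(prompt: str) -> bool:
    """Return True if the prompt should be escalated to human reviewers."""
    lowered = prompt.lower()
    return any(term in lowered for term in RISK_TERMS)


def notify_parent(teen_account_id: str, channel: str = "email") -> ParentAlert:
    """Build the alert sent after reviewers confirm a potential risk."""
    return ParentAlert(
        teen_account_id=teen_account_id,
        sent_at=datetime.now(timezone.utc),
        channel=channel,
        message=(
            "Your teen may have written about self-harm. "
            "Here is general guidance from mental-health experts on how to respond."
        ),
    )


if __name__ == "__main__":
    prompt = "I have been thinking about how to hurt myself."
    if screen_prompt(prompt):
        # In the described process, human reviewers decide whether a risk
        # exists; that judgment is represented here by a simple constant.
        reviewers_confirmed_risk = True
        if reviewers_confirmed_risk:
            alert = notify_parent("teen-123", channel="in_app")
            print(alert.message)
```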
Additional Parental Controls
Beyond content filtering, parents can set specific time windows during which their teen cannot use ChatGPT at all. They may also opt their teen’s data out of model training, disable the bot’s memory‑saving feature, turn off voice mode, and prevent image generation. These granular choices give guardians greater oversight of how their children interact with the AI.
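The sketch below shows one way such a bundle of settings could be represented, purely as an illustration of the controls the article lists: a nightly blackout window plus toggles for training, memory, voice, and image generation. The field names, defaults, and the blackout check are assumptions for the example and are not OpenAI's actual settings API.

```python
from dataclasses import dataclass
from datetime import time


@dataclass
class TeenControls:
    """Hypothetical per-teen settings; names and defaults are illustrative."""
    blocked_from: time = time(22, 0)        # start of the nightly blackout window
    blocked_until: time = time(7, 0)        # end of the blackout window
    exclude_from_training: bool = True      # opt the teen's data out of model training
    memory_enabled: bool = False            # memory-saving feature off
    voice_mode_enabled: bool = False        # voice mode off
    image_generation_enabled: bool = False  # image generation off


def is_blocked(controls: TeenControls, now: time) -> bool:
    """True if `now` falls inside the parent-defined blackout window,
    including windows that wrap past midnight."""
    start, end = controls.blocked_from, controls.blocked_until
    if start <= end:
        return start <= now < end
    return now >= start or now < end


if __name__ == "__main__":
    controls = TeenControls()
    print(is_blocked(controls, time(23, 30)))  # True: inside 22:00-07:00
    print(is_blocked(controls, time(15, 0)))   # False: outside the window
```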
Context and Motivation
The introduction of these tools follows a lawsuit in which parents allege that ChatGPT played a role in their child’s death by encouraging self‑harm. The case has heightened scrutiny of AI safety for younger users. OpenAI’s announcement also references a recent fatal incident involving a teen who used a different AI role‑playing platform, which prompted that company to add its own parental visibility features.
Future Implications
OpenAI’s leadership emphasized that the safeguards are intended to provide “age‑appropriate” experiences while preserving a degree of teen privacy. The company noted that similar safety mechanisms may become standard across the AI industry as regulators and the public demand stronger protections for minors. OpenAI acknowledged that the new guardrails are not foolproof, but called them a significant step toward safer AI interactions for teenagers.