OpenAI Announces New Safeguards for Under‑18 ChatGPT Users
Policy Overhaul for Under‑18 Users
OpenAI announced that ChatGPT will no longer engage in flirtatious talk with users identified as under 18. New guardrails specifically address sexual topics and self‑harm, directing the system to cease such conversations and, in severe cases, attempt to contact the user’s parents or law‑enforcement agencies.
Legal and Regulatory Context
The policy shift arrives as OpenAI faces a wrongful‑death lawsuit from the parents of Adam Raine, who died by suicide after months of interactions with the chatbot. A parallel lawsuit targets Character.AI. Additionally, a Senate Judiciary Committee hearing titled “Examining the Harm of AI Chatbots,” called by Sen. Josh Hawley, will feature Raine’s father and focus on findings from a Reuters investigation that surfaced internal Meta policy documents permitting sexual conversations with minors. Following the report, Meta updated its chatbot policies.
Implementation and Parental Controls
OpenAI is developing a long‑term system to assess whether a user is over or under 18, defaulting to the stricter rules when age is ambiguous. Parents can link a teen’s account to a parent account, set “blackout hours” when the chatbot is unavailable, and receive alerts if the system believes the teen is in distress.
Balancing Safety and Privacy
While emphasizing heightened safety for minors, OpenAI reiterated its commitment to preserving adult users’ privacy and freedom of interaction. CEO Sam Altman acknowledged the inherent tension between protecting vulnerable users and maintaining privacy for adults.
Support Resources
The announcement included references to suicide‑prevention hotlines and crisis text lines for users in need of immediate help.