
OpenAI Adds Trusted Contact Feature to ChatGPT for Adult Users

OpenAI began offering a Trusted Contact option to adult users of ChatGPT this week, extending the safety toolkit that already covers teen accounts. The feature appears in the app’s settings and lets a user nominate a single person—at least 18 years old (19 in South Korea)—who will be notified if the chatbot flags a conversation as potentially indicating self‑harm.

Setting up the contact is optional. Once a user selects a nominee, the app sends the contact an invitation that explains the role and offers a one‑week window to accept. If the invitation is declined, the user can choose someone else. The process does not share any part of the conversation; the alert simply states that self‑harm was mentioned in a concerning way and asks the contact to check in.
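For readers who think in code, the invitation flow the company describes resembles the short Python sketch below. Every name in it (Invitation, nominate, ACCEPT_WINDOW) is a hypothetical illustration of the behavior reported here, not OpenAI's actual code or API.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Illustrative model of the invitation flow described above; names and
# structure are assumptions, not OpenAI's implementation.

ACCEPT_WINDOW = timedelta(weeks=1)   # the contact has one week to accept
ADULT_AGE = 18                       # 19 in South Korea, per the article

@dataclass
class Invitation:
    contact_email: str
    sent_at: datetime
    status: str = "pending"          # pending | accepted | declined

    def expired(self, now: datetime) -> bool:
        # An unanswered invitation lapses after the one-week window.
        return self.status == "pending" and now - self.sent_at >= ACCEPT_WINDOW

def nominate(contact_email: str, contact_age: int, now: datetime) -> Invitation:
    """Send an invitation explaining the Trusted Contact role.

    Only one contact exists at a time; if the invitation is declined or
    expires, the user may nominate someone else. No conversation content
    is ever attached to the invitation or to later alerts.
    """
    if contact_age < ADULT_AGE:
        raise ValueError("a Trusted Contact must be an adult")
    return Invitation(contact_email=contact_email, sent_at=now)
```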

When ChatGPT’s algorithms detect language that may signal a serious risk, the system first informs the user that a Trusted Contact could be notified. It also suggests conversation starters to help the user reach out directly. A small team of specially trained human reviewers then evaluates the situation. If they confirm a genuine threat, the contact receives a notification via email, text message, or an in‑app alert. OpenAI aims to complete this human review within an hour.
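As described, the escalation logic amounts to a gated pipeline: no alert leaves the system unless both the classifier and a human reviewer agree. Here is a minimal Python sketch of that gate, with all names (escalate, Alert, Channel) invented for illustration rather than drawn from OpenAI's systems:

```python
from dataclasses import dataclass
from typing import Literal, Optional

# Hypothetical sketch of the two-stage escalation flow described above.

Channel = Literal["email", "sms", "in_app"]

@dataclass
class Alert:
    channel: Channel
    # The alert carries no conversation content, only a generic notice.
    message: str = ("Self-harm was mentioned in a concerning way. "
                    "Please check in with this person.")

def notify_user_of_possible_escalation() -> None:
    # Step 1: the user is warned first and nudged to reach out directly.
    print("Heads-up: your Trusted Contact may be notified. "
          "Here are some conversation starters to reach out yourself.")

def escalate(classifier_flagged: bool, reviewer_confirms: bool,
             channel: Channel = "email") -> Optional[Alert]:
    """Notify the Trusted Contact only after human review confirms risk.

    Reviewers aim to complete their assessment within an hour; only a
    confirmed, genuine threat produces an alert.
    """
    if not classifier_flagged:
        return None
    notify_user_of_possible_escalation()   # step 1: inform the user
    if not reviewer_confirms:              # step 2: human review gate
        return None
    return Alert(channel=channel)          # step 3: send the alert
```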

The Trusted Contact feature builds on OpenAI’s broader safety efforts, which include alerts for linked teen accounts when signs of distress appear. Development involved clinicians, researchers, and mental‑health organizations such as the American Psychological Association. OpenAI stresses that the new tool does not replace crisis hotlines, emergency services, or professional care; the chatbot continues to direct users to those resources when needed.

Users retain full control over the feature. They can remove or replace their Trusted Contact at any time, and contacts can opt out on their own. By giving users a way to bring a trusted person into the loop, OpenAI hopes to offset the limitations of an AI-driven conversation when deeply personal issues are at stake.

Industry observers note that the addition reflects a growing trend among AI providers to embed human‑in‑the‑loop safeguards. As AI chatbots become more ingrained in daily life, platforms are under pressure to address potential harms without compromising user privacy. OpenAI’s approach—combining algorithmic detection, rapid human review, and minimal data sharing—offers a model that balances safety with confidentiality.


Source: Digital Trends
