OpenAI adds Trusted Contact feature to flag ChatGPT users in crisis
OpenAI announced a new optional safety feature for ChatGPT called Trusted Contact. The tool allows any adult user to designate another trusted adult—a friend, relative, or caregiver—to be alerted if the AI detects that a conversation may involve self‑harm or suicidal ideation. The feature is designed to complement the chatbot’s built‑in helpline referrals by giving users a direct line to someone they already know.
Enabling Trusted Contact is a straightforward process. Users go into their ChatGPT account settings, enter the contact’s name and email address or phone number, and send an invitation. The invited person has seven days to accept; otherwise the request expires. Both parties retain full control: the user can edit or delete the contact at any time, and the contact can remove themselves from the list without penalty.
When OpenAI’s automated systems flag a conversation as potentially dangerous, the chatbot first encourages the user to reach out to their designated Trusted Contact. If the user does not respond, a small team of specially trained staff reviews the exchange. After a brief assessment, the team may send a concise email, text or in‑app notification to the contact, warning them of a possible safety issue. Importantly, the notification does not include any chat transcripts or personal details beyond the fact that a concern was raised.
The Trusted Contact feature builds on an emergency‑contact option introduced in September, which followed a tragic case in which a 16‑year‑old who had confided in ChatGPT took his own life. Meta has rolled out a comparable system for Instagram, alerting parents when minors repeatedly search for self‑harm content. OpenAI’s latest move signals a broader industry push to embed mental‑health safeguards directly into AI products.
OpenAI framed the addition as an “expert‑validated” approach, noting that connecting a person in crisis with someone they trust can make a meaningful difference. While the company highlighted the limited nature of the alerts, privacy advocates have raised questions about how the review team determines the seriousness of a flagged conversation and what data is retained. OpenAI maintains that the feature does not share chat content with the contact and that any review is conducted by a small, trained team.
Experts say the Trusted Contact option could fill a gap between anonymous AI assistance and professional help. By giving users a way to involve a personal support network, the feature may reduce reliance on generic crisis lines and encourage earlier intervention. As AI assistants become more ubiquitous, tools like Trusted Contact could become a standard part of responsible AI deployment.