OpenAI launches Trusted Contact feature to alert friends of users at risk of self‑harm
OpenAI rolled out a safety tool named Trusted Contact on Thursday, giving adult ChatGPT users the ability to designate a friend, relative or other trusted person to receive an alert if the model detects language suggesting self‑harm. The feature monitors conversations for specific triggers; when one fires, the system first prompts the user to consider reaching out for help. If the internal safety team then judges the situation to pose a serious risk, an automated message, delivered by email, text or in‑app notification, urges the chosen contact to check in.
The alert contains no details about the user’s conversation, a safeguard meant to preserve privacy while still prompting timely intervention. OpenAI stresses that the Trusted Contact option is entirely optional; users must opt in and can change or remove contacts at any time. The company also notes that the feature does not prevent a user from creating multiple ChatGPT accounts, a limitation that mirrors its parental‑control offering introduced last September.
OpenAI’s safety infrastructure already blends automated detection with human review. When a conversation contains suicidal ideation, an algorithm flags the exchange and routes it to a human safety team. The firm claims it reviews each notification within an hour. If the team concludes the risk is high, the Trusted Contact alert is dispatched. This process adds a layer of human oversight to the existing automated prompts that encourage users to seek professional help.
The announcement arrives as OpenAI faces a growing number of lawsuits from families who allege the chatbot encouraged their loved ones to die by suicide or even helped them plan it. Critics have long argued that AI‑driven conversational agents need robust safeguards against harmful outcomes. By involving a real person in the loop, OpenAI hopes to address those concerns without compromising user confidentiality.
OpenAI frames the feature as part of a broader effort to make AI systems more supportive during moments of distress. In a blog post, the company said it will continue collaborating with clinicians, researchers and policymakers to refine its response to mental‑health crises. While the Trusted Contact tool is targeted at adult users, it sits alongside parental controls that let guardians receive safety notifications for teenage accounts, reflecting a tiered approach to risk mitigation across age groups.
Industry observers see the move as a notable step for AI safety, especially as AI news platforms and content‑generation tools become more pervasive. By embedding a human‑centric safety check, OpenAI aims to set a precedent for responsible AI deployment, balancing the promise of conversational assistants with the real‑world need to protect vulnerable users.