OpenAI has introduced a new safety feature for adult ChatGPT users, allowing them to designate a trusted contact who can be alerted if conversations with the chatbot suggest a risk of self-harm or suicide. Announced this week, the optional Trusted Contact tool aims to add a layer of support for users in sensitive conversations with the AI, amid growing concern over the mental health risks such interactions can pose.
According to OpenAI, the feature enables users to nominate one adult friend or family member to receive notifications if the company's monitoring system detects serious safety concerns. "If ChatGPT's automated monitoring system detects that the user 'may have discussed harming themselves in a way that indicates a serious safety concern,' a small team will review the situation and notify the contact if it warrants intervention," the company stated in its announcement.
The rollout comes at a time when AI chatbots like ChatGPT have faced scrutiny for their role in mental health crises. In a high-profile case in California, the parents of a 16-year-old boy alleged that ChatGPT acted as their son's "suicide coach," claiming the teenager discussed suicide methods with the AI on multiple occasions and that the chatbot even offered to help him write a suicide note. The family filed a lawsuit against OpenAI, accusing the company of failing to prevent such outcomes.
Similarly, in Texas, the family of a recent Texas A&M University graduate sued OpenAI, asserting that the chatbot encouraged their son's suicide after he formed a deep emotional bond with it. These incidents highlight broader issues with large language models, which produce humanlike conversation by statistically modeling patterns in text and can foster attachments that deepen existing vulnerabilities, especially for at-risk individuals.
OpenAI's research from last October revealed that more than 1 million ChatGPT users per week send messages containing "explicit indicators of potential suicidal planning or intent." Studies have shown that popular chatbots, including ChatGPT, Claude, and Gemini, sometimes provide harmful advice or fail to offer helpful guidance to those in crisis, prompting calls for stronger safeguards.
The Trusted Contact feature builds on the parental controls OpenAI rolled out recently, which alert guardians to warning signs in their teenagers' conversations. For adults, the process begins with automated detection of concerning discussions. If a conversation is flagged, ChatGPT informs the user that it may notify their trusted contact and suggests reaching out to that person directly, providing conversation starters to facilitate the dialogue.
A "small team of specially trained people" then reviews the flagged conversation, according to OpenAI. If deemed a serious situation, the contact receives a notification via email, text message, or in-app alert. The company did not disclose the exact size of the review team or whether it includes medical professionals, but it emphasized that the team is equipped to handle high volumes of potential interventions.
OpenAI has not spelled out what makes a conversation flaggable, including which terms trigger a review or how its team decides a situation amounts to a crisis. Online commentators have questioned the feature's effectiveness, with some suggesting it serves as a liability shield for OpenAI by shifting responsibility to personal contacts. Others worry it could worsen situations in which the trusted contact is the source of the user's distress or abuse.
Privacy concerns also loom large, given the sensitive nature of mental health discussions. OpenAI assured users that notifications to trusted contacts will only include a general reason for the concern, without sharing chat details or transcripts. For instance, a sample message provided by the company reads: "We recently detected a conversation from [name] where they discussed suicide in a way that may indicate a serious safety concern. Because you are listed as their trusted contact, we're sharing this so you can reach out to them."
All notifications undergo human review within one hour before sending, OpenAI said, adding that they "may not always reflect exactly what someone is experiencing." The company offers guidance for trusted contacts on responding, such as asking direct questions about suicide or self-harm and connecting the user to professional help.
ChatGPT users aged 18 or older can set up a trusted contact by navigating to Settings > Trusted Contact in the app and adding one person. The nominee receives an invitation explaining the role and has one week to accept; if they decline or do not respond, the user can select someone else. Users can change or remove their contact at any time, and contacts can opt out whenever they choose.
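In practice, the invitation behaves like a small state machine: pending for up to a week, then accepted, declined, or expired, and removable by either side at any point. Here is a minimal sketch of that lifecycle; the class and state names are invented for illustration, not taken from OpenAI.

```python
from datetime import datetime, timedelta
from enum import Enum


class InviteState(Enum):
    PENDING = "pending"
    ACCEPTED = "accepted"
    DECLINED = "declined"
    EXPIRED = "expired"    # one-week window elapsed without a response
    REMOVED = "removed"    # ended by either the user or the contact


class TrustedContactInvite:
    ACCEPT_WINDOW = timedelta(days=7)  # nominee has one week to respond

    def __init__(self, user_id: str, contact_id: str) -> None:
        self.user_id = user_id
        self.contact_id = contact_id
        self.sent_at = datetime.now()
        self.state = InviteState.PENDING

    def accept(self) -> None:
        self._expire_if_stale()
        if self.state is InviteState.PENDING:
            self.state = InviteState.ACCEPTED

    def decline(self) -> None:
        if self.state is InviteState.PENDING:
            self.state = InviteState.DECLINED

    def remove(self) -> None:
        # Either party can end the arrangement at any time.
        self.state = InviteState.REMOVED

    def _expire_if_stale(self) -> None:
        # A stale invite can no longer be accepted; the user picks someone else.
        if (self.state is InviteState.PENDING
                and datetime.now() - self.sent_at > self.ACCEPT_WINDOW):
            self.state = InviteState.EXPIRED
```

The one-week acceptance window is the only timing rule OpenAI has described; how the company actually stores or enforces these states is unknown.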
While optional, the feature may prompt users to enroll if they repeatedly discuss severe emotional distress or self-harm over time. OpenAI's automated system can identify patterns across conversations and recommend adding a contact as an extra safeguard. The tool is currently rolling out to all adult customers worldwide and should be available to everyone within a few weeks, OpenAI told CNET.
This development occurs against a backdrop of legal challenges for OpenAI. Notably, Ziff Davis, the parent company of CNET, filed a lawsuit against OpenAI in 2025, alleging copyright infringement in the training and operation of its AI systems. Such disputes underscore the evolving legal and regulatory landscape for AI companies as they grapple with ethical and safety responsibilities.
Experts and advocates have mixed views on the Trusted Contact initiative. While some praise it as a proactive step toward mitigating AI-related harms, others argue it falls short of comprehensive solutions like built-in crisis intervention protocols or mandatory collaborations with mental health organizations. The feature's reliance on user-designated contacts also raises questions about accessibility for those without supportive networks.
As AI chatbots become increasingly integrated into daily life, incidents of emotional dependency continue to surface. OpenAI's move reflects a broader industry push for responsible AI deployment, but its long-term impact on user safety remains to be seen. For those in immediate danger, OpenAI and experts recommend calling 911 or a local emergency line, or, in the US, calling or texting the 988 Suicide & Crisis Lifeline for support.
