AI chatbots have transformed how we discuss personal topics, including some of life’s most difficult moments. But this openness comes with responsibility. OpenAI is stepping up with a new feature called ChatGPT Trusted Contact, designed to bring a human into the loop when conversations take a serious turn.
Rolling out now for adult users, this optional setting lets you designate one person who can be alerted if the AI detects potential self-harm concerns. It’s a proactive move that blends technology with human oversight.
How Does ChatGPT Trusted Contact Work?
Setting up a ChatGPT Trusted Contact is straightforward but comes with clear rules. The person you choose must be at least 18 years old—or 19 in South Korea. Once you nominate someone, they receive an invitation explaining their role. They have one week to accept before the feature activates. If they decline, you can pick another contact.
Alerts aren’t sent automatically. When ChatGPT’s systems flag a conversation as concerning, the chatbot first tells you that your contact may be notified, and it suggests conversation starters to help you reach out directly. A small team of specially trained human reviewers then evaluates the situation; only if they confirm a serious risk is your contact notified, via email, text, or in-app alert.
What Information Is Shared?
Importantly, the alert doesn’t share chat transcripts or details. It simply states that self-harm came up in a potentially concerning way and asks the contact to check in. OpenAI aims to complete this human review within one hour, ensuring timely support without compromising privacy.
Why Is OpenAI Adding This Now?
This feature builds on earlier safety measures. Previously, OpenAI introduced alerts that notify parents when linked teen accounts show signs of distress. ChatGPT Trusted Contact extends that protection to adults. It was developed with input from clinicians, researchers, and mental health organizations, including the American Psychological Association.
However, this feature isn’t a replacement for professional help. ChatGPT will still direct users to crisis hotlines and emergency services when needed. You can remove or change your trusted contact anytime, and contacts can opt out whenever they wish.
As AI becomes a confidant for many, ChatGPT Trusted Contact acknowledges that technology has limits. It’s a step toward blending digital support with real human connection. For more on AI safety, check out our guide on AI safety tips for users.
What This Means for Users
The reality is that people use ChatGPT for deeply personal conversations, whether OpenAI planned for it or not. Adding a feature like this is a move in the right direction. It’s also an acknowledgment that a chatbot can only do so much.
If you’re considering setting up a trusted contact, remember it’s optional but potentially life-saving. For more on mental health resources, visit our mental health support page.
In summary, ChatGPT Trusted Contact represents a thoughtful evolution in AI safety. It combines automated detection with human judgment, offering a safety net without overstepping privacy boundaries.