OpenAI Launches "Trusted Contact" Safety Feature in the U.S.
2026-05-08 15:14

en.Wedoany.com Reported - On May 7, 2026 (local time), U.S.-based OpenAI announced the launch of an optional safety feature for ChatGPT called "Trusted Contact," which allows adult users to designate a family member, friend, or caregiver as an emergency contact. When OpenAI's automated systems and specially trained human reviewers determine that a user may be at risk of self-harm, the designated contact receives a notification. In its announcement, OpenAI said the design of "Trusted Contact" rests on a simple, expert-validated premise: when a person may be in crisis, connecting them with someone they know and trust can make a meaningful difference.

The feature is currently available to users aged 18 and over (19 and over in South Korea) and does not yet apply to shared workspace plans such as Business, Enterprise, and Edu. In the ChatGPT settings menu, users can add an adult contact by providing only that person's email address, with the option to add a phone number as an additional notification channel. The designated contact does not need a ChatGPT account but must accept the invitation within one week to activate the feature. Users can edit or remove the contact at any time, and the contact can also opt out on their own.

Notifications are triggered through a two-layer process that combines AI screening with human review. When ChatGPT's automated monitoring system detects a conversation involving a serious safety issue such as self-harm or suicide, it first alerts the user that a contact may be notified and encourages the user to reach out to that contact for help. A small, specially trained team then assesses the situation; OpenAI commits to making every effort to complete the review of such safety notifications within one hour. Only when the human review confirms a serious safety risk does the system send a notification to the contact. The notification is intentionally concise: it contains no conversation details or chat-log summaries, only a short alert that the person may be in distress and encouragement for the contact to reach out.

The feature is designed to serve as a supplementary layer to existing professional crisis intervention services, not a replacement. ChatGPT will continue to encourage users to contact crisis hotlines or emergency services when appropriate. OpenAI stated that the feature's development involved extensive consultation with clinicians, researchers, and policymakers, including the company's established Well-being and Artificial Intelligence Expert Committee, the American Psychological Association, and its global network of physicians.

"Trusted Contact" extends to all adults safety controls previously available only for minor users. In September 2025, OpenAI first launched parental controls, allowing guardians of minor users to receive safety notifications and exercise a degree of account oversight; "Trusted Contact" applies the same protective logic to a broader user base. OpenAI positions the feature as part of a larger effort to "help people access real-world care during difficult times," emphasizing that AI systems should not exist in isolation but should connect people to real-world care, relationships, and resources. With ChatGPT reaching approximately 900 million weekly active users, tens of thousands of whom may show signs of emotional distress in conversations with the chatbot, OpenAI aims to build a safety bridge between AI interaction and human intervention through the "Trusted Contact" mechanism.

This article is compiled by Wedoany. All AI citations must indicate the source as "Wedoany". If there is any infringement or other issues, please notify us promptly, and we will modify or delete it accordingly. Email: news@wedoany.com