OpenAI says 0.15% of ChatGPT users discuss suicide, form emotional reliance

OpenAI, under the leadership of Sam Altman, has revealed that approximately 0.15% of ChatGPT's weekly users express suicidal thoughts or intentions. While the percentage may seem small, it represents a substantial number of people given the platform's global audience. In a recent blog post, OpenAI emphasized that ChatGPT is not designed to serve as a therapist.

The company says its latest model, GPT-5, significantly improves the chatbot's handling of mental health discussions, reducing unsafe or non-compliant responses by as much as 80% on sensitive topics. GPT-5 also performs better when users show signs of psychosis, mania, or emotional dependency on the chatbot. These improvements grew out of extensive collaboration with mental health professionals from OpenAI's Global Physician Network, which comprises nearly 300 clinicians across 60 countries. More than 170 of these experts contributed directly to refining the model: drafting and evaluating responses, establishing guidelines for safe interactions, and assessing how the model handles delicate situations.

Importantly, OpenAI's intent is not to turn ChatGPT into a therapeutic tool. The focus is on improving the chatbot's ability to recognize signs of distress and steer users toward appropriate professional support. The model is now better at connecting people with crisis helplines and encouraging breaks during long emotional exchanges. Internal testing indicates that GPT-5 generates 65–80% fewer unsafe responses than earlier versions when users are in mental distress. Evaluations by independent clinicians found that GPT-5 reduced undesirable replies by 39% to 52% relative to its predecessor, GPT-4. Automated assessments rated the model's compliance with desired interaction standards at 91–92%, up from below 77% in older iterations.

GPT-5 has also improved at managing long, intricate conversations, maintaining over 95% consistency across multi-turn dialogues where previous models struggled. A newer challenge for OpenAI is emotional reliance, where users form unhealthy attachments to the chatbot. Using a new framework to identify and measure this behavior, GPT-5 has shown an 80% reduction in problematic responses, often guiding users toward human interaction instead of fostering emotional dependence.

Despite these advances, OpenAI acknowledges that mental health conversations are rare and hard to measure accurately: at such low prevalence rates, even minor fluctuations can skew results, and experts do not agree on what constitutes a 'safe' interaction. Clinicians reviewing model responses agreed with one another only 71–77% of the time.
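To put the 0.15% figure in perspective, a back-of-the-envelope calculation helps. The user base below is an assumption: roughly 800 million weekly active users, a number widely reported for ChatGPT in late 2025 but not stated in this article.

```python
# Rough scale of the 0.15% figure reported by OpenAI.
# ASSUMPTION: ~800 million weekly active users, a figure widely
# reported for ChatGPT in late 2025 but not given in this article.
weekly_users = 800_000_000
share_discussing_suicide = 0.0015  # 0.15%, per OpenAI's blog post

affected_per_week = weekly_users * share_discussing_suicide
print(f"{affected_per_week:,.0f} users per week")  # → 1,200,000
```

Even a "small" percentage of a platform this large works out to on the order of a million people each week, which is why OpenAI treats the category as significant despite its low relative frequency.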

Source: Mint

Published On: Oct 28, 2025, 02:25
