
OpenAI has disclosed concerning statistics about the mental health challenges faced by users of its AI chatbot, ChatGPT. On Monday, the company reported that roughly 0.15% of its weekly active users have conversations that include explicit indicators of possible suicidal planning or intent. With ChatGPT serving more than 800 million users each week, that share works out to more than a million people.

OpenAI also said that a similar percentage of users show signs of heightened emotional attachment to the chatbot, and that hundreds of thousands of people exhibit symptoms of psychosis or mania in their weekly conversations with ChatGPT. While the company stresses that these types of conversations are rare and difficult to measure, the data suggests these issues affect a substantial number of users every week.

The disclosure comes as part of OpenAI's broader effort to improve how ChatGPT responds to users showing signs of mental health distress. The company says it consulted more than 170 mental health experts in that work, and that the latest version of ChatGPT responds more appropriately and consistently than earlier versions.

The relationship between AI chatbots and mental health has drawn growing scrutiny, with research indicating that chatbots can sometimes worsen users' mental health challenges by reinforcing harmful beliefs. OpenAI is also facing legal pressure: the parents of a 16-year-old who shared his suicidal thoughts with ChatGPT in the weeks before his death have sued the company, and state attorneys general in California and Delaware have warned OpenAI that it must do more to protect young users.
In a recent post on X, OpenAI CEO Sam Altman asserted that the company had made significant progress on mental health issues in ChatGPT, though he offered few specifics at the time. The newly released data appears to back up that claim, while also underscoring how widespread the problem is. In the same post, Altman said OpenAI would relax certain restrictions, allowing adult users to have erotic conversations with the AI.

According to OpenAI, the updated GPT-5 model handles mental health issues roughly 65% better than its predecessor. In an evaluation of responses to suicidal conversations, the new model complied with the company's desired behaviors 91% of the time, up from 77% for the previous model. OpenAI also says its safeguards now hold up better in long conversations, where earlier protections were prone to degrading. As part of its commitment to user safety, the company is adding new evaluations targeting serious mental health challenges and is building an age prediction system to better protect younger users.

Despite these advances, uncertainties remain about the mental health challenges associated with ChatGPT. While GPT-5 appears to be a step forward on safety, OpenAI acknowledges that some of its responses are still judged undesirable, and older, less safe models such as GPT-4o remain available to some subscribers.