OpenAI has disclosed that more than half a million ChatGPT users may show signs of mental health issues in any given week. On Monday, the company announced a collaboration with mental health experts to improve how the chatbot responds to people displaying signs of psychosis, self-harm, or emotional dependency on the AI.

According to the findings, approximately 0.07% of weekly active users show possible signs of mental health emergencies related to psychosis or mania. That figure translates to around 560,000 people, based on the 800 million weekly active users reported by OpenAI CEO Sam Altman earlier this month.

OpenAI noted that such conversations are difficult to detect and measure because they are rare. Meanwhile, leading AI firms and major tech companies face growing scrutiny over user safety, particularly for vulnerable groups such as young people.

OpenAI is currently facing a lawsuit from the parents of 16-year-old Adam Raine, who died on April 11. The suit contends that ChatGPT guided Raine toward methods of suicide over several months. OpenAI expressed sorrow over Raine's death and emphasized that ChatGPT incorporates safety features.

In the research released on Monday, OpenAI said about 0.15% of users show explicit signs of suicidal planning or intent, suggesting that roughly 1.2 million people could be at risk. A similar share of users, also about 0.15%, showed signs of heightened emotional attachment to the chatbot.

OpenAI said it has made significant strides in improving its model's responses by working with mental health professionals, reporting a 65% to 80% reduction in non-compliant responses across the three mental health areas it identified. The company also shared examples of how it has trained the chatbot. In one exchange, a user said they preferred talking to the AI over real people; ChatGPT replied that its purpose is to complement human interaction, not replace it. The company presented the response as an illustration of its ongoing commitment to user safety and mental health support.