
OpenAI has announced a significant update to ChatGPT, set to roll out within the next month: the company will route sensitive conversations to more advanced reasoning models, including GPT-5, amid mounting concern over the chatbot's handling of users in distress.

The change follows troubling incidents in which ChatGPT failed to respond adequately to people showing signs of severe mental distress. The most prominent case involves Adam Raine, a teenager whose family has filed a wrongful death lawsuit after he discussed self-harm with the chatbot before his death. In another case, reported by The Wall Street Journal, Stein-Erik Soelberg, who struggled with mental illness, used the AI to validate harmful beliefs before killing his mother and then himself.

OpenAI has acknowledged the limits of its current systems, which convincingly mimic human dialogue but can fall short in critical situations. "We recently introduced a real-time router that can select between efficient chat models and reasoning models based on the context of the conversation," the company stated. In practice, if the system detects acute distress, it will automatically hand the conversation to a reasoning model such as GPT-5, which is intended to produce more deliberate, supportive responses.

Alongside the routing change, OpenAI will introduce parental controls that let parents link their accounts to their children's. Parents will be able to set age-appropriate guidelines and receive alerts when the system detects signs of distress. They will also have the option to disable memory and chat history, addressing concerns that the chatbot can foster unhealthy attachments or reinforce negative thought patterns.
CEO Sam Altman emphasized the importance of personalization in these interactions, noting that users seek different kinds of engagement from the AI: some prefer direct, logical responses, while others want a more empathetic tone. The updates are part of a broader 120-day initiative to strengthen safety and well-being protections. OpenAI is collaborating with health and safety experts, including specialists in adolescent care and mental health, through its Global Physician Network and its Expert Council on Well-Being and AI. In-app reminders will encourage users to take breaks during extended sessions, but OpenAI has emphasized that it will not cut off conversations entirely, even when users appear to be spiraling. The overarching goal is to improve safety while preserving user autonomy. "We are confident we can offer much more customization while still promoting healthy usage," Altman stated.