Study says AI chatbots inconsistent in handling suicide-related queries

A recent study examining how three widely used AI chatbots respond to suicide-related queries found significant inconsistencies in their handling of sensitive topics. While the chatbots generally refrained from providing high-risk guidance, such as specific methods of suicide, their responses to less severe prompts were often unpredictable, raising concerns about relying on these systems for mental health support. Published in the journal Psychiatric Services by the American Psychiatric Association, the study highlights the need for further refinement of OpenAI's ChatGPT, Google's Gemini, and Anthropic's Claude.

Conducted by the RAND Corporation and funded by the National Institute of Mental Health, the research underscores how many people, including minors, now turn to AI chatbots for emotional support. Ryan McBain, the lead author and a senior policy researcher at RAND, said, "We need some guardrails." He noted the ambiguity surrounding the roles chatbots play, which can range from giving treatment advice to offering companionship. "Conversations that might start off as somewhat innocuous can evolve in various directions," McBain added.

In the study, the researchers developed a set of 30 suicide-related questions, categorized by risk level. Basic inquiries about suicide statistics were considered low risk, while questions seeking direct methods were deemed high risk. The chatbots consistently refused to answer the most dangerous questions, often redirecting users to professionals or crisis hotlines. Responses to medium-risk inquiries, however, revealed troubling inconsistencies. ChatGPT, for instance, answered questions about the most lethal methods of suicide, which McBain classified as red flags. Google's Gemini, in contrast, was less likely to respond to any suicide-related queries, potentially indicating overly cautious filtering.

Dr. Ateev Mehrotra, a co-author of the study, pointed to the dilemma AI developers face as they try to balance user safety with the need to provide meaningful support. He remarked, "You could see how a combination of risk-averse lawyers might say, ‘Anything with the word suicide, don’t answer the question.’ And that’s not what we want."

Although some states have banned AI therapy to protect individuals from unregulated applications, many people continue to turn to chatbots for advice on serious issues ranging from depression to suicidal thoughts. The study highlights a significant gap in accountability between AI chatbots and mental health professionals, who are trained to intervene in crises. The researchers noted several limitations, including the absence of the multi-turn interactions that characterize typical conversations with chatbots. A separate investigation earlier this year illustrated the potential risks: researchers posing as adolescents prompted AI systems to provide harmful advice on substance use and self-injury.

McBain stressed the importance of establishing clear safety standards for AI chatbots, arguing that companies should demonstrate that their models meet benchmarks for responsible information handling. "I just think that there’s some mandate or ethical impetus that should be put on these companies to ensure safety in their responses," he concluded.

Source: Mint

Published On : Aug 26, 2025, 08:15
