Think OpenAI's ChatGPT chats are private? Police could be reading them

OpenAI has disclosed that ChatGPT conversations indicating a serious risk of physical harm to others may be reviewed by human moderators and, in severe cases, referred to law enforcement. The disclosure came in a recent blog post describing how the AI handles sensitive conversations and safety concerns.

The company says that while ChatGPT aims to offer compassionate support to users in distress, its protocols distinguish between self-destructive behavior and threats directed at others. Users expressing suicidal thoughts are pointed to resources such as the 988 hotline in the U.S. and the Samaritans in the U.K.; these conversations are not escalated to the authorities, preserving user privacy.

In contrast, if a user signals an intention to harm someone else, the conversation is routed to a specialized review pipeline. Human moderators trained on the company's usage policies analyze the exchange. If they identify an imminent threat, OpenAI may notify the appropriate authorities, and the associated account may be banned.

OpenAI acknowledges that these safeguards work best in brief chats. In long or recurring conversations their effectiveness can degrade, occasionally producing responses that contradict safety guidelines. The company says it is strengthening its protections so they hold up consistently across multiple interactions and do not open vulnerabilities that could elevate risk.

Beyond direct threats, OpenAI is also exploring proactive interventions for other risky behaviors, such as extreme sleep deprivation or dangerous stunts, with the aim of grounding users in reality and directing them toward professional help. The company is additionally working on parental controls for teenage users and investigating ways to connect individuals with trusted contacts or licensed therapists before a crisis escalates.

The blog post is a reminder that conversations on ChatGPT are not entirely confidential. If a discussion suggests potential danger to others, it may be reviewed by trained moderators, potentially leading to real-world intervention, including police action.
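The routing the post describes amounts to a simple triage rule: self-harm signals get crisis resources without escalation, while threats against others go to human review and, if imminent, may be reported to the authorities. The Python sketch below is a hypothetical illustration of that policy only; the RiskCategory labels, the triage function, and the TriageResult fields are invented for this example and do not reflect OpenAI's actual implementation.

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class RiskCategory(Enum):
    NONE = auto()
    SELF_HARM = auto()        # user at risk of harming themselves
    HARM_TO_OTHERS = auto()   # user threatening someone else

@dataclass
class TriageResult:
    escalate_to_human_review: bool
    may_notify_authorities: bool
    resources: list[str] = field(default_factory=list)

def triage(category: RiskCategory, imminent: bool) -> TriageResult:
    """Route a flagged conversation per the policy described in the post."""
    if category is RiskCategory.SELF_HARM:
        # Self-harm cases receive crisis resources but are NOT referred
        # to police, preserving the user's privacy.
        return TriageResult(
            escalate_to_human_review=False,
            may_notify_authorities=False,
            resources=["988 Suicide & Crisis Lifeline (US)", "Samaritans (UK)"],
        )
    if category is RiskCategory.HARM_TO_OTHERS:
        # Threats against others go to trained human moderators; imminent
        # threats may additionally be reported to law enforcement.
        return TriageResult(
            escalate_to_human_review=True,
            may_notify_authorities=imminent,
        )
    return TriageResult(escalate_to_human_review=False,
                        may_notify_authorities=False)

# Example: an imminent threat against another person is escalated.
print(triage(RiskCategory.HARM_TO_OTHERS, imminent=True))
```

Running the example prints a TriageResult with escalate_to_human_review=True and may_notify_authorities=True, mirroring the escalation path the post describes for imminent threats to others.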

Source: Mint

Published On: Sep 02, 2025, 05:45
