Think OpenAI's ChatGPT chats are private? Police could be reading them

OpenAI has disclosed that ChatGPT conversations indicating a significant risk of physical harm to others may be reviewed by human moderators and, in severe cases, reported to law enforcement. The disclosure came in a recent blog post detailing how the company handles sensitive conversations and safety concerns.

The company emphasizes that while ChatGPT aims to offer compassionate support to users in distress, it has protocols to distinguish self-harm from threats directed at others. For users expressing suicidal thoughts, the AI provides resources such as the 988 hotline in the U.S. and the Samaritans in the U.K., without escalating these situations to the authorities, thereby safeguarding user privacy.

In contrast, if a user signals an intention to harm someone else, the conversation is routed through a specialized review process. Human moderators trained on the company's usage policies analyze these exchanges. Should they identify an immediate threat, OpenAI may notify the appropriate authorities, and accounts associated with such threats may be banned.

The company acknowledges that its safety protocols work best in brief chats. In lengthy or recurring conversations, these safeguards may degrade, potentially producing responses that contradict safety guidelines. To address this, OpenAI is strengthening its protections to maintain consistent safety across multiple interactions and close gaps that might elevate risk.

Beyond managing direct threats, OpenAI is also exploring proactive measures for other risky behaviors, such as extreme sleep deprivation or dangerous stunts. The aim is to ground users in reality and direct them toward professional assistance.
Furthermore, the company is working on implementing parental controls for teenage users and investigating ways to connect individuals to trusted contacts or licensed therapists before crises escalate. OpenAI's blog post serves as a crucial reminder that conversations on ChatGPT are not entirely confidential in certain situations. Users should remain aware that if their discussions suggest potential danger to others, they may be subject to review by trained moderators, potentially leading to real-world interventions, including police action.

Source: Mint

Published on: Sep 02, 2025, 05:45