People use AI for companionship much less than we’re led to believe

The narrative surrounding AI chatbots as sources of emotional support and companionship often suggests that these interactions are widespread. A recent study by Anthropic, the creator of the AI chatbot Claude, paints a different picture: users rarely turn to Claude for companionship, seeking emotional support or personal advice in only 2.9% of interactions.

Anthropic's research examined what it calls "affective conversations," which include exchanges involving coaching, counseling, companionship, roleplay, or relationship advice. Analyzing 4.5 million conversations from the Free and Pro versions of Claude, the company found that most users engage with the chatbot primarily for work-related tasks and content generation.

While companionship is not a common reason for engaging with Claude, the study did find that users often seek interpersonal advice, coaching, and counseling. Frequently discussed topics include improving mental health, personal and professional development, and communication skills. The report notes that in instances of emotional distress, such as loneliness or existential concerns, conversations may shift from seeking help to seeking companionship. Anthropic observed that in longer exchanges, counseling dialogues sometimes transition into companionship, though such lengthy interactions are not typical.

The report also highlighted that Claude rarely resists users' requests, except when safety protocols require it, such as declining to give dangerous advice or to engage with discussions of self-harm. Furthermore, interactions tend to become more positive over time when users seek advice or coaching from Claude.

Overall, the report sheds light on the diverse ways AI tools are used beyond work, while serving as a reminder of their ongoing limitations: AI chatbots, including Claude, continue to evolve but remain prone to inaccuracies and may present risks, as Anthropic itself acknowledges.

Source: TechCrunch

Published on: Jun 26, 2025, 17:26
