Being too nice online is a dead giveaway for AI bots, study suggests

A recent study suggests that overly polite replies on social media can give away AI bots attempting to mimic human interaction. Conducted by researchers from the University of Zurich, the University of Amsterdam, Duke University, and NYU, the study found that AI models can be distinguished from human users largely by their excessively friendly emotional tone. The team examined nine open-weight AI models across Twitter/X, Bluesky, and Reddit, and the classifiers they built detected AI-generated replies with 70 to 80 percent accuracy.

By introducing a framework they call a "computational Turing test," the researchers moved beyond subjective judgments of text authenticity, using automated classifiers and linguistic analysis to pinpoint the characteristics that set machine-generated content apart from human writing. Led by Nicolò Pagan of the University of Zurich, the team explored optimization strategies ranging from basic prompting to more complex fine-tuning to make the models more conversational. Even so, emotional tone remained a reliable indicator of AI involvement in online dialogue.

The models tested included popular open-weight language models such as Llama 3.1 and Mistral 7B. When asked to generate replies to real social media posts, these systems struggled to reproduce the casual tone and emotional nuance typical of human interaction, consistently scoring lower in toxicity than genuine human replies. Efforts to improve their output, including providing examples and additional context, did not close the gap. The researchers conclude that their findings challenge the assumption that more optimization leads to more human-like communication in AI models.
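To make the "computational Turing test" idea concrete, here is a toy sketch of the general approach: score each reply on a linguistic feature (here, a crude warmth/positivity measure) and flag replies whose tone is unusually friendly. This is purely illustrative; the word lists, threshold, and scoring function below are hypothetical assumptions, not the study's actual features or classifiers.

```python
# Toy illustration of tone-based bot detection (NOT the study's method).
# Hypothetical word lists and threshold chosen for demonstration only.

POSITIVE_WORDS = {"great", "wonderful", "amazing", "thanks", "love", "appreciate"}
NEGATIVE_WORDS = {"stupid", "hate", "awful", "wrong", "idiot", "garbage"}

def warmth_score(text: str) -> float:
    """Net fraction of tokens that are strongly positive vs. strongly negative."""
    tokens = [t.strip(".,!?").lower() for t in text.split()]
    if not tokens:
        return 0.0
    pos = sum(t in POSITIVE_WORDS for t in tokens)
    neg = sum(t in NEGATIVE_WORDS for t in tokens)
    return (pos - neg) / len(tokens)

def classify(text: str, threshold: float = 0.15) -> str:
    """Flag unusually warm, low-toxicity replies as likely AI-generated."""
    return "likely-ai" if warmth_score(text) > threshold else "likely-human"
```

In practice, the study reports combining automated classifiers with linguistic analysis over many features; a real system would learn the decision boundary from labeled data rather than hard-code a threshold as above.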

Source: Ars Technica

Published On : Nov 07, 2025, 20:20
