US regulator probes AI chatbots over child safety concerns

The Federal Trade Commission (FTC) has initiated an investigation into AI chatbots that serve as digital companions, with a particular focus on the safety of children and teenagers. Announced on Thursday, the inquiry targets seven companies, including major players like Alphabet, Meta, OpenAI, and Snap. The FTC is seeking details on how these firms monitor and mitigate potential negative effects from chatbots designed to emulate human relationships. Chairman Andrew Ferguson emphasized the FTC's commitment to safeguarding children online while also maintaining the United States' leadership in artificial intelligence innovation.

The inquiry specifically examines chatbots that use generative AI to replicate human communication and emotional interactions, often positioning themselves as friends or confidants. Regulators have raised alarms about the potential vulnerability of young users who form attachments to these AI systems. The FTC plans to leverage its extensive investigative powers to scrutinize how companies monetize user interactions, shape chatbot personalities, and assess any potential harm caused. The agency also wants to understand the measures these companies take to restrict children's access and to comply with existing laws protecting minors' privacy online.

Among the companies under scrutiny are Character.AI and Elon Musk's xAI Corp, both of which operate consumer-facing AI chatbots. The investigation will examine how these platforms manage personal data from user conversations and enforce age restrictions. The FTC's unanimous decision to launch the study does not carry immediate law enforcement intentions but may guide future regulatory actions. The probe comes at a time when AI chatbots are becoming increasingly advanced and popular, prompting concerns about their psychological effects on vulnerable groups, particularly young people.
Recently, the parents of a teenager who took his own life in April filed a lawsuit against OpenAI, claiming that ChatGPT provided their son with detailed instructions on how to commit suicide. Following the lawsuit, OpenAI announced corrective actions for its flagship chatbot, acknowledging that during prolonged interactions ChatGPT sometimes fails to direct users who mention suicidal thoughts toward mental health services.

Source: Mint

Published on: Sep 11, 2025, 21:55
