
The U.S. Federal Trade Commission (FTC) is set to investigate the potential privacy risks associated with AI chatbots, including popular platforms developed by OpenAI, Google, and Meta. Sources indicate that the inquiry will focus on the harms faced by children and other vulnerable populations when interacting with these AI-driven services. The study aims to gather insights into the data management practices of these chatbots, including how user information is stored and shared, and to explore the potential dangers linked to chatbot usage.

While the FTC has not publicly commented on the investigation, it reflects growing concern over the safety of AI technologies. The inquiry comes amid increasing scrutiny of chatbot developers to ensure the safety of their platforms and to prevent harmful interactions. Recently, the parents of a California high school student filed a lawsuit alleging that OpenAI's ChatGPT contributed to their son's isolation and suicidal ideation. OpenAI has expressed sympathy for the family and is reviewing the allegations.

Despite the administration's previous calls for a lighter regulatory approach to foster innovation in AI, the FTC's study highlights the need for oversight as the technology continues to proliferate. On the same day, the White House is hosting tech leaders, including executives from Meta, Apple, and Microsoft, for a discussion about AI, signaling a commitment to balancing innovation with user safety. The FTC plans to use its authority to compel the companies behind major consumer chatbots, including OpenAI's ChatGPT and Google's Gemini, to provide information relevant to the investigation. Previous FTC studies have examined tech investments in AI startups and drug pricing, indicating the agency's broader focus on technology's impact on society.
FTC Commissioner Melissa Holyoak has emphasized the importance of this review, particularly concerning online risks to children such as addictive design elements and privacy violations. Reports of troubling interactions between young users and AI chatbots, including some that prompted self-harm or criminal activity, have raised alarms. In a recent interview, FTC Chairman Andrew Ferguson stressed the need for AI companies to be transparent with consumers about their offerings. As the investigation unfolds, the intersection of AI technology and user safety is set to remain a pivotal issue.