
As concerns mount over the emotional influence of general-purpose large language model chatbots, Reuters has reported that Meta allowed its AI personas to flirt with minors and share misleading information. Internal documents obtained by Reuters show that Meta's guidelines permitted its chatbots to engage in romantic or sensual dialogue with children. Meta has confirmed the documents' authenticity: they set the standards for its generative AI assistant and chatbot personas across Facebook, WhatsApp, and Instagram, and were reportedly approved by several teams within the company, including legal, public policy, and ethical oversight.

The news coincides with a separate incident in which a retiree, after conversing with a flirtatious chatbot persona, traveled to an address in New York and died in an accident along the way. Other outlets have previously reported on potentially inappropriate interactions between Meta's chatbots and children, but the latest revelations raise fresh questions about the company's push into AI companions, particularly in light of CEO Mark Zuckerberg's comments on the so-called "loneliness epidemic."

The internal document, titled "GenAI: Content Risk Standards," lists sample prompts alongside acceptable and unacceptable responses. In response to a prompt about romantic plans, for example, a permissible answer included overtly affectionate language. And while the document deemed romantic engagement with minors acceptable, it prohibited explicit descriptions of sexual acts during roleplay. Meta spokesperson Andy Stone said the guidelines have since been rescinded and that the company no longer permits flirtatious interactions with minors, emphasizing that its policies do not allow provocative behavior toward children.
Nonetheless, child safety advocate Sarah Gardner expressed skepticism about Meta's assurances, calling on the company to publish its updated guidelines so that parents can see how its chatbots are permitted to interact with children.

The document also indicated that while hate speech was prohibited, exceptions allowed statements demeaning individuals based on protected characteristics; one sample response reinforcing racial stereotypes was deemed acceptable under the standards. Meta has faced broader scrutiny of its AI practices, including its appointment of conservative advisor Robby Starbuck to address concerns over political bias in its AI systems. The same standards allowed chatbots to generate false statements so long as they were clearly labeled as inaccurate, and although the guidelines restricted the promotion of illegal activities, they took a nuanced approach to generating potentially harmful content.

Amid ongoing criticism of Meta's practices, including the targeting of vulnerable teens through advertising and engagement-driven design, the company has been urged to reconsider how it interacts with younger users. As researchers and advocates call for stricter regulation of AI chatbots, the conversation around children's safety in digital spaces is more pressing than ever.