
A recently uncovered internal Meta Platforms document reveals troubling policies governing the company's chatbots, permitting them to engage in inappropriate conversations with minors and to generate misleading medical content. According to a review by Reuters, the guidelines sanctioned "romantic or sensual" dialogues with children and allowed disparaging claims about racial intelligence. Meta confirmed the document's authenticity; it sets the standards for the company's generative AI assistant and the chatbots used across Facebook, WhatsApp, and Instagram. Following inquiries from Reuters, the company removed the sections permitting flirtatious interactions with minors.

Titled "GenAI: Content Risk Standards," the 200-page document outlines acceptable chatbot behaviors as defined by Meta's legal and engineering teams. While it claims to establish ideal outputs, the guidelines allowed for provocative language: it was deemed acceptable, for instance, for a chatbot to compliment a child's appearance in terms suggesting attractiveness. Meta spokesperson Andy Stone said such conversations should never have been permitted and acknowledged that enforcement of the guidelines had been inconsistent. Although chatbots are officially barred from sexualizing minors, the initial framework failed to uphold that prohibition uniformly.

The document also permits the generation of false statements, so long as they carry an explicit acknowledgment of their inaccuracy. A chatbot could, for example, claim that a British royal has a sexually transmitted infection, provided the claim is accompanied by a disclaimer identifying it as false. Other parts of the guidelines go further, allowing content that demeans individuals on the basis of race.
Evelyn Douek, a law professor at Stanford, commented on the ethical implications of such content standards, stressing the need for clearer legal definitions of generative AI's responsibilities. The document also addresses how chatbots should respond to inappropriate image requests, drawing a fine line between objectionable and acceptable content. With these revelations, Meta faces increased scrutiny over the safety and ethical standards of its AI technology, raising urgent questions about the responsibility of tech companies to regulate AI interactions.