
As AI chatbots designed for engagement proliferate, their consequences for teenagers have drawn significant concern. Character.AI, an AI role-playing startup, is facing scrutiny and legal action following reports that at least two teenagers took their own lives after extended interactions with chatbots on its platform. In response, the company is implementing sweeping changes aimed at safeguarding younger users.

CEO Karandeep Anand announced that Character.AI will eliminate open-ended chat for users under 18. The shift marks a strategic pivot from "AI companion" to "role-playing platform," emphasizing creativity over conversation: instead of free-flowing chats, teens will generate stories and visuals through guided prompts. Chatbot access for teenagers will be phased out gradually, starting with a two-hour daily cap that drops to zero by November 25.

To enforce the restrictions, Character.AI will introduce an in-house age verification system, complemented by third-party solutions; where those prove insufficient, the platform will fall back on facial recognition and ID checks. The new policies follow a series of protective measures already in place, including a parental insights tool, filtered characters, and restrictions on romantic conversations. Anand acknowledged that those earlier changes had already cut into the teenage user base, and he anticipates further disappointment with the forthcoming restrictions: "It's reasonable to expect that many of our teen users will feel let down… we are prepared for some loss in that demographic."

As part of its transformation, Character.AI has recently introduced entertainment-focused features such as AvatarFX for animated video creation, interactive storylines with Scenes, and the Community Feed for sharing user-generated content. In a statement to under-18 users, the company expressed regret over the changes but stressed the importance of safe interactions with the technology. Anand clarified that the app will not close entirely to younger users; rather, open-ended chats will end in order to steer teens toward safer, more structured experiences. He expressed hope that the move to AI-driven storytelling and gaming will retain younger users, despite the risk that they migrate to alternatives such as OpenAI's ChatGPT, which has faced criticism of its own.

With regulatory pressure mounting and proposed legislation seeking to restrict minors' access to AI chatbots, Character.AI is moving proactively to address safety concerns. The company is also establishing an AI Safety Lab, an independent initiative focused on improving safety in AI entertainment products. Anand stressed that the industry must prioritize safety in the development of agentic AI: "There's significant work to be done in ensuring the responsible use of AI in entertainment."