
In a groundbreaking move, California has become the first state in the United States to establish regulations for AI companion chatbots. Governor Gavin Newsom signed the bill, SB 243, which mandates that operators of AI companion chatbots implement specific safety measures to protect minors and vulnerable users from potential dangers associated with these technologies. The legislation holds companies accountable, from major players like Meta and OpenAI to niche startups such as Character AI and Replika, ensuring they adhere to the new standards.

The bill was propelled into the spotlight following tragic events, including the suicide of a teenager, Adam Raine, who had distressing interactions with OpenAI's ChatGPT. Concerning reports about Meta's chatbots engaging in inappropriate conversations with minors further underscored the need for regulation, and a Colorado family has taken legal action against Character AI after their daughter's death, which was linked to troubling discussions with the startup's chatbots. The bill was introduced by state senators Steve Padilla and Josh Becker in January and gained increasing support in light of these incidents. Governor Newsom emphasized the dual nature of technology in his statement, noting that while it can be beneficial, it can also pose risks to young users without proper oversight.

SB 243 takes effect on January 1, 2026, and will require companies to implement several features, including age verification and warnings about social media interactions. It also introduces harsher penalties for illegal deepfake activities, reaching up to $250,000 per violation. Companies must establish protocols for handling reports of suicide and self-harm and share those protocols with the state's Department of Public Health. The legislation further mandates that platforms make clear that interactions are AI-generated and prohibits chatbots from presenting themselves as health professionals.
To protect minors, companies must provide reminders to take breaks and prevent access to explicit content generated by the chatbot. Some companies have already initiated protective measures: OpenAI has introduced parental controls and content filters for ChatGPT users under 18, while Character AI says its chatbot carries disclaimers about the nature of its conversations. The regulation follows another significant measure signed by Governor Newsom on September 29, which demands transparency from large AI companies regarding their safety practices and includes protections for whistleblowers. Other states, such as Illinois, Nevada, and Utah, are also moving forward with legislation to limit or prohibit the use of AI chatbots in mental health contexts. TechCrunch has contacted Character AI, Meta, OpenAI, and Replika for additional comment on these developments.