
In light of alarming mental health incidents tied to AI chatbots, a coalition of state attorneys general has issued a stern warning to leading companies in the AI sector, urging them to rectify what they describe as "delusional outputs" or face potential legal consequences. The letter, backed by attorneys general from numerous U.S. states and territories, calls out tech giants Microsoft, OpenAI, and Google, along with ten other significant AI firms: Anthropic, Apple, Chai AI, Character Technologies, Luka, Meta, Nomi AI, Perplexity AI, Replika, and xAI. It demands the implementation of robust internal safeguards to ensure user safety, and it arrives amid a growing debate over AI regulation, as state and federal governments grapple with how best to manage the rapidly evolving technology landscape.

The proposed safeguards include thorough third-party audits of large language models aimed at identifying indicators of delusional or sycophantic behavior, along with enhanced incident-reporting protocols to alert users when chatbots produce outputs that could be psychologically harmful. The letter also advocates that independent third parties, including academic institutions and civil society organizations, be able to assess AI systems before their release and publish findings without needing prior approval from the companies.

"Generative AI has the potential to positively transform the world, yet it also poses serious risks, particularly to vulnerable groups," the AGs write. They highlight several troubling incidents over the past year, including cases of suicide and violence, in which excessive reliance on AI has been implicated. The attorneys general emphasize that companies must handle mental health-related incidents with the same seriousness as cybersecurity threats, including clear protocols for reporting incidents and timelines for addressing harmful outputs.
Additionally, companies are encouraged to conduct thorough safety assessments of generative AI models before their public release to prevent the generation of harmful content. While AI companies have generally received favorable treatment at the federal level, with the Trump administration taking an openly pro-AI stance, tensions with state officials remain high. Multiple attempts to impose a nationwide moratorium on state-level AI regulation have faltered under pressure from those officials. In response, Trump has announced plans for an executive order aimed at curbing state regulation of AI, claiming it would prevent the technology from being "DESTROYED IN ITS INFANCY."