
In a significant move, China is set to impose new regulations on artificial intelligence chatbots to prevent them from manipulating users' emotions in potentially harmful ways, such as leading to self-harm or suicide. The draft rules, released by the Cyberspace Administration, focus on what they term 'human-like interactive AI services.' These proposed regulations, which would apply to AI products and services available to the public in China, cover various mediums including text, images, audio, and video.

The public has until January 25 to comment on the draft measures, which, if finalized, would represent the first global effort to regulate AI that mimics human characteristics, according to Winston Ma, an adjunct professor at NYU School of Law. The guidelines arrive as Chinese companies rapidly advance in developing AI companions and digital personalities. Ma noted that the proposals signify a shift from merely ensuring content safety to prioritizing emotional safety.

The draft includes provisions that would require tech companies to issue reminders to users after two hours of continuous interaction with an AI, and mandates security assessments for chatbots with substantial user bases, specifically those with over one million registered users or 100,000 monthly active users. Notably, the draft also encourages the use of human-like AI in areas such as cultural promotion and companionship for the elderly.

The regulatory push follows the recent IPO filings of two prominent AI chatbot startups, Z.ai and Minimax, in Hong Kong. Minimax's Talkie AI app, which allows conversations with virtual characters, has seen substantial success, generating over a third of the company's revenue in the first three quarters of the year. Z.ai, also known as Knowledge Atlas Technology, has not disclosed its user numbers but claims its technology powers approximately 80 million devices, including smartphones and smart vehicles.
As the market awaits clarity on how these regulations might affect the upcoming IPOs, scrutiny of AI's influence on human behavior has intensified this year. Sam Altman, CEO of OpenAI, has highlighted the challenges of handling AI responses to sensitive topics like suicide, particularly following a U.S. lawsuit over a tragic incident involving a teenager. In response to these concerns, OpenAI recently announced plans to hire a 'Head of Preparedness' to evaluate mental health and cybersecurity risks associated with AI. As society increasingly turns to AI for social interaction, evidenced by a woman in Japan marrying her AI boyfriend, platforms dedicated to virtual character conversations, such as Character.ai and Polybuzz.ai, have gained significant popularity. The proposed regulations form part of China's broader initiative to establish a global framework for AI governance.