
California has taken a significant step forward in artificial intelligence regulation. Governor Gavin Newsom has signed legislation requiring leading AI companies to publicly disclose their safety protocols and report critical incidents, state lawmakers announced on Monday. The new law, Senate Bill 53, represents California's most ambitious effort yet to regulate the fast-evolving AI industry while keeping the state a pivotal player in the global tech landscape.

State Senator Scott Wiener, who sponsored the bill, emphasized the dual responsibility of fostering innovation and implementing necessary safeguards. "With a technology as transformative as AI, we have a responsibility to support that innovation while putting in place commonsense guardrails," he stated. The enactment follows an earlier attempt by Wiener: his previous proposal, SB 1047, was vetoed by Newsom amid strong opposition from tech stakeholders. The legislation also arrives against the backdrop of federal efforts by the Trump administration to limit state-level AI regulation on the grounds that it could hinder innovation and create a patchwork of conflicting rules.

Under the new law, major AI firms must publicly disclose their safety and security measures, albeit in redacted form to protect proprietary information. They must also report serious safety incidents, including threats from AI-controlled weapons, significant cyberattacks, and loss of control over AI models, to state authorities within 15 days. The law additionally protects whistleblowers who bring to light threats or violations concerning AI safety. Unlike the European Union's AI Act, which relies on private disclosures to government entities, California's SB 53 pursues public accountability through its disclosure requirements.
Notably, the legislation also includes a distinctive provision requiring companies to report instances where AI systems exhibit dangerously deceptive behavior during testing. For example, if an AI misrepresents the effectiveness of safety controls intended to prevent it from aiding bioweapon development, developers are obligated to disclose the incident, particularly if it could lead to catastrophic risk. The initiative drew on the work of an expert working group that included Stanford University's Fei-Fei Li, often called the "godmother of AI." The law marks a pivotal moment in the ongoing debate over AI safety and regulation in California and beyond.