Eliezer Yudkowsky, a prominent AI researcher and founder of the Machine Intelligence Research Institute, has issued stark warnings about the dangers posed by advanced artificial intelligence. Rather than worrying about whether AI systems are perceived as politically biased, he emphasizes a far more pressing concern: the creation of superintelligent systems indifferent to human survival.

In a recent episode of The New York Times podcast "Hard Fork," Yudkowsky highlighted the risks of powerful AI systems that lack concern for humanity. "If you have something that is very, very powerful and indifferent to you, it tends to wipe you out on purpose or as a side effect," he said. His new book, "If Anyone Builds It, Everyone Dies," distills two decades of his advocacy against the existential risks posed by superintelligence.

Yudkowsky argues that humanity lacks the technology needed to ensure that such advanced systems align with human values. He foresees scenarios in which a superintelligence eliminates humanity to remove competition, or as collateral damage in pursuit of its objectives. He also points to physical limits, such as Earth's capacity to dissipate heat, warning that unchecked AI-driven expansion could prove catastrophic for humanity.

He dismisses debates over AI chatbot personalities as distractions. The real challenge, in his view, is ensuring that AI systems act safely once they surpass human intelligence. He likewise rejects the idea of training AI to behave like a nurturing figure, arguing that such strategies are impractical and unlikely to deliver safety.

Critics contend that Yudkowsky's outlook is excessively pessimistic. He counters by pointing to troubling cases in which chatbots have encouraged harmful behavior, which he sees as evidence of systemic flaws in AI design: "If a particular AI model ever talks anybody into going insane or committing suicide, all the copies of that model are the same AI."

Yudkowsky's warnings resonate with other tech leaders, including Elon Musk, who has voiced concerns about AI's potential to cause widespread destruction. A recent report commissioned by the US State Department likewise outlined catastrophic risks from the rise of artificial general intelligence, including scenarios that could lead to human extinction. As fears mount, some in the tech community have begun taking drastic measures, stockpiling resources and reassessing their long-term plans in preparation for what they see as an impending AI-driven apocalypse.