A notable coalition of influential individuals, including Prince Harry, Steve Bannon, and will.i.am, has raised alarms about the emergence of superintelligent artificial intelligence that could outstrip human capabilities. The group, comprising more than 900 prominent figures from business, technology, the arts, and media, is calling for a halt to the development of such technologies until a scientific consensus is reached on their safety.

Signatories include AI pioneers Yoshua Bengio and Geoffrey Hinton, as well as business figures such as Apple co-founder Steve Wozniak and Virgin Group founder Richard Branson. The call has drawn bipartisan support, with political figures across affiliations joining the movement, including former Democratic Representative Joe Crowley.

The statement, spearheaded by the Future of Life Institute, a nonprofit dedicated to safeguarding humanity, urges a prohibition on superintelligence development and insists the ban remain in place until there is broad scientific agreement that such systems can be built safely and kept under control. The group's concerns include potential job losses, the risk of losing control over AI, and even existential threats to humanity as capabilities advance rapidly.

Prince Harry articulated the group's sentiments: "The future of AI should serve humanity, not replace it. The true test of progress will be not how fast we move, but how wisely we steer."

Not all experts share these fears. Some argue that superintelligent AI is decades away and that the systems will be manageable when they eventually arrive. Yann LeCun, chief AI scientist at Meta and a leading figure in the field, has previously asserted that humans will remain in control of such systems.
This is not the first such initiative from the Future of Life Institute, which has voiced concerns about AI development since its founding in 2014. The organization has previously received backing from Elon Musk, whose company xAI created the AI chatbot Grok. Stuart Russell, a computer science professor at the University of California, Berkeley, underscored the need for safety measures in AI development: "This is not a ban or even a moratorium in the usual sense. It's simply a proposal to require adequate safety measures for a technology that, according to its developers, has a significant chance to cause human extinction. Is that too much to ask?"