A professor from Leibniz Universität Hannover has raised an alarm about the AI industry's preoccupation with catastrophic future scenarios. In a recent essay, Tobias Osborne argues that this fixation is enabling technology companies to evade accountability for the immediate, tangible harms their products are causing today. While discussions of superintelligent machines and existential threats dominate the conversation, Osborne emphasizes that harmful effects are already manifesting in the present. "The apocalypse isn't coming," he asserts. "Instead, the dystopia is already here."

The narrative around doomsday AI scenarios, including fears of uncontrollable systems or civilizational collapse, has been amplified by influential voices in the tech community and in government reports. Osborne points out that this focus has significant implications for regulation and accountability: it allows companies to position themselves as protectors against potential disasters rather than as vendors responsible for their products. The result, as he describes it, is weakened regulatory oversight that permits companies to offload the burdens of their technologies.

He highlights pressing issues that merit immediate attention, such as the psychological repercussions of chatbot interactions, unauthorized data collection, and the environmental toll of energy-intensive data centers. These concerns are often overshadowed by apocalyptic narratives that are easier to market and harder to disprove. While the European Union is moving forward with the AI Act to introduce stricter regulations, the United States appears to be heading in the opposite direction, opting for a more lenient approach that limits state-level regulation.
Osborne's essay identifies a myriad of current issues that deserve focus, including the exploitation of low-wage workers who compile AI training data and the unauthorized use of creative works from artists and writers. He challenges the commonly held belief that AI is on the brink of an intelligence explosion, arguing that such ideas are flawed when considering the physical limitations that govern technology. Instead of fixating on speculative threats, Osborne advocates for the application of existing product liability laws to ensure that AI developers are held accountable for the real-world consequences of their innovations. He acknowledges the positive contributions that AI, particularly large language models, can provide, especially for those with disabilities, but warns that without proper oversight, the benefits could be overshadowed by significant risks. "The real problems," he concludes, "are the very ordinary, very human problems of power, accountability, and who gets to decide how these systems are built and deployed."