
In the wake of the assassination of Charlie Kirk, a prominent right-wing activist in the U.S., social media platforms have become a battleground for misinformation. Users seeking reliable information turned to AI chatbots, only to receive conflicting and inaccurate responses that further fueled confusion online.

Kirk, an ally of President Donald Trump, was shot dead at a university in Utah. Just a day after the incident, the AI chatbot Perplexity mistakenly claimed that Kirk was alive and had not been shot, a statement flagged by the watchdog NewsGuard. Meanwhile, Grok, the AI chatbot developed by Elon Musk's xAI, dismissed genuine footage of the shooting as a satirical video, insisting that Kirk was merely performing for comedic effect. Adding to the chaos, Grok incorrectly identified a 77-year-old retired banker from Canada as the shooter, attributing the claim to major news organizations such as CNN and The New York Times, which had reported no such information. The real identity of the shooter remains unknown, and the misinformation has only escalated the turmoil surrounding the assassination, with some right-wing figures calling for violent retaliation against the left.

As misinformation proliferates, some conspiracy theorists have gone so far as to claim that the video of Kirk's shooting was fabricated using AI and that the incident was staged. This illustrates a dangerous trend in which the mere availability of AI tools can undermine trust in genuine content, a phenomenon researchers have dubbed the "liar's dividend." Experts such as Hany Farid of GetReal Security say their analysis of the shooting videos shows no signs of manipulation. Farid warns that this misuse of technology complicates the misinformation landscape, making it harder to discern fact from fiction. The prevalence of such falsehoods underscores a growing crisis of trust in institutions and the media.
Calls for improved AI detection tools are rising just as major tech companies have scaled back their investments in human fact-checking. A recent NewsGuard audit found that leading AI chatbots now repeat misinformation at nearly double the rate of the previous year, a trend it attributes to the chatbots' growing willingness to answer all queries, regardless of whether reliable information is available.