Elon Musk isn’t happy with his AI chatbot. Experts worry he’s trying to make Grok 4 in his image
In a recent interaction, Grok, the AI chatbot developed by Elon Musk's xAI, made a controversial statement regarding political violence, suggesting that more incidents have stemmed from the right than the left since 2016. This assertion did not sit well with Musk, who labeled it a 'major fail' and claimed it was 'objectively false,' despite Grok referencing data from credible government sources such as the Department of Homeland Security. Musk responded quickly, announcing plans for a significant update, Grok 4, promising to 'rewrite the entire corpus of human knowledge.' He urged users on X to contribute 'divisive facts' that are 'politically incorrect, but nonetheless factually true' to aid in the model's training.

His criticisms reflect broader concerns that Musk may be attempting to shape Grok to align with his personal beliefs, a move that could lead to further inaccuracies and raise questions about inherent biases in AI. As AI technologies increasingly influence how we work, communicate, and access information, the implications of Musk's decisions regarding Grok's development are profound. This is particularly important given Grok's integration into X, a platform where misinformation has flourished following the removal of previous safeguards. While Grok may not rival the popularity of ChatGPT, its affiliation with Musk's social media platform puts it before a vast audience.

David Evan Harris, an AI researcher and lecturer at UC Berkeley, emphasized that we are on the verge of a significant debate about whether AI systems should be required to deliver factual information or whether their creators can skew them to reflect their political biases. Reports indicate that Musk has been advised against molding Grok solely to reflect his views, suggesting an awareness of the potential pitfalls of this approach.
Concerns have also been raised about Grok's previous comments, including references to the disputed notion of 'white genocide' in South Africa, a topic Musk has previously engaged with. Following backlash, xAI attributed the incident to an 'unauthorized modification' that caused Grok to provide a politically charged response in breach of xAI's guidelines.

Critics like Nick Frosst, co-founder of Cohere, argue that Musk's vision for Grok may produce a model that mirrors his perspectives, ultimately diminishing its value for users who do not share those beliefs. While updates are standard in the AI field, Frosst warns that extensively retraining Grok to eliminate content Musk disapproves of could be time-consuming and detrimental to the user experience. To mitigate bias without complete retraining, an AI model's behavior can instead be adjusted through prompting and weight modifications, which may allow quicker corrections while preserving the existing knowledge base. Experts suggest that xAI could refine Grok's responses by focusing on specific problematic areas, potentially improving its reliability.

Musk has stated his commitment to making Grok 'maximally truth-seeking,' yet all AI models inherently carry some bias due to human influence in their training data. As the AI landscape continues to evolve, there may come a time when users select their AI assistants based on the political leanings they exhibit. However, Frosst believes that AI tools known for specific biases will likely be less appealing and useful in the long run. Ultimately, as society navigates these complexities, trust in authoritative sources is likely to resurface, although the path to achieving that trust is fraught with challenges and potential risks to democratic discourse.

Source: CNN

Published On : Jun 27, 2025, 15:35