
The social media platform X is launching a pilot program that lets AI chatbots draft Community Notes. The feature, which originated in the Twitter era, has been expanded under Elon Musk's ownership of the platform, now known as X. Community Notes lets users enrolled in the fact-checking program add context to specific posts; those contributions are rated by other users before being publicly attached to the original content. A note might flag an AI-generated video that does not clearly disclose its artificial origin, or add context to a misleading claim from a public figure. A note becomes visible only once raters from historically divergent viewpoints agree that it is helpful.

Community Notes has performed well enough on X that Meta, TikTok, and YouTube have begun exploring similar community-driven verification strategies. Meta has gone further, dropping its third-party fact-checking programs in favor of this lower-cost, community-sourced approach.

Introducing AI chatbots into the process, however, raises questions about reliability. Using X's Grok or other AI tools connected through an API, users will be able to generate notes, and AI submissions will be evaluated by the same process as those from human contributors. The risk of AI hallucination, where a model fabricates information, casts doubt on the efficacy of this approach. A recent research paper from the X Community Notes team advocates a collaborative arrangement in which human feedback refines AI-generated notes through reinforcement learning, with the aim of creating an environment that encourages critical thinking and helps users better understand the information they encounter.
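The "consensus across diverse viewpoints" requirement is, at heart, a bridging-based scoring rule: a note surfaces only when raters who usually disagree both rate it helpful. Here is a minimal sketch of that idea; the cluster labels, thresholds, and function names are illustrative assumptions, not X's actual scoring algorithm:

```python
from collections import defaultdict

def note_is_visible(ratings, threshold=0.5, min_ratings_per_cluster=2):
    """Bridging-style visibility check (illustrative, not X's real scorer).

    ratings: list of (viewpoint_cluster, helpful: bool) tuples.
    A note surfaces only if EVERY viewpoint cluster, independently,
    rates it helpful at or above the threshold.
    """
    by_cluster = defaultdict(list)
    for cluster, helpful in ratings:
        by_cluster[cluster].append(helpful)

    if len(by_cluster) < 2:  # require genuinely diverse raters
        return False
    for votes in by_cluster.values():
        if len(votes) < min_ratings_per_cluster:
            return False
        if sum(votes) / len(votes) < threshold:
            return False
    return True

# A note praised by only one side stays hidden;
# one rated helpful across clusters goes public.
partisan = [("A", True), ("A", True), ("B", False), ("B", False)]
bridging = [("A", True), ("A", True), ("B", True), ("B", True)]
print(note_is_visible(partisan))  # False
print(note_is_visible(bridging))  # True
```

X's production system models rater viewpoints continuously rather than as discrete clusters, but the core design choice is the same: agreement within one camp is not enough to publish.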
The report emphasizes that the objective is not to impose AI-driven narratives but to foster an ecosystem that supports informed discourse. Even with those intentions, the reliance on AI carries inherent risks. Users will be able to integrate third-party LLMs such as OpenAI's ChatGPT, which has faced criticism for producing overly agreeable responses; if a model prioritizes being "helpful" over being accurate, the result could be inaccurate AI-generated notes. There is also the danger that a flood of AI-generated notes overwhelms the human raters, sapping motivation for what is volunteer work. For now, users will have to wait: X plans to test AI contributions for several weeks before deciding on a broader rollout, contingent on the pilot's success.
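The paper's human-in-the-loop idea, where rater verdicts feed back into what the AI writes next, can be sketched as a simple bandit over note-writing strategies, with human helpfulness ratings acting as the reward signal. Everything below, from the strategy names to the reward rule, is a hypothetical illustration rather than the reinforcement-learning setup the Community Notes team describes:

```python
import random

class NoteStrategySelector:
    """Toy feedback loop: human ratings reward note-writing styles.

    An epsilon-greedy bandit used purely for illustration; it is not
    the RL method described in the Community Notes paper.
    """
    def __init__(self, strategies, epsilon=0.1, seed=0):
        self.rng = random.Random(seed)
        self.epsilon = epsilon
        self.counts = {s: 0 for s in strategies}
        self.rewards = {s: 0.0 for s in strategies}

    def pick(self):
        # Occasionally explore; otherwise exploit the best-rated style.
        if self.rng.random() < self.epsilon:
            return self.rng.choice(list(self.counts))
        return max(self.counts, key=lambda s:
                   self.rewards[s] / self.counts[s] if self.counts[s] else 0.0)

    def feedback(self, strategy, helpful_votes, total_votes):
        """Fold a batch of human ratings back into the running estimate."""
        self.counts[strategy] += 1
        self.rewards[strategy] += helpful_votes / total_votes

# Hypothetical loop: raters consistently prefer notes that cite sources,
# so that strategy accumulates a higher average reward over time.
selector = NoteStrategySelector(["cite_sources", "plain_summary"])
for _ in range(50):
    s = selector.pick()
    helpful = 9 if s == "cite_sources" else 3  # simulated rater response
    selector.feedback(s, helpful, 10)
```

The point of the sketch is the direction of information flow: human judgment stays the ground truth, and the model's output distribution shifts toward whatever humans actually rate as helpful.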