
On Monday, Sam Altman, OpenAI's CEO and a shareholder in Reddit, described a striking realization: the rise of bots has made it hard to tell whether social media posts are genuinely written by humans. The thought struck him while he was reading the r/Claudecode subreddit, where users were enthusiastically discussing OpenAI Codex, a programming service launched to compete with Anthropic's Claude Code. The subreddit has recently been flooded with posts from self-identified Codex users celebrating their switch from other tools; one user jokingly asked whether it was possible to switch to Codex without making a post about it on Reddit.

Altman admitted the trend made him question the authenticity of the posts he encountered. "I've had the strangest experience reading this: I assume it's all fake/bots, even though in this case, I know Codex growth is really strong and the trend here is real," he wrote on X.

In his analysis, Altman suggested several contributing factors. Real users may have picked up the communication styles typical of large language models (LLMs), and highly engaged online communities tend to move in correlated ways. He also pointed to the pressure on social media platforms to boost engagement, coupled with monetization incentives for content creators, as drivers of unusual user behavior. Finally, he raised the possibility of astroturfing, in which posts are artificially generated to create a false impression, which could further muddy the waters around genuine user interaction.

Despite the lack of concrete evidence, Altman's comments resonate with a growing sentiment that social media has become increasingly artificial. He added that fandoms can behave erratically, especially in environments dominated by vocal critics.
After the release of GPT-5, OpenAI's own community turned critical, with users posting complaints about various aspects of the new model and positive feedback noticeably declining. Altman noted, "The net effect is that AI-driven platforms like Twitter and Reddit feel quite fake in a way they didn't a couple of years ago."

The observation raises broader questions about the role of LLMs in the digital landscape. A report from data security firm Imperva found that more than half of internet traffic in 2024 was non-human, a trend largely attributed to bots, and estimates suggest there are hundreds of millions of bots on X alone.

Speculation has emerged that Altman's comments could hint at OpenAI's interest in developing its own social media platform to contend with giants like X and Facebook. If such a platform were in the works, one might wonder whether it could maintain a bot-free environment. Notably, studies have shown that even networks composed entirely of bots end up forming cliques and echo chambers, suggesting that the challenge of fostering genuine interaction on social media may persist regardless of the nature of the participants.