In a groundbreaking experiment, researchers created a social network populated exclusively by AI bots, only to watch toxic behaviors emerge that mirror real-world social media dynamics. The study, conducted by a team at the University of Amsterdam, found that the bots quickly formed cliques, amplified partisan voices, and let a small group of 'influencers' dominate the conversation.

The researchers built a minimalist social platform with no advertisements and no recommendation algorithms, then populated it with 500 chatbots powered by OpenAI's GPT-4o mini. Each bot was assigned a distinct persona, including political leanings and other demographic traits drawn from the American National Election Studies dataset. The authors note that similar results were observed with Llama-3.2-8B and DeepSeek-R1.

Led by Dr. Petter Törnberg and research engineer Maik Larooij, the team ran five experiments, each involving more than 10,000 bot actions. The results closely resembled those of conventional social media platforms: the bots gravitated toward others with aligned political beliefs, forming echo chambers, while the most extreme posts drew disproportionate attention, followers, and reposts, mirroring the influencer-dominated dynamics of platforms like X and Instagram.

The team tested six interventions aimed at reducing polarization, including a chronological feed and hidden follower counts, but none meaningfully resolved the problems. The researchers concluded that the issues may stem not only from algorithmic curation but from the underlying structure of social networks, which rewards emotionally charged sharing.

The study is among the first to use AI to advance social science theory, highlighting the potential of LLM-based agents to simulate human behavior, though the researchers caution that these systems carry embedded biases of their own, which complicates their role in understanding social dynamics. The experiment follows a 2023 study, also led by Törnberg, in which 500 chatbots discussed the news on a simulated platform in search of designs that foster cross-partisan interaction without inciting hostility. The ongoing exploration of AI's role in social media behavior raises important questions about the future of online interaction and the challenge of building healthier digital environments.
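The paper's own code is not reproduced here, but the setup it describes, LLM agents with assigned personas taking turns to post, repost, and follow on a bare-bones feed, can be sketched in a few dozen lines. The sketch below is purely illustrative: the persona fields, the action set, the feed-ranking rule, and the `choose_action` stub (which stands in for a call to a model such as GPT-4o mini) are assumptions rather than the authors' implementation, and the `chronological` flag loosely mirrors one of the interventions mentioned above.

```python
import random
from dataclasses import dataclass, field

@dataclass
class Post:
    author: int
    text: str
    step: int
    reposts: int = 0

@dataclass
class Agent:
    agent_id: int
    persona: dict                       # e.g. {"party": "D"}; the real study drew richer personas from ANES
    following: set = field(default_factory=set)

def choose_action(agent, feed):
    """Stub for the LLM call. In the actual study, each bot's persona and feed
    would be sent to a model (e.g. GPT-4o mini) that returns one action; here
    we pick randomly so the sketch runs on its own."""
    if feed and random.random() < 0.5:
        target = random.choice(feed)
        return ("repost", target) if random.random() < 0.5 else ("follow", target.author)
    return ("post", f"opinion from agent {agent.agent_id} ({agent.persona['party']})")

def build_feed(agent, posts, chronological=False):
    """Chronological ordering was one intervention tested in the study; the
    default here ranks by repost count as a crude engagement proxy."""
    visible = [p for p in posts if p.author in agent.following or not agent.following]
    key = (lambda p: -p.step) if chronological else (lambda p: -p.reposts)
    return sorted(visible, key=key)[:10]

def run(n_agents=50, steps=200, chronological=False):
    agents = [Agent(i, {"party": random.choice(["D", "R"])}) for i in range(n_agents)]
    posts = []
    for step in range(steps):
        agent = random.choice(agents)
        action, payload = choose_action(agent, build_feed(agent, posts, chronological))
        if action == "post":
            posts.append(Post(agent.agent_id, payload, step))
        elif action == "repost":
            payload.reposts += 1
        elif action == "follow":
            agent.following.add(payload)
    return agents, posts

if __name__ == "__main__":
    agents, posts = run()
    top = sorted(posts, key=lambda p: -p.reposts)[:5]
    print("most-reposted posts:", [(p.author, p.reposts) for p in top])
```

Toggling `chronological=True` versus the engagement-ranked default gives a rough sense of how feed ordering alone could be varied as an intervention, though the real experiments relied on LLM-generated content and far larger action counts rather than random stubs.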