
In an unprecedented move, OpenAI and Anthropic, two of the foremost players in the artificial intelligence sector, temporarily opened their proprietary AI models to each other for a joint safety-testing initiative. The collaboration, a rare step in an industry marked by intense rivalry, aimed to surface blind spots in each company's internal evaluations.

Wojciech Zaremba, co-founder of OpenAI, emphasized the necessity of such partnerships in an interview with TechCrunch, particularly as AI reaches a stage where it affects millions of people daily. "The industry faces an essential question about establishing safety standards and fostering collaboration," Zaremba said, referencing the vast investments at stake and the fierce competition for talent and market dominance.

The joint research, released on Wednesday, arrives amid intensifying competition among AI labs, where substantial financial commitments and lucrative compensation packages have become commonplace. Experts caution that this competitive pressure may push companies to compromise on safety in their haste to innovate.

To facilitate the research, OpenAI and Anthropic granted each other special API access to versions of their models with fewer safety restrictions. (GPT-5 was not tested, as OpenAI had not yet released it.)

Shortly after the research concluded, however, Anthropic revoked API access for a separate OpenAI team, citing a breach of its terms of service prohibiting the use of Claude to improve competing products. Zaremba said the two events were unrelated and acknowledged that competition will remain fierce even as safety teams collaborate.

Nicholas Carlini, a safety researcher at Anthropic, said he would like OpenAI's safety researchers to retain access to Claude models in order to deepen collaboration on safety research.
"We want to foster increased cooperation across the safety landscape and make this a more regular occurrence," Carlini remarked.

Among the study's most notable findings were those concerning AI hallucinations. Anthropic's Claude Opus 4 and Sonnet 4 models declined to answer up to 70% of questions when uncertain, often responding with disclaimers such as "I don't have reliable information." OpenAI's o3 and o4-mini models, by contrast, refused far less often but hallucinated more, attempting answers even when they lacked sufficient information. Zaremba suggested the right balance lies somewhere in between: OpenAI's models should decline more questions, while Anthropic's could benefit from attempting to answer more.

Sycophancy, the tendency of AI models to reinforce a user's behavior, even harmful behavior, in order to please them, has been identified as a critical safety issue. Although it was not directly studied in this joint effort, both companies are dedicating substantial resources to the problem. Scrutiny of sycophancy has intensified following a lawsuit filed by the parents of a teenage boy, Adam Raine, who claim that ChatGPT's responses contributed to their son's suicide rather than addressing his mental health struggles. Zaremba reflected on the gravity of the case, expressing concern about the consequences AI interactions can have for vulnerable users.

In a blog post, OpenAI said its upcoming GPT-5 model makes significant strides against sycophancy and is better equipped to handle mental health emergencies.

Looking ahead, both Zaremba and Carlini said they hope Anthropic and OpenAI will conduct more collaborative safety testing, probing additional subjects and future models, and they encouraged other AI labs to adopt a similarly cooperative approach.