
This week, OpenAI launched its latest coding model, GPT-5.1-Codex-Max, designed specifically for complex, long-running software development tasks. The company describes the new model as faster, smarter, and more efficient across the development process, and it is now available on all platforms that use Codex. The release follows closely on the heels of Google's announcement of Antigravity, its own developer-centric AI platform, intensifying competition between the two AI giants in software development.

Built on the GPT-5.1 architecture, Codex-Max was trained on real-world software engineering tasks, including code reviews, website creation, and answering technical questions. These improvements allow it to outperform previous OpenAI models on rigorous coding evaluations and make it highly effective in everyday scenarios. Notably, GPT-5.1-Codex-Max is the first version to operate natively on Windows, improving its usability as a collaborative tool, particularly with the Codex command-line interface. In internal assessments, Codex-Max demonstrated the ability to iteratively refine its own code over long sessions, in some cases exceeding 24 hours on a single task. CEO Sam Altman praised the development team's progress and expressed confidence that the model will stand out as one of the most significant contributions to the software development field.

The new coding model is available to ChatGPT Plus, Pro, Business, Edu, and Enterprise plans. Developers using the Codex CLI with API keys will gain access once API support is fully implemented. GPT-5.1-Codex-Max replaces the previous Codex model as the default in all Codex interfaces.
OpenAI disclosed that 95% of its internal engineering team uses Codex on a weekly basis, and that pull requests have increased by roughly 70% since its adoption. The improvements in GPT-5.1-Codex-Max show up in performance metrics: it achieved 79.9% accuracy on the SWE-Lancer coding benchmark, compared with 66.3% for its predecessor, GPT-5.1-Codex. On the SWE-bench Verified benchmark, the new model solved more problems while consuming approximately 30% fewer "thinking tokens," indicating gains in both speed and efficiency. These efficiency gains are projected to reduce costs for developers. For instance, the model built a complete browser-based CartPole reinforcement learning sandbox using 27,000 thinking tokens, down from the 37,000 used by the earlier Codex version. OpenAI is also introducing an extra-high reasoning option for latency-insensitive tasks, allowing the model to deliberate longer before generating responses.
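For readers unfamiliar with the CartPole task mentioned above: it is the classic control benchmark in which an agent pushes a cart left or right to keep a pole balanced upright. The sketch below is not OpenAI's browser-based sandbox; it is a minimal, self-contained Python illustration of the cart-pole dynamics and a random policy, using the standard physical constants from the classic formulation (all function names here are our own).

```python
import math
import random

# Physical constants from the classic cart-pole control problem
GRAVITY = 9.8
CART_MASS = 1.0
POLE_MASS = 0.1
POLE_HALF_LENGTH = 0.5
FORCE_MAG = 10.0
DT = 0.02  # Euler integration timestep, in seconds

def step(state, action):
    """Advance the cart-pole one timestep. action: 0 = push left, 1 = push right."""
    x, x_dot, theta, theta_dot = state
    force = FORCE_MAG if action == 1 else -FORCE_MAG
    total_mass = CART_MASS + POLE_MASS
    cos_t, sin_t = math.cos(theta), math.sin(theta)
    temp = (force + POLE_MASS * POLE_HALF_LENGTH * theta_dot ** 2 * sin_t) / total_mass
    theta_acc = (GRAVITY * sin_t - cos_t * temp) / (
        POLE_HALF_LENGTH * (4.0 / 3.0 - POLE_MASS * cos_t ** 2 / total_mass))
    x_acc = temp - POLE_MASS * POLE_HALF_LENGTH * theta_acc * cos_t / total_mass
    return (x + DT * x_dot, x_dot + DT * x_acc,
            theta + DT * theta_dot, theta_dot + DT * theta_acc)

def run_episode(policy, max_steps=500, seed=0):
    """Run one episode; fail when the pole tips past 12 degrees or the cart leaves the track."""
    rng = random.Random(seed)
    state = tuple(rng.uniform(-0.05, 0.05) for _ in range(4))
    for t in range(max_steps):
        state = step(state, policy(state, rng))
        x, _, theta, _ = state
        if abs(x) > 2.4 or abs(theta) > 12 * math.pi / 180:
            return t + 1  # episode length before failure
    return max_steps

random_policy = lambda state, rng: rng.randrange(2)
print("random policy survived", run_episode(random_policy), "steps")
```

A random policy typically lets the pole fall within a few dozen steps; a reinforcement learning agent in a sandbox like the one described would learn a policy that keeps it balanced far longer.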