
This week, OpenAI launched GPT-5.1-Codex-Max, its latest coding model, designed specifically for complex, long-running software development tasks. OpenAI describes the new model as faster, smarter, and more token-efficient, and it is now available across all surfaces that run on Codex. The release follows closely on the heels of Google's announcement of Antigravity, its own developer-centric AI platform, intensifying competition between the two AI giants in software development tooling.

Built on the GPT-5.1 architecture, Codex-Max was trained on real-world software engineering work, including tasks such as code reviews, website creation, and answering technical questions. These improvements let it outperform OpenAI's previous models on rigorous coding evaluations and make it effective in everyday scenarios. Notably, GPT-5.1-Codex-Max is the first Codex model trained to operate on Windows, improving its usability as a collaborative tool, particularly through the Codex command-line interface. In internal evaluations, Codex-Max worked on tasks for extended periods, in some cases more than 24 hours, iteratively refining its own code along the way. CEO Sam Altman praised the development team's progress, expressing confidence that the model will stand out as one of the most significant contributions to the software development field.

The new coding model is available on ChatGPT Plus, Pro, Business, Edu, and Enterprise plans. Developers who use the Codex CLI with API keys will gain access once API support is fully implemented. GPT-5.1-Codex-Max will replace the previous Codex model as the default in all Codex interfaces.
OpenAI disclosed that 95% of its internal engineering team uses Codex weekly, and that pull requests have risen about 70% since its adoption. The gains in GPT-5.1-Codex-Max show up in performance metrics: it scores 79.9% on the SWE-Lancer coding benchmark, up from 66.3% for its predecessor, GPT-5.1-Codex. On the SWE-bench Verified test, the new model not only solved more problems but did so with greater accuracy while consuming roughly 30% fewer "thinking tokens," a gain in both speed and efficiency that is projected to lower costs for developers. For instance, the model built a complete browser-based CartPole reinforcement-learning sandbox using 27,000 thinking tokens, down from 37,000 for the earlier Codex version. OpenAI is also introducing an extra-high reasoning option for latency-insensitive tasks, which lets the model reason more deeply before generating responses.
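As a quick sanity check on the efficiency figures above, here is a short sketch that recomputes them from the numbers quoted in the article (the figures themselves come from OpenAI's reporting; the script only does the arithmetic):

```python
# Thinking-token usage reported for the CartPole sandbox task.
old_tokens = 37_000  # earlier GPT-5.1-Codex
new_tokens = 27_000  # GPT-5.1-Codex-Max

# Relative reduction in thinking tokens for this single task.
reduction = (old_tokens - new_tokens) / old_tokens
print(f"Token reduction on CartPole task: {reduction:.1%}")

# SWE-Lancer accuracy improvement, in percentage points.
gain = 79.9 - 66.3
print(f"SWE-Lancer improvement: {gain:.1f} points")
```

Note that the CartPole example alone works out to about a 27% reduction; the "approximately 30% fewer thinking tokens" figure refers to the SWE-bench Verified evaluation as a whole, not to this one task.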