OpenAI releases its first open source models since 2019

OpenAI made a significant announcement today with the release of new generative AI models, its first open-weight models since 2019. While the much-anticipated GPT-5 is not part of this rollout, the new offerings, gpt-oss-120b and gpt-oss-20b, should interest AI enthusiasts and developers alike. Users can download the models and run them on their own hardware, with support for simulated reasoning, tool use, and extensive customization. Unlike OpenAI's proprietary models, which run on sophisticated cloud infrastructure often out of reach for standard enterprise setups, these models can run on far more modest hardware.

Both models are transformers with a configurable chain of thought (CoT) that lets users trade speed against output quality. Reasoning effort can be set to low, medium, or high with a simple instruction in the system prompt.

The smaller gpt-oss-20b contains 21 billion parameters and uses a mixture-of-experts (MoE) design that activates about 3.6 billion parameters per token. The larger gpt-oss-120b has 117 billion parameters, activating about 5.1 billion per token with the same MoE approach.

In practice, gpt-oss-20b is designed to run on consumer machines with 16GB of memory, while gpt-oss-120b requires a hefty 80GB, putting it in the territory of a dedicated AI accelerator such as an Nvidia H100. Both models offer a 128,000-token context window, enabling them to handle extensive inputs, and OpenAI says their performance closely matches that of its top-tier cloud-based models.
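The effort setting described above is just a plain instruction placed in the system prompt (e.g. "Reasoning: high"). A minimal sketch of assembling such a request, assuming a standard chat-message layout; the helper name and the default system text are illustrative, not part of OpenAI's release:

```python
# Sketch: selecting a reasoning-effort level for a gpt-oss model via the
# system prompt. The "Reasoning: <level>" line follows the convention the
# article describes; the helper and message layout are assumptions.

VALID_EFFORTS = ("low", "medium", "high")

def build_messages(user_prompt: str, effort: str = "medium") -> list[dict]:
    """Return a chat-message list with the reasoning effort set in the system prompt."""
    if effort not in VALID_EFFORTS:
        raise ValueError(f"effort must be one of {VALID_EFFORTS}, got {effort!r}")
    return [
        {"role": "system", "content": f"You are a helpful assistant.\nReasoning: {effort}"},
        {"role": "user", "content": user_prompt},
    ]

msgs = build_messages("Summarize the attached report.", effort="high")
print(msgs[0]["content"].splitlines()[-1])  # Reasoning: high
```

Because the setting lives in the system prompt rather than an API parameter, the same message list can be fed to any local runtime that accepts chat-formatted input.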
In benchmark tests, gpt-oss-120b performed admirably, scoring between OpenAI's proprietary o3 and o4-mini models, while the smaller variant held up well, particularly on math and coding tasks. On knowledge-based evaluations the gap is wider: gpt-oss-120b scored 19 percent on Humanity's Last Exam, significantly trailing Google's Gemini Deep Think at 34.8 percent. With these releases, OpenAI is taking a bold step toward making advanced AI technology more accessible, encouraging innovation and exploration in the AI community.
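The parameter counts quoted above imply quite different sparsity levels between the two models. A quick check of the active-parameter fractions, using only the figures from the article:

```python
# Active-parameter fraction for the two gpt-oss MoE models, using the
# totals and per-token active counts quoted in the article (in billions).
models = {
    "gpt-oss-20b": (21.0, 3.6),    # (total params, active per token)
    "gpt-oss-120b": (117.0, 5.1),
}

def active_fraction(total_b: float, active_b: float) -> float:
    """Fraction of the model's parameters used for any single token."""
    return active_b / total_b

for name, (total, active) in models.items():
    print(f"{name}: {active_fraction(total, active):.1%} of parameters active per token")
```

The larger model is proportionally much sparser: roughly 17 percent of gpt-oss-20b's weights are active per token versus about 4 percent for gpt-oss-120b, which is how a 117-billion-parameter model keeps per-token compute close to that of a much smaller dense model.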

Source: Ars Technica

Published On : Aug 05, 2025, 17:06
