OpenAI has unveiled two new open-source reasoning AI models, gpt-oss-120b and gpt-oss-20b, the company’s first open-source models since GPT-2, released over five years ago. Both are available for free on Hugging Face and are aimed at developers and researchers looking to build their own solutions based on open models.
The two models differ in capability and hardware requirements:
- gpt-oss-120b is the larger and more capable model and can run on a single NVIDIA H100 GPU;
- gpt-oss-20b is a lightweight version that can run on devices with 16 GB of memory (a minimal loading sketch follows this list).
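As a rough illustration of how the smaller model can be used, the sketch below loads it through the Hugging Face Transformers library. This is a minimal example under stated assumptions: it presumes the model id `openai/gpt-oss-20b` as listed on Hugging Face, a recent `transformers` release with chat-message pipeline support, and enough local memory; it is not an official quickstart.

```python
# Minimal sketch: running gpt-oss-20b via Hugging Face Transformers.
# Assumes the Hugging Face model id "openai/gpt-oss-20b" and a recent
# transformers release; memory requirements (~16 GB) are per the article.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="openai/gpt-oss-20b",
    torch_dtype="auto",   # let the library choose an appropriate precision
    device_map="auto",    # place weights on GPU/CPU automatically
)

messages = [
    {"role": "user", "content": "Summarize mixture-of-experts in one paragraph."}
]
outputs = generator(messages, max_new_tokens=256)
print(outputs[0]["generated_text"][-1])  # last message is the model's reply
```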
OpenAI's goal is to offer an American open AI platform as an alternative to the growing influence of Chinese labs such as DeepSeek, Alibaba (with its Qwen family), and Moonshot AI, which are actively releasing powerful open models.
On benchmarks, gpt-oss-120b scored 2622 on the Codeforces competitive-programming test and gpt-oss-20b scored 2516, surpassing DeepSeek R1 but trailing OpenAI's closed models o3 and o4-mini. On the demanding Humanity's Last Exam (HLE), the 120b scored 19% and the 20b 17.3%, better than other open models but below o3.
The new models were trained with a methodology similar to that used for OpenAI's proprietary models. They use a mixture-of-experts (MoE) architecture, activating only a subset of the parameters for each token, which improves efficiency. Additional reinforcement learning taught the models to build chains of logical reasoning and to invoke tools such as web search or Python code execution.
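The MoE idea is straightforward to sketch: a small router scores a set of expert feed-forward networks for each token, and only the top-scoring experts actually run. The code below is an illustrative top-2 routing layer, not OpenAI's implementation; all module names and dimensions are invented for the example.

```python
# Illustrative top-k mixture-of-experts layer (not OpenAI's code).
# Each token is routed to its top-2 experts, so only a fraction of the
# layer's parameters participate in any single forward pass.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    def __init__(self, d_model=512, d_ff=2048, n_experts=8, k=2):
        super().__init__()
        self.k = k
        self.router = nn.Linear(d_model, n_experts)  # per-token expert scores
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(),
                          nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x):                      # x: (tokens, d_model)
        scores = self.router(x)                # (tokens, n_experts)
        weights, idx = scores.topk(self.k, dim=-1)
        weights = F.softmax(weights, dim=-1)   # normalize over chosen experts
        out = torch.zeros_like(x)
        for slot in range(self.k):             # run only the selected experts
            for e in idx[:, slot].unique().tolist():
                mask = idx[:, slot] == e
                out[mask] += weights[mask, slot, None] * self.experts[e](x[mask])
        return out

layer = TopKMoE()
tokens = torch.randn(4, 512)                   # 4 tokens of width 512
print(layer(tokens).shape)                     # torch.Size([4, 512])
```

With 8 experts and top-2 routing, only a quarter of the expert parameters are touched per token, which is the efficiency gain the MoE design is after.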
The models are text-only; they do not generate images or audio. They are released under the Apache 2.0 license, which permits commercial use without OpenAI's permission, although the training data remains private due to the risk of copyright lawsuits.
The launch of gpt-oss is designed both to strengthen OpenAI's position in the developer community and to respond to political pressure from the United States, which wants open American models to play a larger role in global competition.