OpenAI has announced the launch of GPT-OSS, a new family of open-weight language models designed to deliver advanced reasoning and instruction-following capabilities. Released under the flexible Apache 2.0 license, GPT-OSS models are optimized for cost-effective, real-world performance and accessible deployment across a range of hardware platforms.
The new models, gpt-oss-120b and gpt-oss-20b, outperform similarly sized open models on reasoning tasks, demonstrate strong tool-use capabilities, and are engineered for efficient operation on consumer-grade hardware.
The gpt-oss-120b model, with 117 billion parameters, achieves near-parity with OpenAI’s proprietary o4-mini model on core reasoning benchmarks and runs efficiently on a single 80 GB GPU. Meanwhile, the smaller gpt-oss-20b model, containing 21 billion parameters, offers performance comparable to o3-mini and can operate on devices with as little as 16 GB of memory, making it suitable for laptops, smartphones, or edge computing applications.
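A back-of-the-envelope calculation shows why these memory figures are plausible. The sketch below assumes roughly 4 bits per weight (OpenAI has described the released weights as quantized; treat the exact bit width as an assumption) and counts only the weights, not activations or KV cache:

```python
def approx_weight_memory_gb(num_params: float, bits_per_param: float) -> float:
    """Rough memory footprint of model weights alone, in GB
    (ignores activations, KV cache, and runtime overhead)."""
    return num_params * bits_per_param / 8 / 1e9

# gpt-oss-20b: ~21B parameters at an assumed ~4 bits per weight
print(approx_weight_memory_gb(21e9, 4))    # -> 10.5 GB, under the 16 GB budget
# gpt-oss-120b: ~117B parameters under the same assumption
print(approx_weight_memory_gb(117e9, 4))   # -> 58.5 GB, fits a single 80 GB GPU
```

Under that assumption, both models leave headroom within their stated budgets for activations and the KV cache at long context lengths.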
Both models support long-context processing up to 128,000 tokens, chain-of-thought reasoning, tool use, and few-shot function calling. They show strong results across a broad range of evaluation suites, including Tau-Bench, HealthBench, AIME, and MMLU. Notably, gpt-oss-120b outperforms o4-mini on health-related and competition mathematics benchmarks, while gpt-oss-20b exceeds o3-mini in mathematical and medical reasoning.
OpenAI stated that safety was a primary focus throughout model development. The company collaborated with the open-source community and incorporated feedback to ensure responsible release. However, as with all open-weight models, downstream users are advised to implement additional safeguards aligned with their specific use cases and risk profiles.
According to OpenAI: “We’re releasing gpt-oss-120b and gpt-oss-20b, two state-of-the-art open-weight language models that deliver strong real-world performance at low cost. Available under the flexible Apache 2.0 license, these models outperform similarly sized open models on reasoning tasks, demonstrate strong tool use capabilities, and are optimized for efficient deployment on consumer hardware.”
OpenAI CEO Sam Altman shared his enthusiasm for the release: “gpt-oss is out! We made an open model that performs at the level of o4-mini. Super proud of the team; big triumph of technology.”
The GPT-OSS models are broadly available through major AI and cloud platforms, including Hugging Face, AWS, Azure AI Foundry, Databricks Mosaic AI, and others. Users can download the weights, fine-tune the models on domain-specific data, and run them locally or offline without needing cloud infrastructure.
This release marks OpenAI’s first open-weight model release since GPT-2 in 2019 and signals a significant step toward increasing transparency and accessibility in AI development. While not fully open source (the weights are available, but not the complete training datasets), GPT-OSS bridges the gap between closed proprietary models and community-driven AI frameworks.