gpt-oss-20b is an open-weight 21B parameter model released by OpenAI under the Apache 2.0 license. It uses a Mixture-of-Experts (MoE) architecture with 3.6B active parameters per forward pass, optimized for lower-latency inference. The model is trained in OpenAI’s Harmony response format and supports reasoning level configuration, fine-tuning, and agentic capabilities including function calling, tool use, and structured outputs.
Welcome to the gpt-oss series, OpenAI’s open-weight models designed for powerful reasoning, agentic tasks, and versatile developer use cases.
We’re releasing two flavors of these open models:
- gpt-oss-120b: for production, general-purpose, high-reasoning use cases that fit on a single H100 GPU (117B parameters with 5.1B active parameters)
- gpt-oss-20b: for lower-latency, local, or specialized use cases (21B parameters with 3.6B active parameters)

Both models were trained on our harmony response format and should only be used with the harmony format, as they will not work correctly otherwise.
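As a quick way to get started, here is a minimal inference sketch using the Hugging Face transformers pipeline, whose chat template applies the harmony response format for you; the model ID and generation settings shown are illustrative, not prescribed defaults.

```python
# Minimal inference sketch, assuming the Hugging Face transformers library;
# the chat template applies the harmony response format automatically.
from transformers import pipeline

model_id = "openai/gpt-oss-20b"  # assumed Hugging Face repo ID

pipe = pipeline(
    "text-generation",
    model=model_id,
    torch_dtype="auto",   # pick a suitable precision automatically
    device_map="auto",    # place weights on available GPU(s)
)

messages = [
    {"role": "user", "content": "Explain what a Mixture-of-Experts model is in two sentences."},
]

outputs = pipe(messages, max_new_tokens=256)
print(outputs[0]["generated_text"][-1])  # last message of the generated conversation
```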
> [!NOTE]
> This model card is dedicated to the smaller gpt-oss-20b model. Check out gpt-oss-120b for the larger model.
The gpt-oss-120b model runs on a single H100 GPU, while the gpt-oss-20b model runs within 16GB of memory.

You can adjust the reasoning level to suit your task across three levels:

- Low: fast responses for general dialogue
- Medium: balanced speed and detail
- High: deep and detailed analysis
The reasoning level can be set in the system prompts, e.g., "Reasoning: high".
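For example, a sketch that sets the reasoning level through the system message (reusing the `pipe` pipeline from the earlier example) could look like this:

```python
# Sketch: selecting the reasoning level via the system prompt.
# Assumes the `pipe` text-generation pipeline defined above.
messages = [
    {"role": "system", "content": "Reasoning: high"},   # low | medium | high
    {"role": "user", "content": "Prove that the square root of 2 is irrational."},
]

outputs = pipe(messages, max_new_tokens=512)
print(outputs[0]["generated_text"][-1])
```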
The gpt-oss models are excellent for:

- Function calling with defined schemas
- Tool use and agentic operations
- Structured outputs

A function-calling sketch follows this list.
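The sketch below passes a hypothetical `get_weather` tool through the tokenizer's chat template; it assumes transformers' tool-schema support applies to this model's template, and the function, arguments, and generation settings are placeholders.

```python
# Function-calling sketch; the tool below is a hypothetical placeholder.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "openai/gpt-oss-20b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

def get_weather(city: str) -> str:
    """Get the current weather for a city.

    Args:
        city: Name of the city to look up.
    """
    return "sunny"

messages = [{"role": "user", "content": "What's the weather in Paris right now?"}]

# The chat template turns the Python signature and docstring into a tool schema
# in the prompt, so the model can emit a structured tool call.
inputs = tokenizer.apply_chat_template(
    messages,
    tools=[get_weather],
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

generated = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(generated[0][inputs.shape[-1]:]))
```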
Both gpt-oss models can be fine-tuned for a variety of specialized use cases.
The larger gpt-oss-120b model can be fine-tuned on a single H100 node, whereas the smaller gpt-oss-20b can even be fine-tuned on consumer hardware.
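For parameter-efficient fine-tuning, a rough sketch using LoRA adapters via the peft library could look like the following; the rank, alpha, and target-module choices are illustrative assumptions, not recommended settings.

```python
# Parameter-efficient fine-tuning sketch using LoRA adapters (peft).
# Hyperparameters below are illustrative assumptions, not tuned values.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_id = "openai/gpt-oss-20b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

lora_config = LoraConfig(
    r=16,                         # adapter rank
    lora_alpha=32,                # scaling factor
    target_modules="all-linear",  # attach adapters to every linear layer
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable

# Train with your preferred trainer, then save just the adapter:
# model.save_pretrained("gpt-oss-20b-lora-adapter")
```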