
zai-org/GLM-5

Pricing: $0.80 / 1M input tokens · $2.56 / 1M output tokens · $0.16 / 1M cached tokens

GLM-5 is an advanced, open-source large language model designed for developers tackling the toughest challenges. It excels at long-context reasoning, multi-step tool orchestration, and complex systems engineering, making it the ideal choice for powering sophisticated agents and applications that require high-level cognitive tasks.

Served precision: fp8 · Context length: 202,752 tokens · Supports JSON output and function calling
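
The JSON and function-calling support listed above suggests the endpoint follows the common OpenAI-compatible chat-completions convention. Below is a minimal sketch under that assumption; the base URL, API key, and tool definition are placeholders for illustration, not values documented on this page.

```python
# Minimal sketch of calling GLM-5 through an OpenAI-compatible endpoint.
# Assumptions: the provider exposes /chat/completions, accepts the model id
# "zai-org/GLM-5", and supports OpenAI-style tool definitions. The base URL,
# API key, and tool below are placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.example.com/v1",  # placeholder endpoint
    api_key="YOUR_API_KEY",
)

tools = [{
    "type": "function",
    "function": {
        "name": "search_repo",  # illustrative tool, not part of this model card
        "description": "Search a code repository for a symbol.",
        "parameters": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"],
        },
    },
}]

response = client.chat.completions.create(
    model="zai-org/GLM-5",
    messages=[{"role": "user", "content": "Where is the retry logic defined?"}],
    tools=tools,
)
print(response.choices[0].message)
```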

Model Information

GLM-5

Introduction

We are launching GLM-5, targeting complex systems engineering and long-horizon agentic tasks. Scaling remains one of the most important ways to improve intelligence and efficiency on the path toward Artificial General Intelligence (AGI). Compared to GLM-4.5, GLM-5 scales from 355B parameters (32B active) to 744B parameters (40B active) and increases the pre-training data from 23T to 28.5T tokens. GLM-5 also integrates DeepSeek Sparse Attention (DSA), substantially reducing deployment cost while preserving long-context capability.
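
DSA's exact formulation is not spelled out here, so as a rough illustration of why sparse attention lowers long-context cost, the sketch below implements generic per-query top-k attention: each query attends to only its k highest-scoring keys rather than the full sequence. This is a toy stand-in for the family of techniques DSA belongs to, not DSA itself, and for clarity it still computes the full score matrix (a real kernel would make the selection itself cheap).

```python
# Generic top-k sparse attention, for illustration only (not DSA's algorithm).
# Each query keeps its top_k highest-scoring keys, so the softmax and the value
# mixing effectively involve k positions instead of the whole sequence.
import numpy as np

def topk_sparse_attention(q, k, v, top_k=64):
    """q: (Tq, d); k, v: (Tk, d). Returns (Tq, d)."""
    scores = q @ k.T / np.sqrt(q.shape[-1])                # (Tq, Tk) full scores, for clarity
    kth = np.partition(scores, -top_k, axis=-1)[:, -top_k][:, None]  # top_k-th largest per query
    masked = np.where(scores >= kth, scores, -np.inf)      # drop everything below that threshold
    weights = np.exp(masked - masked.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v                                     # (Tq, d)

rng = np.random.default_rng(0)
q = rng.standard_normal((8, 16))
k = rng.standard_normal((1024, 16))
v = rng.standard_normal((1024, 16))
print(topk_sparse_attention(q, k, v, top_k=64).shape)      # (8, 16)
```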

Reinforcement learning aims to bridge the gap between competence and excellence in pre-trained models, but deploying it at scale for LLMs remains challenging because RL training is inefficient. To address this, we developed slime, an asynchronous RL infrastructure that substantially improves training throughput and efficiency, enabling more fine-grained post-training iterations. With advances in both pre-training and post-training, GLM-5 delivers significant improvements over GLM-4.7 across a wide range of academic benchmarks and achieves best-in-class performance among open-source models on reasoning, coding, and agentic tasks, closing the gap with frontier models.
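
slime's interface is not documented in this card, so the sketch below is only a schematic of what "asynchronous" means in this setting: rollout workers keep generating trajectories with a possibly stale policy while the trainer consumes them from a queue, so neither side waits on the other. All names are illustrative, not slime's API.

```python
# Schematic of an asynchronous RL loop: rollout workers produce trajectories
# continuously while the trainer pulls batches from a shared queue and applies
# policy updates. Illustrative only; not slime's actual interface.
import queue
import random
import threading
import time

trajectory_queue = queue.Queue(maxsize=128)
stop = threading.Event()

def rollout_worker(worker_id: int):
    """Continuously sample trajectories with the current (possibly stale) policy."""
    while not stop.is_set():
        trajectory = {"worker": worker_id, "reward": random.random()}  # stand-in rollout
        trajectory_queue.put(trajectory)
        time.sleep(0.01)  # stands in for generation latency

def trainer(num_updates: int, batch_size: int = 8):
    """Pull whatever trajectories are ready and apply a policy update."""
    for step in range(num_updates):
        batch = [trajectory_queue.get() for _ in range(batch_size)]
        mean_reward = sum(t["reward"] for t in batch) / batch_size
        # A real trainer would compute advantages and push fresh weights to workers.
        print(f"update {step}: mean reward {mean_reward:.3f}")

workers = [threading.Thread(target=rollout_worker, args=(i,), daemon=True) for i in range(4)]
for w in workers:
    w.start()
trainer(num_updates=5)
stop.set()
```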

Benchmark

| Benchmark | GLM-5 | GLM-4.7 | DeepSeek-V3.2 | Kimi K2.5 | Claude Opus 4.5 | Gemini 3 Pro | GPT-5.2 (xhigh) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| HLE | 30.5 | 24.8 | 25.1 | 31.5 | 28.4 | 37.2 | 35.4 |
| HLE (w/ Tools) | 50.4 | 42.8 | 40.8 | 51.8 | 43.4* | 45.8* | 45.5* |
| AIME 2026 I | 92.7 | 92.9 | 92.7 | 92.5 | 93.3 | 90.6 | - |
| HMMT Nov. 2025 | 96.9 | 93.5 | 90.2 | 91.1 | 91.7 | 93.0 | 97.1 |
| IMOAnswerBench | 82.5 | 82.0 | 78.3 | 81.8 | 78.5 | 83.3 | 86.3 |
| GPQA-Diamond | 86.0 | 85.7 | 82.4 | 87.6 | 87.0 | 91.9 | 92.4 |
| SWE-bench Verified | 77.8 | 73.8 | 73.1 | 76.8 | 80.9 | 76.2 | 80.0 |
| SWE-bench Multilingual | 73.3 | 66.7 | 70.2 | 73.0 | 77.5 | 65.0 | 72.0 |
| Terminal-Bench 2.0 (Terminus 2) | 56.2 / 60.7 † | 41.0 | 39.3 | 50.8 | 59.3 | 54.2 | 54.0 |
| Terminal-Bench 2.0 (Claude Code) | 56.2 / 61.1 † | 32.8 | 46.4 | - | 57.9 | - | - |
| CyberGym | 43.2 | 23.5 | 17.3 | 41.3 | 50.6 | 39.9 | - |
| BrowseComp | 62.0 | 52.0 | 51.4 | 60.6 | 37.0 | 37.8 | - |
| BrowseComp (w/ Context Manage) | 75.9 | 67.5 | 67.6 | 74.9 | 67.8 | 59.2 | 65.8 |
| BrowseComp-Zh | 72.7 | 66.6 | 65.0 | 62.3 | 62.4 | 66.8 | 76.1 |
| τ²-Bench | 89.7 | 87.4 | 85.3 | 80.2 | 91.6 | 90.7 | 85.5 |
| MCP-Atlas (Public Set) | 67.8 | 52.0 | 62.2 | 63.8 | 65.2 | 66.6 | 68.0 |
| Tool-Decathlon | 38.0 | 23.8 | 35.2 | 27.8 | 43.5 | 36.4 | 46.3 |
| Vending Bench 2 | $4,432.12 | $2,376.82 | $1,034.00 | $1,198.46 | $4,967.06 | $5,478.16 | $3,591.33 |

*: Score is on the full HLE set (unmarked HLE scores use the text-only subset).

†: The second value is on a verified version of Terminal-Bench 2.0 that fixes some ambiguous instructions; see the footnotes below for evaluation details.

Footnote

  • Humanity’s Last Exam (HLE) & other reasoning tasks: We evaluate with a maximum generation length of 131,072 tokens (temperature=1.0, top_p=0.95, max_new_tokens=131072); these decoding settings are illustrated in the sketch after this list. By default, we report the text-only subset; results marked with * are from the full set. We use GPT-5.2 (medium) as the judge model. For HLE-with-tools, we use a maximum context length of 202,752 tokens.
  • SWE-bench & SWE-bench Multilingual: We run the SWE-bench suite with OpenHands using a tailored instruction prompt. Settings: temperature=0.7, top_p=0.95, max_new_tokens=16384, with a 200K context window.
  • BrowseComp: Without context management, we retain details from only the most recent 5 turns. With context management, we use the same discard-all strategy as DeepSeek-V3.2 and Kimi K2.5.
  • Terminal-Bench 2.0 (Terminus 2): We evaluate with the Terminus framework using timeout=2h, temperature=0.7, top_p=1.0, max_new_tokens=8192, with a 128K context window. Resource limits are capped at 16 CPUs and 32 GB RAM.
  • Terminal-Bench 2.0 (Claude Code): We evaluate in Claude Code 2.1.14 (think mode, default effort) with temperature=1.0, top_p=0.95, max_new_tokens=65536. We remove wall-clock time limits due to generation speed, while preserving per-task CPU and memory constraints. Scores are averaged over 5 runs. We fix environment issues introduced by Claude Code and also report results on a verified Terminal-Bench 2.0 dataset that resolves ambiguous instructions (see: https://huggingface.co/datasets/zai-org/terminal-bench-2-verified).
  • CyberGym: We evaluate in Claude Code 2.1.18 (think mode, no web tools) with temperature=1.0, top_p=1.0, max_new_tokens=32000, and a 250-minute timeout per task. Results are single-run Pass@1 over 1,507 tasks.
  • MCP-Atlas: All models are evaluated in think mode on the 500-task public subset with a 10-minute timeout per task. We use Gemini 3 Pro as the judge model.
  • τ²-bench: We add a small prompt adjustment in Retail and Telecom to avoid failures caused by premature user termination. For Airline, we apply the domain fixes proposed in the Claude Opus 4.5 system card.
  • Vending Bench 2: Runs are conducted independently by Andon Labs.
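
For readers who want to mirror the reasoning-benchmark decoding configuration, the settings in the HLE footnote map directly onto standard sampling parameters. Below is a minimal sketch assuming the released weights run under vLLM; this is an illustration, not the evaluation harness that produced the reported scores.

```python
# Minimal sketch of the HLE decoding settings from the footnotes, expressed as
# vLLM sampling parameters. Assumes the open weights load in vLLM and that the
# 202,752-token context window is configurable; illustrative only.
from vllm import LLM, SamplingParams

sampling = SamplingParams(
    temperature=1.0,     # reasoning-benchmark setting from the footnotes
    top_p=0.95,
    max_tokens=131072,   # maximum generation length used for HLE
)
llm = LLM(model="zai-org/GLM-5", max_model_len=202752)  # HLE-with-tools context limit
outputs = llm.generate(["Prove that there are infinitely many primes."], sampling)
print(outputs[0].outputs[0].text)
```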