


zai-org/GLM-4.7-Flash

$0.06 / 1M input tokens · $0.40 / 1M output tokens
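
At these rates, a request that consumes 10,000 input tokens and produces 2,000 output tokens costs roughly 0.01 × $0.06 + 0.002 × $0.40 ≈ $0.0014.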


Public · bfloat16 · 202,752-token context · JSON mode · Function calling

Model Information

GLM-4.7-Flash

👋 Join our Discord community.
📖 Check out the GLM-4.7 technical blog and the GLM-4.5 technical report.
📍 Use the GLM-4.7-Flash API services on the Z.ai API Platform.
👉 One click to try GLM-4.7.
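
As a rough illustration of API usage, the sketch below calls the model through an OpenAI-compatible client. The base URL and API-key environment variable are placeholders, not endpoint details confirmed on this page; substitute the values documented by your provider (for example, the Z.ai API Platform linked above).

```python
# Minimal sketch of calling GLM-4.7-Flash through an OpenAI-compatible endpoint.
# The base_url and API-key environment variable below are placeholders; replace
# them with the values documented by your provider.
import os

from openai import OpenAI

client = OpenAI(
    base_url="https://example-provider.com/v1",  # placeholder endpoint
    api_key=os.environ["PROVIDER_API_KEY"],      # placeholder credential
)

response = client.chat.completions.create(
    model="zai-org/GLM-4.7-Flash",               # model id as listed on this page
    messages=[{"role": "user", "content": "Give a one-sentence summary of MoE models."}],
)
print(response.choices[0].message.content)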

Introduction

GLM-4.7-Flash is a 30B-A3B MoE model. As the strongest model in the 30B class, GLM-4.7-Flash offers a new option for lightweight deployment that balances performance and efficiency.

Performance on Benchmarks

| Benchmark | GLM-4.7-Flash | Qwen3-30B-A3B-Thinking-2507 | GPT-OSS-20B |
|---|---|---|---|
| AIME 25 | 91.6 | 85.0 | 91.7 |
| GPQA | 75.2 | 73.4 | 71.5 |
| LCB v6 | 64.0 | 66.0 | 61.0 |
| HLE | 14.4 | 9.8 | 10.9 |
| SWE-bench Verified | 59.2 | 22.0 | 34.0 |
| τ²-Bench | 79.5 | 49.0 | 47.7 |
| BrowseComp | 42.8 | 2.29 | 28.3 |

Evaluation Parameters

Default Settings (Most Tasks)

  • temperature: 1.0
  • top-p: 0.95
  • max new tokens: 131072

For multi-turn agentic tasks (τ²-Bench and Terminal Bench 2), please turn on Preserved Thinking mode.
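
As a hedged sketch of how these settings map onto a request, the snippet below reuses the OpenAI-compatible client from the earlier example and passes the default sampling parameters. Only temperature, top-p, and the max-new-tokens budget come from this card; the prompt is illustrative, and the Preserved Thinking toggle is provider-specific and therefore omitted.

```python
# Sketch: applying the default evaluation settings above to a chat completion.
# Only temperature, top_p, and max_tokens come from this model card; the
# endpoint, client, and prompt are the same placeholders as in the earlier example.
response = client.chat.completions.create(
    model="zai-org/GLM-4.7-Flash",
    messages=[{"role": "user", "content": "Compute 17 * 24 and explain briefly."}],
    temperature=1.0,     # default for most tasks
    top_p=0.95,
    max_tokens=131072,   # "max new tokens" budget from the settings above
)
# Note: enabling Preserved Thinking for multi-turn agentic runs (τ²-Bench,
# Terminal Bench 2) is a provider-specific option and is not shown here.
```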

Terminal Bench, SWE-bench Verified

  • temperature: 0.7
  • top-p: 1.0
  • max new tokens: 16384

τ²-Bench

  • temperature: 0
  • max new tokens: 16384

For τ²-Bench evaluation, we added an additional prompt to the Retail and Telecom user interactions to avoid failure modes caused by users ending the interaction incorrectly. For the Airline domain, we applied the domain fixes proposed in the Claude Opus 4.5 release report.