


Qwen/Qwen3-235B-A22B-Instruct-2507

Pricing: $0.09 in / $0.57 out
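At these rates, per-request cost is simple arithmetic. A minimal sketch, assuming the listed prices are USD per million tokens (the usual convention for "in"/"out" API pricing; the page itself does not state the unit):

```python
# Assumption: listed rates are USD per 1M tokens (not stated on the page).
PRICE_IN_PER_M = 0.09   # "in" rate: input/prompt tokens
PRICE_OUT_PER_M = 0.57  # "out" rate: output/completion tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of one request at the listed rates."""
    return (input_tokens * PRICE_IN_PER_M
            + output_tokens * PRICE_OUT_PER_M) / 1_000_000

# e.g. a 10k-token prompt with a 2k-token reply costs about $0.002:
cost = estimate_cost(10_000, 2_000)
```

Output tokens are billed at roughly 6× the input rate, so long completions dominate the bill.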

Qwen3-235B-A22B-Instruct-2507 is the updated version of the Qwen3-235B-A22B non-thinking mode, featuring significant improvements in general capabilities, including instruction following, logical reasoning, text comprehension, mathematics, science, coding and tool usage.

Availability: Public · Quantization: fp8 · Context length: 262,144 · Supports: JSON output, function calling · Provider: Qwen

Model Information

Highlights

We introduce the updated version of the Qwen3-235B-A22B non-thinking mode, named Qwen3-235B-A22B-Instruct-2507, featuring the following key enhancements:

  • Significant improvements in general capabilities, including instruction following, logical reasoning, text comprehension, mathematics, science, coding and tool usage.
  • Substantial gains in long-tail knowledge coverage across multiple languages.
  • Markedly better alignment with user preferences in subjective and open-ended tasks, enabling more helpful responses and higher-quality text generation.
  • Enhanced capabilities in 256K long-context understanding.


Model Overview

Qwen3-235B-A22B-Instruct-2507 has the following features:

  • Type: Causal Language Models
  • Training Stage: Pretraining & Post-training
  • Number of Parameters: 235B in total and 22B activated
  • Number of Parameters (Non-Embedding): 234B
  • Number of Layers: 94
  • Number of Attention Heads (GQA): 64 for Q and 4 for KV
  • Number of Experts: 128
  • Number of Activated Experts: 8
  • Context Length: 262,144 natively.
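The MoE figures above can be sanity-checked with back-of-the-envelope arithmetic: 8 of 128 experts fire per token, yet 22B of 235B parameters are activated, because attention and other shared weights run for every token regardless of routing. A small sketch:

```python
# Figures taken directly from the model overview above.
TOTAL_PARAMS_B = 235   # total parameters, billions
ACTIVE_PARAMS_B = 22   # activated per token, billions
NUM_EXPERTS = 128
ACTIVE_EXPERTS = 8

# Fraction of experts routed to per token: 8/128 = 0.0625 (6.25%)
expert_activation_ratio = ACTIVE_EXPERTS / NUM_EXPERTS

# Fraction of all parameters that run per token: 22/235 ≈ 9.4%
param_activation_ratio = ACTIVE_PARAMS_B / TOTAL_PARAMS_B
```

The parameter ratio (~9.4%) exceeds the expert ratio (6.25%) precisely because the non-expert (attention, embedding, shared) weights are always active.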

NOTE: This model supports only non-thinking mode and does not generate `<think></think>` blocks in its output. Meanwhile, specifying `enable_thinking=False` is no longer required.
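In practice this means a plain chat request needs no thinking-related flags at all. A minimal sketch of a request body, assuming an OpenAI-compatible chat-completions endpoint (the endpoint URL and HTTP client are omitted; field names follow the standard OpenAI chat schema):

```python
import json

# Plain chat request for this model -- note there is no enable_thinking
# field: the model is non-thinking only, and the flag is no longer needed.
payload = {
    "model": "Qwen/Qwen3-235B-A22B-Instruct-2507",
    "messages": [
        {"role": "user", "content": "Summarize MoE routing in two sentences."},
    ],
    "max_tokens": 256,
}

body = json.dumps(payload)  # what would be POSTed to the endpoint
```

The serialized body is what a client would send; the response arrives as ordinary assistant text with no `<think>` block to strip.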

For more details, including benchmark evaluation, hardware requirements, and inference performance, please refer to our blog, GitHub, and Documentation.

Performance

| Benchmark | Deepseek-V3-0324 | GPT-4o-0327 | Claude Opus 4 Non-thinking | Kimi K2 | Qwen3-235B-A22B Non-thinking | Qwen3-235B-A22B-Instruct-2507 |
|---|---|---|---|---|---|---|
| **Knowledge** | | | | | | |
| MMLU-Pro | 81.2 | 79.8 | 86.6 | 81.1 | 75.2 | 83.0 |
| MMLU-Redux | 90.4 | 91.3 | 94.2 | 92.7 | 89.2 | 93.1 |
| GPQA | 68.4 | 66.9 | 74.9 | 75.1 | 62.9 | 77.5 |
| SuperGPQA | 57.3 | 51.0 | 56.5 | 57.2 | 48.2 | 62.6 |
| SimpleQA | 27.2 | 40.3 | 22.8 | 31.0 | 12.2 | 54.3 |
| CSimpleQA | 71.1 | 60.2 | 68.0 | 74.5 | 60.8 | 84.3 |
| **Reasoning** | | | | | | |
| AIME25 | 46.6 | 26.7 | 33.9 | 49.5 | 24.7 | 70.3 |
| HMMT25 | 27.5 | 7.9 | 15.9 | 38.8 | 10.0 | 55.4 |
| ARC-AGI | 9.0 | 8.8 | 30.3 | 13.3 | 4.3 | 41.8 |
| ZebraLogic | 83.4 | 52.6 | - | 89.0 | 37.7 | 95.0 |
| LiveBench 20241125 | 66.9 | 63.7 | 74.6 | 76.4 | 62.5 | 75.4 |
| **Coding** | | | | | | |
| LiveCodeBench v6 (25.02-25.05) | 45.2 | 35.8 | 44.6 | 48.9 | 32.9 | 51.8 |
| MultiPL-E | 82.2 | 82.7 | 88.5 | 85.7 | 79.3 | 87.9 |
| Aider-Polyglot | 55.1 | 45.3 | 70.7 | 59.0 | 59.6 | 57.3 |
| **Alignment** | | | | | | |
| IFEval | 82.3 | 83.9 | 87.4 | 89.8 | 83.2 | 88.7 |
| Arena-Hard v2* | 45.6 | 61.9 | 51.5 | 66.1 | 52.0 | 79.2 |
| Creative Writing v3 | 81.6 | 84.9 | 83.8 | 88.1 | 80.4 | 87.5 |
| WritingBench | 74.5 | 75.5 | 79.2 | 86.2 | 77.0 | 85.2 |
| **Agent** | | | | | | |
| BFCL-v3 | 64.7 | 66.5 | 60.1 | 65.2 | 68.0 | 70.9 |
| TAU-Retail | 49.6 | 60.3# | 81.4 | 70.7 | 65.2 | 71.3 |
| TAU-Airline | 32.0 | 42.8# | 59.6 | 53.5 | 32.0 | 44.0 |
| **Multilingualism** | | | | | | |
| MultiIF | 66.5 | 70.4 | - | 76.2 | 70.2 | 77.5 |
| MMLU-ProX | 75.8 | 76.2 | - | 74.5 | 73.2 | 79.4 |
| INCLUDE | 80.1 | 82.1 | - | 76.9 | 75.6 | 79.5 |
| PolyMATH | 32.2 | 25.5 | 30.0 | 44.8 | 27.0 | 50.2 |

*: For reproducibility, we report the win rates evaluated by GPT-4.1.

#: Results were generated using GPT-4o-20241120, as access to the native function calling API of GPT-4o-0327 was unavailable.