📰 Tech Blog
1. Model Introduction
Kimi K2.5 is an open-source, native multimodal agentic model built through continual pretraining on approximately 15 trillion mixed visual and text tokens atop Kimi-K2-Base. It seamlessly integrates vision and language understanding with advanced agentic capabilities, and supports both instant and thinking modes as well as conversational and agentic paradigms.
Key Features
- Native Multimodality: Pre-trained on vision–language tokens, K2.5 excels in visual knowledge, cross-modal reasoning, and agentic tool use grounded in visual inputs.
- Coding with Vision: K2.5 generates code from visual specifications (UI designs, video workflows) and autonomously orchestrates tools for visual data processing.
- Agent Swarm: K2.5 transitions from single-agent scaling to a self-directed, coordinated swarm-like execution scheme. It decomposes complex tasks into parallel sub-tasks executed by dynamically instantiated, domain-specific agents.
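To make the Agent Swarm idea concrete, here is a minimal, illustrative sketch of how a main agent could fan a task out to dynamically instantiated, domain-specific sub-agents and run them in parallel. The `SubTask`, `run_sub_agent`, and `solve_with_swarm` names are hypothetical and are not part of any published Kimi K2.5 API.

```python
# Illustrative sketch only: the names below are hypothetical and are not part
# of any published Kimi K2.5 API.
import asyncio
from dataclasses import dataclass


@dataclass
class SubTask:
    domain: str        # e.g. "web-search", "code", "data-analysis"
    instruction: str   # self-contained description of the sub-goal


async def run_sub_agent(task: SubTask) -> str:
    """Stand-in for a dynamically instantiated, domain-specific agent."""
    # A real sub-agent would call the model with a domain-specific system
    # prompt and tool set, then return its result to the main agent.
    await asyncio.sleep(0)  # placeholder for model / tool calls
    return f"[{task.domain}] result for: {task.instruction}"


async def solve_with_swarm(plan: list[SubTask]) -> str:
    """Run the decomposed sub-tasks in parallel and collect their results."""
    results = await asyncio.gather(*(run_sub_agent(t) for t in plan))
    # The main agent would normally synthesize these results into one answer.
    return "\n".join(results)


if __name__ == "__main__":
    plan = [
        SubTask("web-search", "collect background facts"),
        SubTask("code", "verify the collected numbers with a quick script"),
    ]
    print(asyncio.run(solve_with_swarm(plan)))
```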
2. Model Summary
| | |
|---|---|
| Architecture | Mixture-of-Experts (MoE) |
| Total Parameters | 1T |
| Activated Parameters | 32B |
| Number of Layers (Dense layer included) | 61 |
| Number of Dense Layers | 1 |
| Attention Hidden Dimension | 7168 |
| MoE Hidden Dimension (per Expert) | 2048 |
| Number of Attention Heads | 64 |
| Number of Experts | 384 |
| Selected Experts per Token | 8 |
| Number of Shared Experts | 1 |
| Vocabulary Size | 160K |
| Context Length | 256K |
| Attention Mechanism | MLA |
| Activation Function | SwiGLU |
| Vision Encoder | MoonViT |
| Parameters of Vision Encoder | 400M |
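The expert counts above (384 routed experts, 8 selected per token, plus 1 shared expert, with SwiGLU activations) describe a standard top-k MoE layer. The toy sketch below shows how such routing works in general; it is not Kimi K2.5's actual implementation, and it uses tiny dimensions so it runs anywhere.

```python
# Toy top-k MoE layer with one shared expert and SwiGLU experts, mirroring the
# structure implied by the table above (not Kimi K2.5's actual implementation).
# Tiny default dimensions are used so the sketch runs anywhere; the real model
# uses d_model=7168, d_expert=2048, 384 routed experts, and 8 selected per token.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SwiGLU(nn.Module):
    """Gated MLP: down( silu(gate(x)) * up(x) )."""
    def __init__(self, d_model: int, d_hidden: int):
        super().__init__()
        self.gate = nn.Linear(d_model, d_hidden, bias=False)
        self.up = nn.Linear(d_model, d_hidden, bias=False)
        self.down = nn.Linear(d_hidden, d_model, bias=False)

    def forward(self, x):
        return self.down(F.silu(self.gate(x)) * self.up(x))


class TopKMoE(nn.Module):
    def __init__(self, d_model=64, d_expert=32, n_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(d_model, n_experts, bias=False)
        self.experts = nn.ModuleList([SwiGLU(d_model, d_expert) for _ in range(n_experts)])
        self.shared = SwiGLU(d_model, d_expert)  # the shared expert sees every token

    def forward(self, x):                              # x: (num_tokens, d_model)
        probs = self.router(x).softmax(dim=-1)         # routing distribution per token
        weights, idx = probs.topk(self.top_k, dim=-1)  # keep the top-k experts per token
        weights = weights / weights.sum(dim=-1, keepdim=True)
        routed = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):      # dispatch tokens to selected experts
            token_ids, slot = (idx == e).nonzero(as_tuple=True)
            if token_ids.numel():
                routed[token_ids] += weights[token_ids, slot, None] * expert(x[token_ids])
        return self.shared(x) + routed


x = torch.randn(10, 64)
print(TopKMoE()(x).shape)  # torch.Size([10, 64])
```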
3. Evaluation Results
| Benchmark | Kimi K2.5 (Thinking) | GPT-5.2 (xhigh) | Claude Opus 4.5 (Extended Thinking) | Gemini 3 Pro (High Thinking Level) | DeepSeek-V3.2 (Thinking) | Qwen3-VL-235B-A22B-Thinking |
|---|---|---|---|---|---|---|
| Reasoning & Knowledge | | | | | | |
| HLE-Full | 30.1 | 34.5 | 30.8 | 37.5 | 25.1† | - |
| HLE-Full (w/ tools) | 50.2 | 45.5 | 43.2 | 45.8 | 40.8† | - |
| AIME 2025 | 96.1 | 100 | 92.8 | 95.0 | 93.1 | - |
| HMMT 2025 (Feb) | 95.4 | 99.4 | 92.9* | 97.3* | 92.5 | - |
| IMO-AnswerBench | 81.8 | 86.3 | 78.5* | 83.1* | 78.3 | - |
| GPQA-Diamond | 87.6 | 92.4 | 87.0 | 91.9 | 82.4 | - |
| MMLU-Pro | 87.1 | 86.7* | 89.3* | 90.1 | 85.0 | - |
| Image & Video | | | | | | |
| MMMU-Pro | 78.5 | 79.5* | 74.0 | 81.0 | - | 69.3 |
| CharXiv (RQ) | 77.5 | 82.1 | 67.2* | 81.4 | - | 66.1 |
| MathVision | 84.2 | 83.0 | 77.1* | 86.1* | - | 74.6 |
| MathVista (mini) | 90.1 | 82.8* | 80.2* | 89.8* | - | 85.8 |
| ZeroBench | 9 | 9* | 3* | 8* | - | 4* |
| ZeroBench (w/ tools) | 11 | 7* | 9* | 12* | - | 3* |
| OCRBench | 92.3 | 80.7* | 86.5* | 90.3* | - | 87.5 |
| OmniDocBench 1.5 | 88.8 | 85.7 | 87.7* | 88.5 | - | 82.0* |
| InfoVQA (val) | 92.6 | 84* | 76.9* | 57.2* | - | 89.5 |
| SimpleVQA | 71.2 | 55.8* | 69.7* | 69.7* | - | 56.8* |
| WorldVQA | 46.3 | 28.0 | 36.8 | 47.4 | - | 23.5 |
| VideoMMMU | 86.6 | 85.9 | 84.4* | 87.6 | - | 80.0 |
| MMVU | 80.4 | 80.8* | 77.3 | 77.5 | - | 71.1 |
| MotionBench | 70.4 | 64.8 | 60.3 | 70.3 | - | - |
| VideoMME | 87.4 | 86.0* | - | 88.4* | - | 79.0 |
| LongVideoBench | 79.8 | 76.5* | 67.2* | 77.7* | - | 65.6* |
| LVBench | 75.9 | - | - | 73.5* | - | 63.6 |
| Coding | | | | | | |
| SWE-Bench Verified | 76.8 | 80.0 | 80.9 | 76.2 | 73.1 | - |
| SWE-Bench Pro | 50.7 | 55.6 | 55.4* | - | - | - |
| SWE-Bench Multilingual | 73.0 | 72.0 | 77.5 | 65.0 | 70.2 | - |
| Terminal Bench 2.0 | 50.8 | 54.0 | 59.3 | 54.2 | 46.4 | - |
| PaperBench | 63.5 | 63.7* | 72.9* | - | 47.1 | - |
| CyberGym | 41.3 | - | 50.6 | 39.9* | 17.3* | - |
| SciCode | 48.7 | 52.1 | 49.5 | 56.1 | 38.9 | - |
| OJBench (cpp) | 57.4 | - | 54.6* | 68.5* | 54.7* | - |
| LiveCodeBench (v6) | 85.0 | - | 82.2* | 87.4* | 83.3 | - |
| Long Context | | | | | | |
| Longbench v2 | 61.0 | 54.5* | 64.4* | 68.2* | 59.8* | - |
| AA-LCR | 70.0 | 72.3* | 71.3* | 65.3* | 64.3* | - |
| Agentic Search | | | | | | |
| BrowseComp | 60.6 | 65.8 | 37.0 | 37.8 | 51.4 | - |
| BrowseComp (w/ ctx manage) | 74.9 | 57.8 | 59.2 | 67.6 | - | - |
| BrowseComp (Agent Swarm) | 78.4 | - | - | - | - | - |
| WideSearch (item-f1) | 72.7 | - | 76.2* | 57.0 | 32.5* | - |
| WideSearch (item-f1, Agent Swarm) | 79.0 | - | - | - | - | - |
| DeepSearchQA | 77.1 | 71.3* | 76.1* | 63.2* | 60.9* | - |
| FinSearchCompT2&T3 | 67.8 | - | 66.2* | 49.9 | 59.1* | - |
| Seal-0 | 57.4 | 45.0 | 47.7* | 45.5* | 49.5* | - |
Footnotes
- General Testing Details
- We report results for Kimi K2.5 and DeepSeek-V3.2 with thinking mode enabled, Claude Opus 4.5 with extended thinking mode, GPT-5.2 with xhigh reasoning effort, and Gemini 3 Pro with a high thinking level. For vision benchmarks, we additionally report results for Qwen3-VL-235B-A22B-Thinking.
- Unless otherwise specified, all Kimi K2.5 experiments were conducted with temperature = 1.0, top-p = 0.95, and a context length of 256k tokens.
- Benchmarks without publicly available scores were re-evaluated under the same conditions used for Kimi K2.5 and are marked with an asterisk (*).
- We could not evaluate GPT-5.2 (xhigh) on every benchmark due to service stability issues; benchmarks that were not tested are marked with "-".
- Text and Reasoning
- HLE, AIME 2025, HMMT 2025 (Feb), and GPQA-Diamond were evaluated with a maximum completion budget of 96k tokens.
- Results for AIME and HMMT are averaged over 32 runs (avg@32); GPQA-Diamond over 8 runs (avg@8).
- For HLE, we report scores on the full set (text & image). Kimi K2.5 scores 31.5 (text) and 21.3 (image) without tools, and 51.8 (text) and 39.8 (image) with tools. The DeepSeek-V3.2 score corresponds to its text-only subset (marked with †). Hugging Face access was blocked during evaluation to prevent potential data leakage. HLE with tools uses simple context management: once the context exceeds a threshold, only the latest round of tool messages is retained (a minimal sketch of this strategy appears after these footnotes).
- Tool-Augmented / Agentic Search
- Kimi K2.5 was equipped with search, code-interpreter, and web-browsing tools for HLE with tools and all agentic search benchmarks.
- Except for BrowseComp (where K2.5 and DeepSeek-V3.2 used the discard-all strategy), no context management was applied, and tasks exceeding the supported context length were directly counted as failed.
- The test system prompts emphasize deep and proactive tool use, instructing models to reason carefully, leverage tools, and verify uncertain information. Full prompts will be provided in the technical report.
- Results for Seal-0 and WideSearch are averaged over four runs (avg@4).
- Vision Benchmarks
- Vision benchmarks use max-tokens = 64k, with results averaged over three runs (avg@3).
- ZeroBench (w/ tools) uses max-tokens-per-step = 24k and max-steps = 30 for multi-step reasoning.
- MMMU-Pro follows the official protocol, preserving input order and prepending images.
- GPT-5.2 (xhigh) had a ~10% failure rate (no output despite 3 retries); failed runs were treated as incorrect, so its reported scores likely underestimate true performance.
- WorldVQA is a benchmark designed to evaluate atomic, vision-centric world knowledge. It is available at https://github.com/MoonshotAI/WorldVQA.
- OmniDocBench Score is computed as (1 − normalized Levenshtein distance) × 100, where a higher score denotes superior accuracy.
- Coding Tasks
- Terminal-Bench 2.0 scores were obtained with the default agent framework (Terminus-2) and the provided JSON parser. We evaluated Terminal-Bench 2.0 in non-thinking mode because our current context-management strategy for thinking mode is incompatible with Terminus-2.
- For the SWE-Bench series of evaluations (including verified, multilingual, and pro), we used an internally developed evaluation framework. This framework includes a minimal set of tools—bash tool, createfile tool, insert tool, view tool, strreplace tool, and submit tool—along with tailored system prompts designed for the tasks. The highest scores were achieved under non-thinking mode.
- The score of Claude Opus 4.5 on CyberGym is reported under the non-thinking setting.
- All reported scores of coding tasks are averaged over 5 independent runs.
- Long-Context Benchmarks
- AA-LCR: scores averaged over three runs (avg@3).
- LongBench-V2: identical prompts and input contexts standardized to ~128k tokens.
- Agent Swarm
- BrowseComp (Swarm Mode): main agent max 15 steps; sub-agents max 100 steps.
- WideSearch (Swarm Mode): main and sub-agents max 100 steps.
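For reference, here is a minimal sketch of the two context-management strategies mentioned in the footnotes above: the threshold-based "keep only the latest round of tool messages" strategy used for HLE with tools, and the discard-all strategy used for BrowseComp. The message schema and token counting below are simplified assumptions, not the actual evaluation harness.

```python
# Minimal sketch of the two context-management strategies described above.
# The message schema and token counting are simplified assumptions, not the
# actual evaluation harness.
def count_tokens(messages: list[dict]) -> int:
    # Crude stand-in for a real tokenizer: roughly 4 characters per token.
    return sum(len(m["content"]) // 4 for m in messages)


def manage_context(messages: list[dict], threshold: int,
                   strategy: str = "keep-latest-round") -> list[dict]:
    """Trim tool messages once the running context exceeds `threshold` tokens."""
    if count_tokens(messages) <= threshold:
        return messages
    if strategy == "discard-all":
        # Drop every tool message (the strategy used for BrowseComp).
        return [m for m in messages if m["role"] != "tool"]
    # "keep-latest-round": retain only the most recent contiguous run of tool
    # messages (the strategy described for HLE with tools).
    tool_idx = [i for i, m in enumerate(messages) if m["role"] == "tool"]
    if not tool_idx:
        return messages
    last = tool_idx[-1]
    start = last
    while start > 0 and messages[start - 1]["role"] == "tool":
        start -= 1
    return [m for i, m in enumerate(messages)
            if m["role"] != "tool" or start <= i <= last]


# Example: tool outputs from earlier rounds are dropped, the latest round kept.
history = [
    {"role": "user", "content": "question"},
    {"role": "tool", "content": "x" * 4000},   # old tool round
    {"role": "assistant", "content": "partial reasoning"},
    {"role": "tool", "content": "y" * 4000},   # latest tool round
]
print(len(manage_context(history, threshold=500)))  # 3 messages remain
```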
4. Native INT4 Quantization
Kimi K2.5 adopts the same native INT4 quantization method as Kimi-K2-Thinking.
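As a rough illustration of what INT4 weights involve, the sketch below shows generic symmetric per-group INT4 quantization and dequantization of a weight matrix. It is only a didactic example and is not the specific quantization-aware method used for Kimi-K2-Thinking or Kimi K2.5.

```python
# Generic sketch of symmetric per-group INT4 weight quantization and
# dequantization. This only illustrates what INT4 weights involve; it is NOT
# the specific quantization method used for Kimi-K2-Thinking / Kimi K2.5.
import torch


def quantize_int4(w: torch.Tensor, group_size: int = 32):
    """Quantize a 2-D weight matrix to signed INT4 with one scale per group."""
    rows, cols = w.shape                          # cols must be divisible by group_size
    wg = w.reshape(rows, cols // group_size, group_size)
    scale = wg.abs().amax(dim=-1, keepdim=True).clamp_min(1e-8) / 7.0  # int4 range is [-8, 7]
    q = torch.clamp(torch.round(wg / scale), -8, 7).to(torch.int8)
    return q.reshape(rows, cols), scale.squeeze(-1)


def dequantize_int4(q: torch.Tensor, scale: torch.Tensor, group_size: int = 32):
    rows, cols = q.shape
    qg = q.reshape(rows, cols // group_size, group_size).float()
    return (qg * scale.unsqueeze(-1)).reshape(rows, cols)


w = torch.randn(4, 64)
q, s = quantize_int4(w)
print((w - dequantize_int4(q, s)).abs().max())  # small reconstruction error
```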