
moonshotai/Kimi-K2.5

Pricing: $0.45 / 1M input tokens · $2.80 / 1M output tokens · $0.225 / 1M cached tokens
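For reference, the cost of a request under these rates can be estimated as in the sketch below. The token counts are hypothetical, and we assume cached prompt tokens are billed at the cached rate instead of the full input rate (a common convention, not stated on this page).

```python
# Illustrative cost estimate using the per-1M-token rates listed above.
# Token counts are hypothetical; the cached-token billing rule is an assumption.

INPUT_RATE = 0.45    # USD per 1M input tokens
OUTPUT_RATE = 2.80   # USD per 1M output tokens
CACHED_RATE = 0.225  # USD per 1M cached tokens

def estimate_cost(input_tokens: int, output_tokens: int, cached_tokens: int = 0) -> float:
    """Return the estimated USD cost of a single request."""
    billable_input = input_tokens - cached_tokens  # assume cached tokens are billed at the lower rate
    return (
        billable_input * INPUT_RATE
        + output_tokens * OUTPUT_RATE
        + cached_tokens * CACHED_RATE
    ) / 1_000_000

# e.g. a 50k-token prompt (20k of it cached) with a 4k-token completion
print(f"${estimate_cost(50_000, 4_000, cached_tokens=20_000):.4f}")  # -> $0.0292
```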


Public endpoint · 262,144-token context · JSON mode · Function calling · Multimodal
Kimi K2.5

Chat · Homepage · Hugging Face · Twitter · Discord · License · 📰 Tech Blog

1. Model Introduction

Kimi K2.5 is an open-source, native multimodal agentic model built through continual pretraining on approximately 15 trillion mixed visual and text tokens atop Kimi-K2-Base. It seamlessly integrates vision and language understanding with advanced agentic capabilities, supporting both instant and thinking modes as well as conversational and agentic paradigms.

Key Features

  • Native Multimodality: Pre-trained on vision–language tokens, K2.5 excels in visual knowledge, cross-modal reasoning, and agentic tool use grounded in visual inputs.
  • Coding with Vision: K2.5 generates code from visual specifications (UI designs, video workflows) and autonomously orchestrates tools for visual data processing.
  • Agent Swarm: K2.5 transitions from single-agent scaling to a self-directed, coordinated swarm-like execution scheme. It decomposes complex tasks into parallel sub-tasks executed by dynamically instantiated, domain-specific agents.
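A minimal sketch of this decompose-then-parallelize pattern is shown below. The `plan_subtasks` planner and `run_subagent` worker are hypothetical stand-ins (not part of any Kimi API); the sketch only illustrates the idea of a main agent fanning work out to parallel sub-agents and aggregating results.

```python
import asyncio

# Minimal sketch of a decompose-and-parallelize agent loop.
# plan_subtasks() and run_subagent() are hypothetical stand-ins for a
# model planning call and a dynamically instantiated domain agent.

async def plan_subtasks(task: str) -> list[str]:
    # In a real system this would be a model call that splits the task.
    return [f"{task} :: part {i}" for i in range(3)]

async def run_subagent(subtask: str) -> str:
    # Each sub-agent would run its own tool-use loop; here we just echo.
    await asyncio.sleep(0)  # placeholder for tool calls / model steps
    return f"result({subtask})"

async def run_swarm(task: str) -> list[str]:
    subtasks = await plan_subtasks(task)
    # Sub-agents execute in parallel; the main agent aggregates results.
    return await asyncio.gather(*(run_subagent(s) for s in subtasks))

if __name__ == "__main__":
    print(asyncio.run(run_swarm("survey recent MoE papers")))
```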

2. Model Summary

| Field | Value |
| --- | --- |
| Architecture | Mixture-of-Experts (MoE) |
| Total Parameters | 1T |
| Activated Parameters | 32B |
| Number of Layers (Dense layer included) | 61 |
| Number of Dense Layers | 1 |
| Attention Hidden Dimension | 7168 |
| MoE Hidden Dimension (per Expert) | 2048 |
| Number of Attention Heads | 64 |
| Number of Experts | 384 |
| Selected Experts per Token | 8 |
| Number of Shared Experts | 1 |
| Vocabulary Size | 160K |
| Context Length | 256K |
| Attention Mechanism | MLA |
| Activation Function | SwiGLU |
| Vision Encoder | MoonViT |
| Parameters of Vision Encoder | 400M |
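As a quick sanity check on the numbers above, the back-of-envelope sketch below relates the expert routing (8 selected plus 1 shared out of 384 experts) to the activated-to-total parameter ratio. The arithmetic and its interpretation are ours, not part of the model card.

```python
# Back-of-envelope check that ~32B activated parameters is consistent with
# 1T total parameters, assuming the bulk of the weights sit in the expert FFNs.
# All figures come from the table above; the arithmetic is illustrative only.

num_experts = 384
experts_per_token = 8
shared_experts = 1

expert_fraction = (experts_per_token + shared_experts) / num_experts
print(f"fraction of expert weights active per token: {expert_fraction:.1%}")  # ~2.3%

total_params = 1e12
activated_params = 32e9
print(f"activated / total parameters: {activated_params / total_params:.1%}")  # ~3.2%

# The activated fraction (3.2%) exceeds the expert fraction (2.3%) because
# non-expert components (attention, embeddings, the dense layer) are always active.
```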

3. Evaluation Results

| Benchmark | Kimi K2.5 (Thinking) | GPT-5.2 (xhigh) | Claude 4.5 Opus (Extended Thinking) | Gemini 3 Pro (High Thinking Level) | DeepSeek V3.2 (Thinking) | Qwen3-VL-235B-A22B-Thinking |
| --- | --- | --- | --- | --- | --- | --- |
| **Reasoning & Knowledge** | | | | | | |
| HLE-Full | 30.1 | 34.5 | 30.8 | 37.5 | 25.1† | - |
| HLE-Full (w/ tools) | 50.2 | 45.5 | 43.2 | 45.8 | 40.8 | - |
| AIME 2025 | 96.1 | 100 | 92.8 | 95.0 | 93.1 | - |
| HMMT 2025 (Feb) | 95.4 | 99.4 | 92.9* | 97.3* | 92.5 | - |
| IMO-AnswerBench | 81.8 | 86.3 | 78.5* | 83.1* | 78.3 | - |
| GPQA-Diamond | 87.6 | 92.4 | 87.0 | 91.9 | 82.4 | - |
| MMLU-Pro | 87.1 | 86.7* | 89.3* | 90.1 | 85.0 | - |
| **Image & Video** | | | | | | |
| MMMU-Pro | 78.5 | 79.5* | 74.0 | 81.0 | - | 69.3 |
| CharXiv (RQ) | 77.5 | 82.1 | 67.2* | 81.4 | - | 66.1 |
| MathVision | 84.2 | 83.0 | 77.1* | 86.1* | - | 74.6 |
| MathVista (mini) | 90.1 | 82.8* | 80.2* | 89.8* | - | 85.8 |
| ZeroBench | 9 | 9* | 3* | 8* | - | 4* |
| ZeroBench (w/ tools) | 11 | 7* | 9* | 12* | - | 3* |
| OCRBench | 92.3 | 80.7* | 86.5* | 90.3* | - | 87.5 |
| OmniDocBench 1.5 | 88.8 | 85.7 | 87.7* | 88.5 | - | 82.0* |
| InfoVQA (val) | 92.6 | 84* | 76.9* | 57.2* | - | 89.5 |
| SimpleVQA | 71.2 | 55.8* | 69.7* | 69.7* | - | 56.8* |
| WorldVQA | 46.3 | 28.0 | 36.8 | 47.4 | - | 23.5 |
| VideoMMMU | 86.6 | 85.9 | 84.4* | 87.6 | - | 80.0 |
| MMVU | 80.4 | 80.8* | 77.3 | 77.5 | - | 71.1 |
| MotionBench | 70.4 | 64.8 | 60.3 | 70.3 | - | - |
| VideoMME | 87.4 | 86.0* | - | 88.4* | - | 79.0 |
| LongVideoBench | 79.8 | 76.5* | 67.2* | 77.7* | - | 65.6* |
| LVBench | 75.9 | - | - | 73.5* | - | 63.6 |
| **Coding** | | | | | | |
| SWE-Bench Verified | 76.8 | 80.0 | 80.9 | 76.2 | 73.1 | - |
| SWE-Bench Pro | 50.7 | 55.6 | 55.4* | - | - | - |
| SWE-Bench Multilingual | 73.0 | 72.0 | 77.5 | 65.0 | 70.2 | - |
| Terminal Bench 2.0 | 50.8 | 54.0 | 59.3 | 54.2 | 46.4 | - |
| PaperBench | 63.5 | 63.7* | 72.9* | - | 47.1 | - |
| CyberGym | 41.3 | - | 50.6 | 39.9* | 17.3* | - |
| SciCode | 48.7 | 52.1 | 49.5 | 56.1 | 38.9 | - |
| OJBench (cpp) | 57.4 | - | 54.6* | 68.5* | 54.7* | - |
| LiveCodeBench (v6) | 85.0 | - | 82.2* | 87.4* | 83.3 | - |
| **Long Context** | | | | | | |
| Longbench v2 | 61.0 | 54.5* | 64.4* | 68.2* | 59.8* | - |
| AA-LCR | 70.0 | 72.3* | 71.3* | 65.3* | 64.3* | - |
| **Agentic Search** | | | | | | |
| BrowseComp | 60.6 | 65.8 | 37.0 | 37.8 | 51.4 | - |
| BrowseComp (w/ ctx manage) | 74.9 | - | 57.8 | 59.2 | 67.6 | - |
| BrowseComp (Agent Swarm) | 78.4 | - | - | - | - | - |
| WideSearch (item-f1) | 72.7 | - | 76.2* | 57.0 | 32.5* | - |
| WideSearch (item-f1, Agent Swarm) | 79.0 | - | - | - | - | - |
| DeepSearchQA | 77.1 | 71.3* | 76.1* | 63.2* | 60.9* | - |
| FinSearchComp T2&T3 | 67.8 | - | 66.2* | 49.9 | 59.1* | - |
| Seal-0 | 57.4 | 45.0 | 47.7* | 45.5* | 49.5* | - |
Footnotes
  1. General Testing Details
    • We report results for Kimi K2.5 and DeepSeek-V3.2 with thinking mode enabled, Claude Opus 4.5 with extended thinking mode, GPT-5.2 with xhigh reasoning effort, and Gemini 3 Pro with a high thinking level. For vision benchmarks, we additionally report results for Qwen3-VL-235B-A22B-Thinking.
    • Unless otherwise specified, all Kimi K2.5 experiments were conducted with temperature = 1.0, top-p = 0.95, and a context length of 256k tokens.
    • Benchmarks without publicly available scores were re-evaluated under the same conditions used for Kimi K2.5 and are marked with an asterisk (*).
    • We could not evaluate GPT-5.2 xhigh on all benchmarks due to service stability issues. For benchmarks that were not tested, we mark them as "-".
  2. Text and Reasoning
    • HLE, AIME 2025, HMMT 2025 (Feb), and GPQA-Diamond were evaluated with a maximum completion budget of 96k tokens.
    • Results for AIME and HMMT are averaged over 32 runs (avg@32); GPQA-Diamond over 8 runs (avg@8).
    • For HLE, we report scores on the full set (text & image). Kimi K2.5 scores 31.5 (text) and 21.3 (image) without tools, and 51.8 (text) and 39.8 (image) with tools. The DeepSeek-V3.2 score corresponds to its text-only subset (marked with †). Access to Hugging Face was blocked during evaluation to prevent potential data leakage. HLE with tools uses simple context management: once the context exceeds a threshold, only the latest round of tool messages is retained (a minimal sketch of this strategy follows these footnotes).
  3. Tool-Augmented / Agentic Search
    • Kimi K2.5 was equipped with search, code-interpreter, and web-browsing tools for HLE with tools and all agentic search benchmarks.
    • Except for BrowseComp (where K2.5 and DeepSeek-V3.2 used the discard-all strategy), no context management was applied, and tasks exceeding the supported context length were directly counted as failed.
    • The test system prompts emphasize deep and proactive tool use, instructing models to reason carefully, leverage tools, and verify uncertain information. Full prompts will be provided in the technical report.
    • Results for Seal-0 and WideSearch are averaged over four runs (avg@4).
  4. Vision Benchmarks
    • Vision benchmarks use max-tokens = 64k and are averaged over three runs (avg@3).
    • ZeroBench (w/ tools) uses max-tokens-per-step = 24k and max-steps = 30 for multi-step reasoning.
    • MMMU-Pro follows the official protocol, preserving input order and prepending images.
    • GPT-5.2 (xhigh) had a ~10% failure rate (no output despite 3 retries); failed cases were treated as incorrect, so its reported scores likely underestimate true performance.
    • WorldVQA is a benchmark designed to evaluate atomic, vision-centric world knowledge; it is available at https://github.com/MoonshotAI/WorldVQA.
    • OmniDocBench Score is computed as (1 − normalized Levenshtein distance) × 100, where a higher score denotes superior accuracy (a short worked sketch of this formula also follows these footnotes).
  5. Coding Tasks
    • Terminal-Bench 2.0 scores were obtained with the default agent framework (Terminus-2) and the provided JSON parser. We evaluated Terminal-Bench 2.0 in non-thinking mode because our current context-management strategy for thinking mode is incompatible with Terminus-2.
    • For the SWE-Bench series of evaluations (including Verified, Multilingual, and Pro), we used an internally developed evaluation framework. This framework includes a minimal set of tools (bash, createfile, insert, view, strreplace, and submit), along with tailored system prompts designed for the tasks. The highest scores were achieved under non-thinking mode.
    • The score of Claude Opus 4.5 on CyberGym is reported under the non-thinking setting.
    • All reported scores of coding tasks are averaged over 5 independent runs.
  6. Long-Context Benchmarks
    • AA-LCR: scores averaged over three runs (avg@3).
    • LongBench-V2: identical prompts and input contexts standardized to ~128k tokens.
  7. Agent Swarm
    • BrowseComp (Swarm Mode): main agent max 15 steps; sub-agents max 100 steps.
    • WideSearch (Swarm Mode): main and sub-agents max 100 steps.
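The context-management strategy mentioned in footnotes 2 and 3 (once the context exceeds a threshold, keep only the latest round of tool messages) can be sketched roughly as follows. The OpenAI-style message dicts, the character-count token proxy, and the 200k threshold are illustrative assumptions, not the actual evaluation harness.

```python
# Sketch of the "keep only the latest round of tool messages" strategy from the
# footnotes. Message dicts are assumed OpenAI-style; the threshold is arbitrary.

def count_tokens(messages: list[dict]) -> int:
    # Stand-in for a real tokenizer; counts characters as a rough proxy.
    return sum(len(m.get("content") or "") for m in messages)

def manage_context(messages: list[dict], max_context_tokens: int = 200_000) -> list[dict]:
    """If the conversation exceeds the threshold, drop all tool messages
    except those belonging to the most recent round of tool calls."""
    if count_tokens(messages) <= max_context_tokens:
        return messages

    # Find where the latest contiguous run of tool messages begins.
    last_round_start = len(messages)
    for i in range(len(messages) - 1, -1, -1):
        if messages[i]["role"] == "tool":
            last_round_start = i
        elif last_round_start != len(messages):
            break  # walked past the start of the latest tool round

    # Keep all non-tool messages plus only the latest round of tool messages.
    return [
        m for i, m in enumerate(messages)
        if m["role"] != "tool" or i >= last_round_start
    ]
```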
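Likewise, the OmniDocBench score from footnote 4, (1 − normalized Levenshtein distance) × 100, can be reproduced in a few lines. Normalizing by the longer string is a common convention we assume here, and the example strings are made up.

```python
# Score = (1 - normalized Levenshtein distance) * 100, per footnote 4.

def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(
                prev[j] + 1,               # deletion
                curr[j - 1] + 1,           # insertion
                prev[j - 1] + (ca != cb),  # substitution
            ))
        prev = curr
    return prev[-1]

def omnidoc_score(prediction: str, reference: str) -> float:
    distance = levenshtein(prediction, reference)
    # Normalizing by the longer string is an assumed (common) convention.
    normalized = distance / max(len(prediction), len(reference), 1)
    return (1 - normalized) * 100

# Toy example: one substituted character in a nine-character string -> ~88.9
print(omnidoc_score("Kimi K2,5", "Kimi K2.5"))
```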

4. Native INT4 Quantization

Kimi-K2.5 adopts the same native INT4 quantization method as Kimi-K2-Thinking.
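Details of the recipe are described in the Kimi-K2-Thinking release. As a rough illustration of what weight-only INT4 quantization involves, the sketch below shows generic per-group symmetric quantization and dequantization; the group size and scaling scheme are assumptions for illustration, not the actual method used for Kimi-K2.5.

```python
import numpy as np

# Generic per-group symmetric INT4 weight quantization sketch.
# Group size and scaling scheme are illustrative assumptions; they are not
# necessarily the native method used by Kimi-K2.5 / Kimi-K2-Thinking.

GROUP_SIZE = 32  # number of weights sharing one scale (assumption)

def quantize_int4(w: np.ndarray, group_size: int = GROUP_SIZE):
    """Quantize a 1-D float weight vector to INT4 values in [-8, 7] plus per-group scales."""
    assert w.size % group_size == 0, "pad weights to a multiple of the group size"
    groups = w.reshape(-1, group_size)
    scales = np.abs(groups).max(axis=1, keepdims=True) / 7.0  # map max |w| to 7
    scales = np.where(scales == 0, 1.0, scales)               # avoid division by zero
    q = np.clip(np.round(groups / scales), -8, 7).astype(np.int8)
    return q, scales

def dequantize_int4(q: np.ndarray, scales: np.ndarray) -> np.ndarray:
    """Recover approximate float weights from INT4 values and per-group scales."""
    return (q.astype(np.float32) * scales).reshape(-1)

# Round-trip example on random weights: the reconstruction error stays small.
w = np.random.randn(1024).astype(np.float32) * 0.02
q, s = quantize_int4(w)
w_hat = dequantize_int4(q, s)
print("max abs error:", float(np.abs(w - w_hat).max()))
```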