
MiniMaxAI/MiniMax-M2

$0.27 in / $1.15 out

fp8 · 262,144 context · JSON output · Function calling

Model Information

Meet MiniMax-M2

MiniMax-M2 is a Mini model built for Max coding & agentic workflows.

MiniMax-M2 redefines efficiency for agents. It is a compact, fast, and cost-effective MoE model (230 billion total parameters, 10 billion active) built for elite performance in coding and agentic tasks while maintaining powerful general intelligence. With so few activated parameters, MiniMax-M2 delivers the sophisticated, end-to-end tool-use performance expected from today's leading models in a streamlined form factor that makes deployment and scaling easier than ever.
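For readers who want to try the model programmatically, here is a minimal sketch that assumes MiniMax-M2 is served behind an OpenAI-compatible chat-completions endpoint (a common setup on hosting platforms); the base URL, environment variable names, and prompt are illustrative placeholders, not part of this model card.

```python
# Minimal sketch: querying MiniMax-M2 through an OpenAI-compatible
# chat-completions API. The base URL and environment variables are
# placeholders; substitute the values your provider documents.
import os

from openai import OpenAI

client = OpenAI(
    base_url=os.environ.get("INFERENCE_BASE_URL", "https://example-provider.com/v1"),
    api_key=os.environ["INFERENCE_API_KEY"],
)

response = client.chat.completions.create(
    model="MiniMaxAI/MiniMax-M2",
    messages=[
        {"role": "system", "content": "You are a concise coding assistant."},
        {"role": "user", "content": "Write a Python function that merges two sorted lists."},
    ],
    max_tokens=512,
)

print(response.choices[0].message.content)
```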


Highlights

Superior Intelligence. According to benchmarks from Artificial Analysis, MiniMax-M2 demonstrates highly competitive general intelligence across mathematics, science, instruction following, coding, and agentic tool use. Its composite score ranks #1 among open-source models globally.

Advanced Coding. Engineered for end-to-end developer workflows, MiniMax-M2 excels at multi-file edits, code-run-fix loops, and test-validated repairs. Strong performance on Terminal-Bench and (Multi-)SWE-Bench–style tasks demonstrates practical effectiveness in terminals, IDEs, and CI across languages.

Agent Performance. MiniMax-M2 plans and executes complex, long-horizon toolchains across shell, browser, retrieval, and code runners. In BrowseComp-style evaluations, it consistently locates hard-to-surface sources, keeps evidence traceable, and recovers gracefully from flaky steps; a minimal tool-calling sketch follows this list.

Efficient Design. With 10 billion activated parameters (230 billion in total), MiniMax-M2 delivers lower latency, lower cost, and higher throughput for interactive agents and batched sampling—perfectly aligned with the shift toward highly deployable models that still shine on coding and agentic tasks.
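To make the end-to-end tool use highlighted above more concrete, the sketch below passes a single hypothetical tool to the model through the OpenAI-compatible `tools` parameter; the `run_tests` tool, its schema, and the client configuration are assumptions for illustration only.

```python
# Sketch of one tool-use turn via the OpenAI-compatible `tools` parameter.
# The `run_tests` tool is hypothetical; a full agent would execute the
# requested call, append the result as a `tool` message, and continue.
import json
import os

from openai import OpenAI

client = OpenAI(
    base_url=os.environ.get("INFERENCE_BASE_URL", "https://example-provider.com/v1"),
    api_key=os.environ["INFERENCE_API_KEY"],
)

tools = [{
    "type": "function",
    "function": {
        "name": "run_tests",
        "description": "Run the project's test suite and return any failures.",
        "parameters": {
            "type": "object",
            "properties": {
                "path": {"type": "string", "description": "Test file or directory to run."},
            },
            "required": ["path"],
        },
    },
}]

response = client.chat.completions.create(
    model="MiniMaxAI/MiniMax-M2",
    messages=[{"role": "user", "content": "The auth tests are failing; find out why."}],
    tools=tools,
)

# Print any tool calls the model decided to make.
for call in response.choices[0].message.tool_calls or []:
    print(call.function.name, json.loads(call.function.arguments))
```

In a full agent loop, each tool result would be appended back to the conversation as a tool message and the model called again until it stops requesting tools.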


Coding & Agentic Benchmarks

These comprehensive evaluations test real-world end-to-end coding and agentic tool use: editing real repos, executing commands, browsing the web, and delivering functional solutions. Performance on this suite correlates with day-to-day developer experience in terminals, IDEs, and CI.

Benchmark | MiniMax-M2 | Claude Sonnet 4 | Claude Sonnet 4.5 | Gemini 2.5 Pro | GPT-5 (thinking) | GLM-4.6 | Kimi K2 0905 | DeepSeek-V3.2
--- | --- | --- | --- | --- | --- | --- | --- | ---
SWE-bench Verified | 69.4 | 72.7* | 77.2* | 63.8* | 74.9* | 68* | 69.2* | 67.8*
Multi-SWE-Bench | 36.2 | 35.7* | 44.3 | / | / | 30 | 33.5 | 30.6
SWE-bench Multilingual | 56.5 | 56.9* | 68 | / | / | 53.8 | 55.9* | 57.9*
Terminal-Bench | 46.3 | 36.4* | 50* | 25.3* | 43.8* | 40.5* | 44.5* | 37.7*
ArtifactsBench | 66.8 | 57.3* | 61.5 | 57.7* | 73* | 59.8 | 54.2 | 55.8
BrowseComp | 44 | 12.2 | 19.6 | 9.9 | 54.9* | 45.1* | 14.1 | 40.1*
BrowseComp-zh | 48.5 | 29.1 | 40.8 | 32.2 | 65 | 49.5 | 28.8 | 47.9*
GAIA (text only) | 75.7 | 68.3 | 71.2 | 60.2 | 76.4 | 71.9 | 60.2 | 63.5
xbench-DeepSearch | 72 | 64.6 | 66 | 56 | 77.8 | 70 | 61 | 71
HLE (w/ tools) | 31.8 | 20.3 | 24.5 | 28.4* | 35.2* | 30.4* | 26.9* | 27.2*
τ²-Bench | 77.2 | 65.5* | 84.7* | 59.2 | 80.1* | 75.9* | 70.3 | 66.7
FinSearchComp-global | 65.5 | 42 | 60.8 | 42.6* | 63.9* | 29.2 | 29.5* | 26.2
AgentCompany | 36 | 37 | 41 | 39.3* | / | 35 | 30 | 34

Notes: Data points marked with an asterisk (*) are taken directly from the corresponding model's official tech report or blog. All other metrics were obtained using the evaluation methods described below.

  • SWE-bench Verified: We use the same scaffold as R2E-Gym (Jain et al. 2025) on top of OpenHands to evaluate agents on SWE tasks. All scores are validated on our internal infrastructure with 128k context length, 100 max steps, and no test-time scaling. All git-related content is removed to ensure the agent sees only the code at the issue point.
  • Multi-SWE-Bench & SWE-bench Multilingual: All scores are averaged across 8 runs using the claude-code CLI (300 max steps) as the evaluation scaffold.
  • Terminal-Bench: All scores are evaluated with the official claude-code from the original Terminal-Bench repository (commit 94bf692), averaged over 8 runs to report the mean pass rate.
  • ArtifactsBench: All scores are computed by averaging three runs with the official implementation of ArtifactsBench, using the stable Gemini-2.5-Pro as the judge model.
  • BrowseComp & BrowseComp-zh & GAIA (text only) & xbench-DeepSearch: All reported scores use the same agent framework as WebExplorer (Liu et al. 2025), with minor adjustments to the tool descriptions. We use the 103-sample text-only GAIA validation subset, following WebExplorer (Liu et al. 2025).
  • HLE (w/ tools): All reported scores are obtained using search tools and a Python tool. The search tools employ the same agent framework as WebExplorer (Liu et al. 2025), and the Python tool runs in a Jupyter environment. We use the text-only HLE subset.
  • τ²-Bench: All scores reported use "extended thinking with tool use", and employ GPT-4.1 as the user simulator.
  • FinSearchComp-global: Official results are reported for GPT-5-Thinking, Gemini 2.5 Pro, and Kimi-K2. Other models are evaluated with the open-source FinSearchComp (Hu et al. 2025) framework, using both search and Python tools launched simultaneously for consistency.
  • AgentCompany: All scores reported use OpenHands 0.42 agent framework.

Intelligence Benchmarks

We align with Artificial Analysis, which aggregates challenging benchmarks using a consistent methodology to reflect a model’s broader intelligence profile across math, science, instruction following, coding, and agentic tool use.

Metric (AA) | MiniMax-M2 | Claude Sonnet 4 | Claude Sonnet 4.5 | Gemini 2.5 Pro | GPT-5 (thinking) | GLM-4.6 | Kimi K2 0905 | DeepSeek-V3.2
--- | --- | --- | --- | --- | --- | --- | --- | ---
AIME25 | 78 | 74 | 88 | 88 | 94 | 86 | 57 | 88
MMLU-Pro | 82 | 84 | 88 | 86 | 87 | 83 | 82 | 85
GPQA-Diamond | 78 | 78 | 83 | 84 | 85 | 78 | 77 | 80
HLE (w/o tools) | 12.5 | 9.6 | 17.3 | 21.1 | 26.5 | 13.3 | 6.3 | 13.8
LiveCodeBench (LCB) | 83 | 66 | 71 | 80 | 85 | 70 | 61 | 79
SciCode | 36 | 40 | 45 | 43 | 43 | 38 | 31 | 38
IFBench | 72 | 55 | 57 | 49 | 73 | 43 | 42 | 54
AA-LCR | 61 | 65 | 66 | 66 | 76 | 54 | 52 | 69
τ²-Bench-Telecom | 87 | 65 | 78 | 54 | 85 | 71 | 73 | 34
Terminal-Bench-Hard | 24 | 30 | 33 | 25 | 31 | 23 | 23 | 29
AA Intelligence | 61 | 57 | 63 | 60 | 69 | 56 | 50 | 57

AA: MiniMax-M2 scores are aligned with the Artificial Analysis Intelligence Benchmarking Methodology (https://artificialanalysis.ai/methodology/intelligence-benchmarking); scores for all other models are taken from https://artificialanalysis.ai/.


Why activation size matters

By keeping activations around 10B parameters, the plan → act → verify loop at the heart of agentic workflows stays streamlined, improving responsiveness and reducing compute overhead:

  • Faster feedback cycles in compile-run-test and browse-retrieve-cite chains.

  • More concurrent runs on the same budget for regression suites and multi-seed explorations.

  • Simpler capacity planning with smaller per-request memory and steadier tail latency.

In short: 10B activations = responsive agent loops + better unit economics.
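As a concrete, if deliberately simplified, picture of this loop, the sketch below wires up a plan → act → verify cycle; the `plan`, `act`, and `verify` helpers are hypothetical stand-ins for a model call, tool execution, and a test run, not a real agent framework.

```python
# Illustrative plan -> act -> verify loop. Every helper below is a
# hypothetical placeholder: `plan` stands in for a MiniMax-M2 call,
# `act` for tool execution (shell, editor, browser), and `verify`
# for running tests or other checks.
from dataclasses import dataclass


@dataclass
class Step:
    action: str
    result: str = ""
    ok: bool = False


def plan(task: str, history: list[Step]) -> Step:
    # A real agent would ask the model for the next action given the history.
    return Step(action=f"propose a patch for: {task}")


def act(step: Step) -> Step:
    # Execute the proposed action and record its output.
    step.result = "patch applied"
    return step


def verify(step: Step) -> Step:
    # Run tests or other checks; low per-step latency keeps this inner loop tight.
    step.ok = True
    return step


def agent_loop(task: str, max_steps: int = 10) -> list[Step]:
    history: list[Step] = []
    for _ in range(max_steps):
        step = verify(act(plan(task, history)))
        history.append(step)
        if step.ok:  # stop as soon as verification passes
            break
    return history


if __name__ == "__main__":
    for step in agent_loop("fix the failing auth test"):
        print(step)
```

Because only about 10B parameters are activated per token, each plan and verify call returns quickly, which is what makes running many such loops concurrently economical.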

At a glance

If you need frontier-style coding and agents without frontier-scale costs, MiniMax-M2 hits the sweet spot: fast inference speeds, robust tool-use capabilities, and a deployment-friendly footprint.

We look forward to your feedback and to collaborating with developers and researchers to bring the future of intelligent collaboration one step closer.