
mistralai/Mistral-Small-24B-Instruct-2501

Mistral Small 3 is a 24B-parameter language model optimized for low-latency performance across common AI tasks. Released under the Apache 2.0 license, it features both pre-trained and instruction-tuned versions designed for efficient local deployment. The model achieves 81% accuracy on the MMLU benchmark and performs competitively with larger models like Llama 3.3 70B and Qwen 32B, while operating at three times the speed on equivalent hardware.

Visibility: Public
Pricing: $0.07 / $0.14 per Mtoken (input / output)
Precision: fp8
Context length: 32,768 tokens
Structured output: JSON
Links: Project · Paper · License
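
As a quick worked example of the per-Mtoken pricing above, request cost scales linearly with token counts. The snippet below is a minimal sketch: the rates come from the listing above, while the token counts are made-up values chosen purely for illustration.

```python
# Minimal cost estimate for one request at the listed per-Mtoken rates.
# The token counts used in the example call below are hypothetical values.
INPUT_RATE_USD_PER_MTOKEN = 0.07   # $ per 1M input tokens (from the listing)
OUTPUT_RATE_USD_PER_MTOKEN = 0.14  # $ per 1M output tokens (from the listing)

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the dollar cost of a single request."""
    return (input_tokens * INPUT_RATE_USD_PER_MTOKEN
            + output_tokens * OUTPUT_RATE_USD_PER_MTOKEN) / 1_000_000

# e.g. a 2,000-token prompt with a 500-token completion:
print(f"${request_cost(2_000, 500):.6f}")  # -> $0.000210
```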

Model Card for Mistral-Small-24B-Instruct-2501

Mistral Small 3 (2501) sets a new benchmark in the "small" Large Language Models category below 70B, boasting 24B parameters and achieving state-of-the-art capabilities comparable to larger models!
This model is an instruction-fine-tuned version of the base model: Mistral-Small-24B-Base-2501.

Mistral Small can be deployed locally and is exceptionally "knowledge-dense", fitting in a single RTX 4090 or a 32GB RAM MacBook once quantized.
Perfect for:

  • Fast response conversational agents.
  • Low latency function calling.
  • Subject matter experts via fine-tuning.
  • Local inference for hobbyists and organizations handling sensitive data.
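
As a concrete illustration of the quantized local deployment mentioned above, here is a minimal sketch using the Hugging Face transformers stack with 4-bit bitsandbytes quantization. This is just one of several ways to fit the 24B weights on a single consumer GPU; the exact packages, settings, and memory headroom are assumptions, not part of this card.

```python
# Minimal local-inference sketch: load the instruct model with 4-bit quantization
# (one possible way to fit the 24B weights on a single consumer GPU) and chat once.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "mistralai/Mistral-Small-24B-Instruct-2501"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    quantization_config=BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_compute_dtype=torch.bfloat16,
    ),
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Explain what a context window is in one sentence."},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=128)
# Decode only the newly generated tokens.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```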

For enterprises that need specialized capabilities (increased context, particular modalities, domain-specific knowledge, etc.), we will be releasing commercial models beyond what Mistral AI contributes to the community.

This release demonstrates our commitment to open source, serving as a strong base model.

Learn more about Mistral Small in our blog post.

Model developer: Mistral AI Team

Key Features

  • Multilingual: Supports dozens of languages, including English, French, German, Spanish, Italian, Chinese, Japanese, Korean, Portuguese, Dutch, and Polish.
  • Agent-Centric: Offers best-in-class agentic capabilities with native function calling and JSON output (see the usage sketch after this list).
  • Advanced Reasoning: State-of-the-art conversational and reasoning capabilities.
  • Apache 2.0 License: Open license allowing usage and modification for both commercial and non-commercial purposes.
  • Context Window: A 32k context window.
  • System Prompt: Maintains strong adherence and support for system prompts.
  • Tokenizer: Utilizes a Tekken tokenizer with a 131k vocabulary size.
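
To make the agent-centric features above concrete, the sketch below sends one chat request with a system prompt and a native tool definition through an OpenAI-compatible endpoint, such as one exposed by a local vLLM server running this model. The server URL, API key, and the get_weather tool are illustrative assumptions, not part of this card.

```python
# Minimal sketch: chat completion with a system prompt and a native tool call,
# assuming an OpenAI-compatible server (e.g. a local vLLM instance serving
# mistralai/Mistral-Small-24B-Instruct-2501) is already running at the URL below.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed-locally")

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool, for illustration only
        "description": "Return the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="mistralai/Mistral-Small-24B-Instruct-2501",
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "What's the weather in Paris?"},
    ],
    tools=tools,
)

# If the model decided to call the tool, the arguments arrive as a JSON string.
print(response.choices[0].message.tool_calls)
```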

Benchmark results

Human evaluated benchmarks

| Category | Gemma-2-27B | Qwen-2.5-32B | Llama-3.3-70B | Gpt4o-mini |
|---|---|---|---|---|
| Mistral is better | 0.536 | 0.496 | 0.192 | 0.200 |
| Mistral is slightly better | 0.196 | 0.184 | 0.164 | 0.204 |
| Ties | 0.052 | 0.060 | 0.236 | 0.160 |
| Other is slightly better | 0.060 | 0.088 | 0.112 | 0.124 |
| Other is better | 0.156 | 0.172 | 0.296 | 0.312 |

Note:

  • We conducted side-by-side evaluations with an external third-party vendor on a set of over 1k proprietary coding and generalist prompts.
  • Evaluators were tasked with selecting their preferred model response from anonymized generations produced by Mistral Small 3 vs another model.
  • We are aware that in some cases the human-judgement benchmarks differ starkly from publicly available benchmarks, but we have taken extra care to verify that the evaluation was fair. We are confident that the above benchmarks are valid.

Publicly accessible benchmarks

Reasoning & Knowledge

| Evaluation | mistral-small-24B-instruct-2501 | gemma-2-27b | llama-3.3-70b | qwen2.5-32b | gpt-4o-mini-2024-07-18 |
|---|---|---|---|---|---|
| mmlu_pro_5shot_cot_instruct | 0.663 | 0.536 | 0.666 | 0.683 | 0.617 |
| gpqa_main_cot_5shot_instruct | 0.453 | 0.344 | 0.531 | 0.404 | 0.377 |

Math & Coding

| Evaluation | mistral-small-24B-instruct-2501 | gemma-2-27b | llama-3.3-70b | qwen2.5-32b | gpt-4o-mini-2024-07-18 |
|---|---|---|---|---|---|
| humaneval_instruct_pass@1 | 0.848 | 0.732 | 0.854 | 0.909 | 0.890 |
| math_instruct | 0.706 | 0.535 | 0.743 | 0.819 | 0.761 |

Instruction following

| Evaluation | mistral-small-24B-instruct-2501 | gemma-2-27b | llama-3.3-70b | qwen2.5-32b | gpt-4o-mini-2024-07-18 |
|---|---|---|---|---|---|
| mtbench_dev | 8.35 | 7.86 | 7.96 | 8.26 | 8.33 |
| wildbench | 52.27 | 48.21 | 50.04 | 52.73 | 56.13 |
| arena_hard | 0.873 | 0.788 | 0.840 | 0.860 | 0.897 |
| ifeval | 0.829 | 0.8065 | 0.8835 | 0.8401 | 0.8499 |

Note:

  • Performance accuracy on all benchmarks was obtained through the same internal evaluation pipeline - as such, numbers may vary slightly from previously reported performance (Qwen2.5-32B-Instruct, Llama-3.3-70B-Instruct, Gemma-2-27B-IT).
  • Judge-based evals such as Wildbench, Arena Hard, and MTBench used gpt-4o-2024-05-13 as the judge.