
mistralai/Mistral-Small-24B-Instruct-2501

Mistral Small 3 is a 24B-parameter language model optimized for low-latency performance across common AI tasks. Released under the Apache 2.0 license, it features both pre-trained and instruction-tuned versions designed for efficient local deployment. The model achieves 81% accuracy on the MMLU benchmark and performs competitively with larger models like Llama 3.3 70B and Qwen 32B, while operating at three times the speed on equivalent hardware.
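Since the listing advertises JSON structured output, here is a minimal sketch of a chat-completion request payload, assuming an OpenAI-compatible endpoint; the message contents are illustrative and the request is only constructed, not sent:

```python
# Minimal sketch of an OpenAI-compatible chat-completion request payload
# for this model. Message contents are illustrative placeholders.
import json

payload = {
    "model": "mistralai/Mistral-Small-24B-Instruct-2501",
    "messages": [
        {"role": "system", "content": "Reply in JSON."},
        {"role": "user", "content": "List three EU capitals."},
    ],
    # Request a JSON object response (JSON output is supported per the listing).
    "response_format": {"type": "json_object"},
    "max_tokens": 256,
}

# This string would be POSTed to the provider's chat-completions endpoint.
body = json.dumps(payload)
```

The `response_format` field follows the OpenAI-style convention; exact parameter support depends on the serving provider.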

Visibility: Public
Pricing: $0.07 input / $0.14 output per million tokens
Quantization: fp8
Context length: 32,768 tokens
Structured output: JSON
Links: Project · Paper · License
Try it: Demo · API
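At the listed rates ($0.07 per million input tokens, $0.14 per million output tokens), the cost of a single request can be estimated as follows; the token counts in the example are illustrative:

```python
# Estimate request cost from the listed per-million-token rates.
INPUT_RATE = 0.07 / 1_000_000   # USD per input token
OUTPUT_RATE = 0.14 / 1_000_000  # USD per output token

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of one request."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# e.g. a full 32,768-token context with a 1,000-token completion:
cost = estimate_cost(32_768, 1_000)  # ~ $0.0024
```

Filling the entire 32,768-token context window with a typical completion thus costs a fraction of a cent per request at these rates.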

Revision: 010d42b0ae15e140bf9c5e02ca88273b9c257a89

Updated: 2025-01-31T20:14:03+00:00