deepseek-ai/DeepSeek-R1-Distill-Llama-70B

DeepSeek-R1-Distill-Llama-70B is a highly efficient language model that leverages knowledge distillation to achieve state-of-the-art performance. This model distills the reasoning patterns of larger models into a smaller, more agile architecture, resulting in exceptional results on benchmarks like AIME 2024, MATH-500, and LiveCodeBench. With 70 billion parameters, DeepSeek-R1-Distill-Llama-70B offers a unique balance of accuracy and efficiency, making it an ideal choice for a wide range of natural language processing tasks.

Public
Pricing: $0.23 / $0.69 per Mtoken (input / output)
Precision: bfloat16
Context length: 131,072 tokens
Project | Paper
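
Hosted deployments of this model are typically exposed through an OpenAI-compatible chat completions API. The snippet below is a minimal sketch of such a call; the base URL, API-key environment variable, and sampling settings are assumptions and should be replaced with your provider's documented values.

```python
# Minimal sketch: querying the hosted model through an OpenAI-compatible
# chat completions API. The base URL and API-key variable are assumptions;
# substitute the values documented by your hosting provider.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.example-provider.com/v1/openai",  # assumed endpoint
    api_key=os.environ["PROVIDER_API_KEY"],                 # assumed env var name
)

response = client.chat.completions.create(
    model="deepseek-ai/DeepSeek-R1-Distill-Llama-70B",
    messages=[{"role": "user", "content": "Prove that the square root of 2 is irrational."}],
    temperature=0.6,   # illustrative moderate sampling temperature
    max_tokens=2048,
)
print(response.choices[0].message.content)
```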

1. Introduction

Distillation: Smaller Models Can Be Powerful Too

  • We demonstrate that the reasoning patterns of larger models can be distilled into smaller models, yielding better performance than the reasoning patterns discovered through RL on small models. The open-sourced DeepSeek-R1, as well as its API, will help the research community distill better small models in the future.
  • Using the reasoning data generated by DeepSeek-R1, we fine-tuned several dense models that are widely used in the research community. The evaluation results show that the distilled smaller dense models perform exceptionally well on benchmarks. We open-source 1.5B, 7B, 8B, 14B, 32B, and 70B distilled checkpoints based on the Qwen2.5 and Llama3 series (a minimal fine-tuning sketch follows this list).
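
To make the recipe concrete, the sketch below illustrates the distillation step described above: supervised fine-tuning of a smaller dense model on reasoning traces generated by a stronger teacher. It is a minimal sketch, not the authors' training code; the student model, dataset contents, hyperparameters, and loss setup are assumptions.

```python
# Minimal sketch of the distillation recipe: supervised fine-tuning of a small
# dense model on teacher-generated reasoning traces. Model choice, data, and
# hyperparameters are illustrative assumptions, not the authors' configuration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

student_name = "meta-llama/Llama-3.1-8B"  # assumed student base model
tokenizer = AutoTokenizer.from_pretrained(student_name)
model = AutoModelForCausalLM.from_pretrained(student_name, torch_dtype=torch.bfloat16)
model.train()

# Teacher-generated (prompt, reasoning + answer) pairs; in the paper these are
# ~800k samples curated from DeepSeek-R1 outputs.
distill_data = [
    {"prompt": "What is 17 * 24?",
     "completion": "<think>17 * 24 = 17 * 20 + 17 * 4 = 340 + 68 = 408</think>\n408"},
]

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
for sample in distill_data:
    text = sample["prompt"] + "\n" + sample["completion"] + tokenizer.eos_token
    batch = tokenizer(text, return_tensors="pt")
    # Standard causal-LM objective on the teacher's trace (labels = inputs);
    # production pipelines would usually mask the prompt tokens in the loss.
    loss = model(**batch, labels=batch["input_ids"]).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```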

2. Model Downloads

DeepSeek-R1-Distill models are fine-tuned from open-source base models using samples generated by DeepSeek-R1. We slightly changed their configs and tokenizers, so please use our settings when running these models.
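
Because the configs and tokenizers were adjusted, the released checkpoint should be loaded directly from the distilled model ID rather than from the original base model. A minimal loading sketch with Hugging Face transformers follows; the generation settings are illustrative assumptions, not an official serving recipe.

```python
# Minimal sketch: loading the released distilled checkpoint with its own
# config and tokenizer (per the note above). Generation settings are
# illustrative, not an official serving recommendation.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Llama-70B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,   # matches the listed serving precision
    device_map="auto",            # shard the 70B weights across available GPUs
)

messages = [{"role": "user", "content": "How many primes are there below 100?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=1024, temperature=0.6, do_sample=True)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```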

3. Evaluation Results

Distilled Model Evaluation

| Model | AIME 2024 pass@1 (%) | AIME 2024 cons@64 (%) | MATH-500 pass@1 (%) | GPQA Diamond pass@1 (%) | LiveCodeBench pass@1 (%) | CodeForces rating |
|---|---|---|---|---|---|---|
| GPT-4o-0513 | 9.3 | 13.4 | 74.6 | 49.9 | 32.9 | 759 |
| Claude-3.5-Sonnet-1022 | 16.0 | 26.7 | 78.3 | 65.0 | 38.9 | 717 |
| o1-mini | 63.6 | 80.0 | 90.0 | 60.0 | 53.8 | 1820 |
| QwQ-32B-Preview | 44.0 | 60.0 | 90.6 | 54.5 | 41.9 | 1316 |
| DeepSeek-R1-Distill-Qwen-1.5B | 28.9 | 52.7 | 83.9 | 33.8 | 16.9 | 954 |
| DeepSeek-R1-Distill-Qwen-7B | 55.5 | 83.3 | 92.8 | 49.1 | 37.6 | 1189 |
| DeepSeek-R1-Distill-Qwen-14B | 69.7 | 80.0 | 93.9 | 59.1 | 53.1 | 1481 |
| DeepSeek-R1-Distill-Qwen-32B | 72.6 | 83.3 | 94.3 | 62.1 | 57.2 | 1691 |
| DeepSeek-R1-Distill-Llama-8B | 50.4 | 80.0 | 89.1 | 49.0 | 39.6 | 1205 |
| DeepSeek-R1-Distill-Llama-70B | 70.0 | 86.7 | 94.5 | 65.2 | 57.5 | 1633 |
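
For reference, pass@1 in tables like this is typically estimated by sampling several responses per problem and averaging their correctness, while cons@64 takes a majority vote over 64 sampled answers and scores the voted answer. The sketch below illustrates both metrics under that assumption; sampling and answer extraction are assumed to have happened upstream.

```python
# Sketch of the two AIME-style metrics above, given per-problem lists of
# sampled final answers and the gold answer. Sampling and answer extraction
# are assumed to have been done upstream.
from collections import Counter

def pass_at_1(samples: list[str], gold: str) -> float:
    # Average correctness over the k sampled responses (k-sample estimate of pass@1).
    return sum(s == gold for s in samples) / len(samples)

def cons_at_k(samples: list[str], gold: str) -> float:
    # Majority vote over the k samples, then check the voted answer.
    voted, _ = Counter(samples).most_common(1)[0]
    return float(voted == gold)

# Toy usage with 4 samples instead of 64:
print(pass_at_1(["408", "408", "410", "408"], "408"))  # 0.75
print(cons_at_k(["408", "408", "410", "408"], "408"))  # 1.0
```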

4. License

This code repository and the model weights are licensed under the MIT License. The DeepSeek-R1 series supports commercial use and allows any modifications and derivative works, including, but not limited to, distillation for training other LLMs. Please note that:

  • DeepSeek-R1-Distill-Qwen-1.5B, DeepSeek-R1-Distill-Qwen-7B, DeepSeek-R1-Distill-Qwen-14B, and DeepSeek-R1-Distill-Qwen-32B are derived from the Qwen-2.5 series, originally licensed under the Apache 2.0 License, and are now fine-tuned with 800k samples curated with DeepSeek-R1.
  • DeepSeek-R1-Distill-Llama-8B is derived from Llama3.1-8B-Base, originally licensed under the Llama 3.1 license.
  • DeepSeek-R1-Distill-Llama-70B is derived from Llama3.3-70B-Instruct, originally licensed under the Llama 3.3 license.