
Qwen/Qwen2.5-72B-Instruct

Qwen2.5 is pretrained on a large-scale dataset of up to 18 trillion tokens and offers significant improvements in knowledge, coding, mathematics, and instruction following over its predecessor, Qwen2. It also improves long-text generation, structured-data understanding, and structured-output generation, and supports more than 29 languages.

  • Visibility: Public
  • Price: $0.35 / $0.40 per Mtoken (input / output)
  • Precision: bfloat16
  • Context: 32k

Qwen2.5-72B-Instruct

Introduction

Qwen2.5 is the latest series of Qwen large language models. For Qwen2.5, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters. Qwen2.5 brings the following improvements upon Qwen2:

  • Significantly more knowledge and greatly improved capabilities in coding and mathematics, thanks to our specialized expert models in these domains.
  • Significant improvements in instruction following, generating long texts (over 8K tokens), understanding structured data (e.g., tables), and generating structured outputs, especially JSON. It is also more resilient to diverse system prompts, improving role-play and condition-setting for chatbots.
  • Long-context support: up to 128K tokens of context and up to 8K tokens of generation.
  • Multilingual support for over 29 languages, including Chinese, English, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, Arabic, and more.
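Since the model is tuned for structured outputs, especially JSON, one common way to use it is through an OpenAI-compatible chat endpoint. The sketch below only builds the request payload; the endpoint wiring and the exact system-prompt wording are assumptions for illustration, not part of this card.

```python
import json


def build_chat_request(user_question: str) -> dict:
    """Build a chat-completion payload that asks for strict JSON output.

    Hypothetical example: field names follow the common OpenAI-compatible
    chat schema; actual serving parameters may differ by provider.
    """
    return {
        "model": "Qwen/Qwen2.5-72B-Instruct",
        "messages": [
            {
                "role": "system",
                "content": 'Reply only with a JSON object of the form {"answer": ...}.',
            },
            {"role": "user", "content": user_question},
        ],
        "temperature": 0.7,
        "max_tokens": 512,
    }


payload = build_chat_request("What is the capital of France?")
print(json.dumps(payload, indent=2))
```

The system prompt pins down the output shape; with Qwen2.5's improved JSON generation, the reply can then usually be parsed directly with `json.loads`.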

This repo contains the instruction-tuned 72B Qwen2.5 model, which has the following features:

  • Type: Causal Language Models
  • Training Stage: Pretraining & Post-training
  • Architecture: transformers with RoPE, SwiGLU, RMSNorm, and Attention QKV bias
  • Number of Parameters: 72.7B
  • Number of Parameters (Non-Embedding): 70.0B
  • Number of Layers: 80
  • Number of Attention Heads (GQA): 64 for Q and 8 for KV
  • Context Length: Full 131,072 tokens and generation 8192 tokens
    • Please refer to this section for detailed instructions on how to deploy Qwen2.5 for handling long texts.
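The GQA head counts above (64 query heads, 8 KV heads) matter mostly for the KV cache: only the 8 KV heads are cached, so the cache is 8x smaller than it would be with full multi-head attention. A back-of-envelope sketch, assuming a head dimension of 128 and bfloat16 values (neither is stated in this card):

```python
# Back-of-envelope KV-cache size for Qwen2.5-72B-Instruct.
NUM_LAYERS = 80     # from the spec list above
NUM_Q_HEADS = 64    # query heads
NUM_KV_HEADS = 8    # key/value heads (GQA)
HEAD_DIM = 128      # assumption, not stated in the card
BYTES_PER_VALUE = 2  # bfloat16


def kv_cache_bytes_per_token(kv_heads: int) -> int:
    # Per layer, each token stores one K and one V tensor of
    # kv_heads * HEAD_DIM values.
    return 2 * NUM_LAYERS * kv_heads * HEAD_DIM * BYTES_PER_VALUE


gqa_bytes = kv_cache_bytes_per_token(NUM_KV_HEADS)  # 327,680 B (320 KiB/token)
mha_bytes = kv_cache_bytes_per_token(NUM_Q_HEADS)   # 8x larger without GQA
print(gqa_bytes, mha_bytes // gqa_bytes)

# Cache for the full 131,072-token context window, in GiB.
full_context_gib = gqa_bytes * 131_072 / 2**30
print(full_context_gib)
```

Under these assumptions, a full 131,072-token cache costs about 40 GiB in bfloat16, which is why long-context serving of a 72B model typically needs multi-GPU deployment.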

For more details, please refer to our blog, GitHub, and Documentation.

Evaluation & Performance

Detailed evaluation results are reported in this 📑 blog.