
DeepSeek Model Family

DeepSeek develops advanced foundation models optimized for computational efficiency and strong generalization across diverse tasks. The architecture incorporates recent advances in transformer-based systems, delivering robust performance in both zero-shot and fine-tuned scenarios. Models are pretrained on rigorously filtered multilingual corpora with specialized optimizations for mathematical reasoning and algorithmic tasks. The inference stack achieves competitive throughput while maintaining low latency, making it suitable for production deployment. Researchers and engineers can leverage these models for tasks ranging from natural language processing to complex analytical problem-solving.

Featured Model: deepseek-ai/DeepSeek-R1-0528-Turbo

DeepSeek-R1-0528 is a version upgrade of the DeepSeek-R1 model. It significantly deepens the model's reasoning and inference capabilities by applying increased computational resources and algorithmic optimizations during post-training. DeepSeek-R1-0528 demonstrates outstanding performance across benchmark evaluations in mathematics, programming, and general logic.

Price per 1M input tokens: $1.00
Price per 1M output tokens: $3.00
Release Date: 06/16/2025
Context Size: 32,768 tokens
Quantization: fp4


# Assume openai>=1.0.0
from openai import OpenAI

# Create an OpenAI client with your deepinfra token and endpoint
openai = OpenAI(
    api_key="$DEEPINFRA_TOKEN",
    base_url="https://api.deepinfra.com/v1/openai",
)

chat_completion = openai.chat.completions.create(
    model="deepseek-ai/DeepSeek-R1-0528-Turbo",
    messages=[{"role": "user", "content": "Hello"}],
)

print(chat_completion.choices[0].message.content)
print(chat_completion.usage.prompt_tokens, chat_completion.usage.completion_tokens)

# Hello! It's nice to meet you. Is there something I can help you with, or would you like to chat?
# 11 25

Featured Model: deepseek-ai/DeepSeek-V3-0324-Turbo

Price per 1M input tokens: $1.00
Price per 1M output tokens: $3.00
Release Date: 06/03/2025
Context Size: 32,768 tokens
Quantization: fp4


# Assume openai>=1.0.0
from openai import OpenAI

# Create an OpenAI client with your deepinfra token and endpoint
openai = OpenAI(
    api_key="$DEEPINFRA_TOKEN",
    base_url="https://api.deepinfra.com/v1/openai",
)

chat_completion = openai.chat.completions.create(
    model="deepseek-ai/DeepSeek-V3-0324-Turbo",
    messages=[{"role": "user", "content": "Hello"}],
)

print(chat_completion.choices[0].message.content)
print(chat_completion.usage.prompt_tokens, chat_completion.usage.completion_tokens)

# Hello! It's nice to meet you. Is there something I can help you with, or would you like to chat?
# 11 25

Available DeepSeek Models

DeepSeek's models are a suite of advanced AI systems that prioritize efficiency, scalability, and real-world applicability.

Model                            Context   $ per 1M input tokens   $ per 1M output tokens
DeepSeek-R1                      160k      $0.45                   $2.15
DeepSeek-R1-0528                 160k      $0.50                   $2.15
DeepSeek-R1-Turbo                32k       $1.00                   $3.00
DeepSeek-R1-0528-Turbo           32k       $1.00                   $3.00
DeepSeek-V3-0324                 160k      $0.28                   $0.88
DeepSeek-V3                      160k      $0.38                   $0.89
DeepSeek-V3-0324-Turbo           32k       $1.00                   $3.00
DeepSeek-Prover-V2-671B          160k      $0.50                   $2.18
DeepSeek-R1-Distill-Llama-70B    128k      $0.10                   $0.40
DeepSeek-R1-Distill-Qwen-32B     128k      $0.075                  $0.15

FAQ

What is DeepSeek?

DeepSeek is a family of high-performance, open-source language models developed by DeepSeek AI. These models, including DeepSeek-R1 and DeepSeek-V3, are optimized for reasoning, coding, and multi-modal tasks. DeepInfra hosts these models with scalable, low-latency inference infrastructure and OpenAI-compatible APIs—so you can use them immediately without managing your own GPUs.

How do DeepSeek models compare to OpenAI or Claude models?

DeepSeek-R1 achieves performance comparable to OpenAI’s GPT-4 and Claude 3 on math, reasoning, and coding tasks. DeepSeek-V3, a 671B-parameter MoE model, rivals top-tier closed-source LLMs while remaining fully open-source. DeepInfra provides low-latency access and predictable pricing that’s often more affordable.

Are the DeepSeek models open source?

Yes. All DeepSeek models are MIT-licensed, with open weights and training details publicly released. This ensures transparency, customizability, and legal flexibility for commercial use.

How do I integrate DeepSeek models into my application?

You can integrate DeepSeek models seamlessly using DeepInfra’s OpenAI-compatible API. Just replace your existing base URL with DeepInfra’s endpoint and use your DeepInfra API key—no infrastructure setup required. DeepInfra also supports integration through libraries like openai, litellm, and other SDKs, making it easy to switch or scale your workloads instantly.
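As a sketch of that drop-in swap: the OpenAI Python SDK (v1.x) also reads its key and endpoint from standard environment variables, so an existing app can often be pointed at DeepInfra with a configuration change rather than a code change. The variable names below are the SDK's standard ones; `$DEEPINFRA_TOKEN` is a placeholder for your actual key.

```shell
# Point an existing OpenAI-SDK app at DeepInfra without touching its code.
# $DEEPINFRA_TOKEN is a placeholder for your DeepInfra API key.
export OPENAI_API_KEY="$DEEPINFRA_TOKEN"
export OPENAI_BASE_URL="https://api.deepinfra.com/v1/openai"
```

With these set, `OpenAI()` can be constructed with no arguments and requests will route to DeepInfra.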

What are the pricing details for using DeepSeek models on DeepInfra?

Pricing is usage-based:
  • Input Tokens: between $0.075 and $1.00 per million
  • Output Tokens: between $0.15 and $3.00 per million
Prices vary slightly by model. There are no upfront fees, and you only pay for what you use.
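As a worked example, a request's cost can be computed from the token counts in the API response's `usage` field. The sketch below hard-codes two of the rates listed above; the `estimate_cost` helper and its price table are illustrative, not part of any DeepInfra SDK.

```python
# Estimate request cost in dollars from token counts and per-1M-token rates.
# The price table mirrors the rates listed on this page; the helper is illustrative.

PRICES = {  # model -> ($ per 1M input tokens, $ per 1M output tokens)
    "deepseek-ai/DeepSeek-R1-0528-Turbo": (1.00, 3.00),
    "deepseek-ai/DeepSeek-V3-0324": (0.28, 0.88),
}

def estimate_cost(model: str, prompt_tokens: int, completion_tokens: int) -> float:
    """Return the estimated dollar cost of one request."""
    in_rate, out_rate = PRICES[model]
    return (prompt_tokens * in_rate + completion_tokens * out_rate) / 1_000_000

# The "Hello" example above reported 11 prompt tokens and 25 completion tokens:
cost = estimate_cost("deepseek-ai/DeepSeek-R1-0528-Turbo", 11, 25)
print(f"${cost:.8f}")  # (11 * $1.00 + 25 * $3.00) / 1e6 = $0.00008600
```

Multiplying token counts by the per-million rate and summing is all the billing math there is; there are no request fees on top.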

How do I get started using DeepSeek on DeepInfra?

  • Sign in with GitHub at deepinfra.com
  • Get your API key
  • Test models directly from the browser, cURL, or SDKs
  • Review pricing on your usage dashboard
Within minutes, you can deploy apps using DeepSeek models—without any infrastructure setup.