
DeepSeek Model Family

DeepSeek develops advanced foundation models optimized for computational efficiency and strong generalization across diverse tasks. The architecture incorporates recent advances in transformer-based systems, delivering robust performance in both zero-shot and fine-tuned scenarios. Models are pretrained on rigorously filtered multilingual corpora with specialized optimizations for mathematical reasoning and algorithmic tasks. The inference stack achieves competitive throughput while maintaining low latency, making it suitable for production deployment. Researchers and engineers can leverage these models for tasks ranging from natural language processing to complex analytical problem-solving.

Featured Model: deepseek-ai/DeepSeek-V3.2-Exp

DeepSeek-V3.2-Exp is an intermediate step toward DeepSeek's next-generation architecture. It introduces DeepSeek Sparse Attention, a sparse attention mechanism designed to explore and validate optimizations for training and inference efficiency in long-context scenarios.
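To give a feel for the general idea behind sparse attention (each query attends to only a subset of keys, shrinking the cost of long contexts), here is a toy top-k sketch in pure Python. This illustrates the family of techniques only; DeepSeek Sparse Attention's actual selection rule is not described here, so nothing below should be read as its implementation.

```python
# Toy top-k sparse attention: each query attends only to its top_k
# highest-scoring keys. A sketch of the general idea, NOT DeepSeek
# Sparse Attention itself, whose selection mechanism is not public here.
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attention(query, keys, values, top_k=None):
    # Scaled dot-product scores for one query against all keys.
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(len(query))
              for key in keys]
    if top_k is not None and top_k < len(scores):
        # Keep only the top_k highest-scoring keys; mask the rest to -inf.
        # (Ties at the cutoff may keep more than top_k entries.)
        cutoff = sorted(scores, reverse=True)[top_k - 1]
        scores = [s if s >= cutoff else float("-inf") for s in scores]
    weights = softmax(scores)  # masked keys get exactly zero weight
    dim = len(values[0])
    return [sum(w * v[d] for w, v in zip(weights, values)) for d in range(dim)]
```

With `top_k=None` this reduces to ordinary dense attention, so the savings come purely from how many keys survive the mask; in a real kernel the masked keys would simply never be read.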

Price per 1M input tokens

$0.27


Price per 1M output tokens

$0.40


Release Date

09/29/2025


Context Size

163,840


Quantization

fp4


# Assume openai>=1.0.0
from openai import OpenAI

# Create an OpenAI client with your deepinfra token and endpoint
openai = OpenAI(
    api_key="$DEEPINFRA_TOKEN",
    base_url="https://api.deepinfra.com/v1/openai",
)

chat_completion = openai.chat.completions.create(
    model="deepseek-ai/DeepSeek-V3.2-Exp",
    messages=[{"role": "user", "content": "Hello"}],
)

print(chat_completion.choices[0].message.content)
print(chat_completion.usage.prompt_tokens, chat_completion.usage.completion_tokens)

# Hello! It's nice to meet you. Is there something I can help you with, or would you like to chat?
# 11 25
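At the listed rates ($0.27 per 1M input tokens, $0.40 per 1M output tokens), the usage counts returned above translate directly into a per-request cost:

```python
# Estimate the cost of a single request from its token usage.
# Rates are the listed DeepSeek-V3.2-Exp prices; the usage counts match
# the example output above (11 prompt tokens, 25 completion tokens).
INPUT_RATE = 0.27 / 1_000_000   # dollars per input token
OUTPUT_RATE = 0.40 / 1_000_000  # dollars per output token

def request_cost(prompt_tokens, completion_tokens):
    return prompt_tokens * INPUT_RATE + completion_tokens * OUTPUT_RATE

print(f"${request_cost(11, 25):.8f}")  # $0.00001297
```

In production you would feed `chat_completion.usage.prompt_tokens` and `chat_completion.usage.completion_tokens` straight into the same formula.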

Featured Model: deepseek-ai/DeepSeek-V3.1-Terminus

DeepSeek-V3.1-Terminus is an update to DeepSeek-V3.1 that preserves the model's original capabilities while addressing user-reported issues, including language consistency and agent capabilities, and further optimizing performance in coding and search agents. It is a large hybrid reasoning model (671B parameters, 37B active) that supports both thinking and non-thinking modes, extending the DeepSeek-V3 base with a two-phase long-context training process. Users can control the reasoning behaviour with the reasoning-enabled boolean; learn more in our docs. The model improves tool use, code generation, and reasoning efficiency, achieving performance comparable to DeepSeek-R1 on difficult benchmarks while responding more quickly. It supports structured tool calling, code agents, and search agents, making it suitable for research, coding, and agentic workflows.

Price per 1M input tokens

$0.27


Price per 1M cached input tokens

$0.216


Price per 1M output tokens

$1.00


Release Date

09/22/2025


Context Size

163,840


Quantization

fp4


# Assume openai>=1.0.0
from openai import OpenAI

# Create an OpenAI client with your deepinfra token and endpoint
openai = OpenAI(
    api_key="$DEEPINFRA_TOKEN",
    base_url="https://api.deepinfra.com/v1/openai",
)

chat_completion = openai.chat.completions.create(
    model="deepseek-ai/DeepSeek-V3.1-Terminus",
    messages=[{"role": "user", "content": "Hello"}],
)

print(chat_completion.choices[0].message.content)
print(chat_completion.usage.prompt_tokens, chat_completion.usage.completion_tokens)

# Hello! It's nice to meet you. Is there something I can help you with, or would you like to chat?
# 11 25
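The description above mentions a reasoning-enabled boolean for toggling thinking mode. The exact field name is an assumption here (consult the provider docs for the authoritative parameter), so this sketch only builds the request payload rather than sending it:

```python
# Sketch: a request payload for DeepSeek-V3.1-Terminus with reasoning toggled.
# The "reasoning" field name is an assumption based on the description above
# ("reasoning-enabled boolean"); verify it against the provider docs.
def build_request(prompt, reasoning_enabled):
    return {
        "model": "deepseek-ai/DeepSeek-V3.1-Terminus",
        "messages": [{"role": "user", "content": prompt}],
        # Non-standard fields are typically passed through via the OpenAI
        # client's extra_body parameter.
        "extra_body": {"reasoning": reasoning_enabled},
    }

request = build_request("Prove that sqrt(2) is irrational.", reasoning_enabled=True)
# openai.chat.completions.create(**request)  # would send it with the client above
```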

Available DeepSeek Models

DeepSeek's models are a suite of advanced AI systems that prioritize efficiency, scalability, and real-world applicability.

Model                           Context   $ per 1M input tokens   $ per 1M output tokens
DeepSeek-V3.2-Exp               160k      $0.27                   $0.40
DeepSeek-V3.1-Terminus          160k      $0.27 / $0.216 cached   $1.00
DeepSeek-V3.1                   160k      $0.27 / $0.216 cached   $1.00
DeepSeek-V3-0324                160k      $0.25                   $0.88
DeepSeek-V3                     160k      $0.38                   $0.89
DeepSeek-R1                     160k      $0.70                   $2.40
DeepSeek-R1-0528                160k      $0.50 / $0.40 cached    $2.15
DeepSeek-R1-Turbo               40k       $1.00                   $3.00
DeepSeek-R1-0528-Turbo          32k       $1.00                   $3.00
DeepSeek-R1-Distill-Llama-70B   128k      $0.50                   $1.00
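The table above can also be compared programmatically. This sketch hard-codes the listed base (non-cached) rates and ranks models by cost for a hypothetical workload; it ignores cached-input discounts and, of course, output quality:

```python
# Rank DeepSeek models by cost for a given workload, using the base
# (non-cached) per-1M-token rates from the table above.
RATES = {  # model: (input $/1M, output $/1M)
    "DeepSeek-V3.2-Exp": (0.27, 0.40),
    "DeepSeek-V3.1-Terminus": (0.27, 1.00),
    "DeepSeek-V3.1": (0.27, 1.00),
    "DeepSeek-V3-0324": (0.25, 0.88),
    "DeepSeek-V3": (0.38, 0.89),
    "DeepSeek-R1": (0.70, 2.40),
    "DeepSeek-R1-0528": (0.50, 2.15),
    "DeepSeek-R1-Turbo": (1.00, 3.00),
    "DeepSeek-R1-0528-Turbo": (1.00, 3.00),
    "DeepSeek-R1-Distill-Llama-70B": (0.50, 1.00),
}

def workload_cost(model, input_tokens, output_tokens):
    in_rate, out_rate = RATES[model]
    return (input_tokens * in_rate + output_tokens * out_rate) / 1_000_000

# Hypothetical workload: 10M input tokens, 2M output tokens.
ranked = sorted(RATES, key=lambda m: workload_cost(m, 10_000_000, 2_000_000))
for model in ranked[:3]:
    print(f"{model}: ${workload_cost(model, 10_000_000, 2_000_000):.2f}")
```

For this input-heavy mix, DeepSeek-V3.2-Exp comes out cheapest at $3.50; an output-heavy workload would shift the ranking toward the lower-output-rate models.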

FAQ

What is DeepSeek?

DeepSeek is a family of high-performance, open-source language models developed by DeepSeek AI. These models, including DeepSeek-R1 and DeepSeek-V3, are optimized for reasoning, coding, and multi-modal tasks. DeepInfra hosts these models with scalable, low-latency inference infrastructure and OpenAI-compatible APIs—so you can use them immediately without managing your own GPUs.

How do DeepSeek models compare to OpenAI or Claude models?

DeepSeek-R1 achieves performance comparable to OpenAI’s GPT-4 and Claude 3 on math, reasoning, and coding tasks. DeepSeek-V3, a 671B-parameter MoE model, rivals top-tier closed-source LLMs while remaining fully open-source. DeepInfra provides low-latency access and predictable pricing that’s often more affordable.

Are the DeepSeek models open source?

Yes. All DeepSeek models are MIT-licensed, with open weights and training details publicly released. This ensures transparency, customizability, and legal flexibility for commercial use.

How do I integrate DeepSeek models into my application?

You can integrate DeepSeek models seamlessly using DeepInfra’s OpenAI-compatible API. Just replace your existing base URL with DeepInfra’s endpoint and use your DeepInfra API key—no infrastructure setup required. DeepInfra also supports integration through libraries like openai, litellm, and other SDKs, making it easy to switch or scale your workloads instantly.

What are the pricing details for using DeepSeek models on DeepInfra?

Pricing is usage-based:
  • Input Tokens: between $0.25 and $1.00 per million
  • Output Tokens: between $0.40 and $3.00 per million
Prices vary slightly by model. There are no upfront fees, and you only pay for what you use.

How do I get started using DeepSeek on DeepInfra?

  • Sign in with GitHub at deepinfra.com
  • Get your API key
  • Test models directly from the browser, cURL, or SDKs
  • Review pricing on your usage dashboard
Within minutes, you can deploy apps using DeepSeek models—without any infrastructure setup.