DeepSeek develops advanced foundation models optimized for computational efficiency and strong generalization across diverse tasks. The architecture incorporates recent advances in transformer-based systems, delivering robust performance in both zero-shot and fine-tuned scenarios. Models are pretrained on rigorously filtered multilingual corpora with specialized optimizations for mathematical reasoning and algorithmic tasks. The inference stack achieves competitive throughput while maintaining low latency, making it suitable for production deployment. Researchers and engineers can leverage these models for tasks ranging from natural language processing to complex analytical problem-solving.
DeepSeek-V3.2-Exp is an intermediate step toward the next-generation DeepSeek architecture. It introduces DeepSeek Sparse Attention, a sparse attention mechanism designed to explore and validate training and inference efficiency optimizations in long-context scenarios.
Price per 1M input tokens
$0.27
Price per 1M output tokens
$0.40
Release Date
09/29/2025
Context Size
163,840
Quantization
fp4
# Assume openai>=1.0.0
from openai import OpenAI

# Create an OpenAI client with your deepinfra token and endpoint
openai = OpenAI(
    api_key="$DEEPINFRA_TOKEN",
    base_url="https://api.deepinfra.com/v1/openai",
)

chat_completion = openai.chat.completions.create(
    model="deepseek-ai/DeepSeek-V3.2-Exp",
    messages=[{"role": "user", "content": "Hello"}],
)

print(chat_completion.choices[0].message.content)
print(chat_completion.usage.prompt_tokens, chat_completion.usage.completion_tokens)

# Hello! It's nice to meet you. Is there something I can help you with, or would you like to chat?
# 11 25
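Because the endpoint is OpenAI-compatible, responses can also be streamed token by token with the SDK's standard `stream=True` flag. This is a sketch using the same `$DEEPINFRA_TOKEN` placeholder as above; it requires a valid token and network access to run:

```python
from openai import OpenAI

openai = OpenAI(
    api_key="$DEEPINFRA_TOKEN",
    base_url="https://api.deepinfra.com/v1/openai",
)

# Request a streamed completion instead of waiting for the full reply
stream = openai.chat.completions.create(
    model="deepseek-ai/DeepSeek-V3.2-Exp",
    messages=[{"role": "user", "content": "Hello"}],
    stream=True,
)

for chunk in stream:
    # Each chunk carries a small delta of the generated text
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
print()
```

Streaming is useful for chat UIs, where showing partial output lowers perceived latency.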
DeepSeek-V3.1-Terminus is an update to DeepSeek-V3.1 that preserves the model's original capabilities while addressing issues reported by users, including language consistency and agent capabilities, and further optimizes performance in coding and search agents. It is a large hybrid reasoning model (671B parameters, 37B active) that supports both thinking and non-thinking modes, extending the DeepSeek-V3 base with a two-phase long-context training process. Users can control the reasoning behaviour with the reasoning enabled boolean; learn more in our docs. The model improves tool use, code generation, and reasoning efficiency, achieving performance comparable to DeepSeek-R1 on difficult benchmarks while responding more quickly. It supports structured tool calling, code agents, and search agents, making it suitable for research, coding, and agentic workflows.
Price per 1M input tokens
$0.27
Price per 1M cached input tokens
$0.216
Price per 1M output tokens
$1.00
Release Date
09/22/2025
Context Size
163,840
Quantization
fp4
# Assume openai>=1.0.0
from openai import OpenAI

# Create an OpenAI client with your deepinfra token and endpoint
openai = OpenAI(
    api_key="$DEEPINFRA_TOKEN",
    base_url="https://api.deepinfra.com/v1/openai",
)

chat_completion = openai.chat.completions.create(
    model="deepseek-ai/DeepSeek-V3.1-Terminus",
    messages=[{"role": "user", "content": "Hello"}],
)

print(chat_completion.choices[0].message.content)
print(chat_completion.usage.prompt_tokens, chat_completion.usage.completion_tokens)

# Hello! It's nice to meet you. Is there something I can help you with, or would you like to chat?
# 11 25
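Since Terminus is a hybrid model, the description above notes that thinking mode is toggled with a reasoning boolean. A sketch of how that could look with the OpenAI SDK, whose `extra_body` parameter forwards provider-specific fields the SDK does not model natively; the exact field name (`reasoning` here) is an assumption, so consult the docs before relying on it:

```python
from openai import OpenAI

openai = OpenAI(
    api_key="$DEEPINFRA_TOKEN",
    base_url="https://api.deepinfra.com/v1/openai",
)

# extra_body passes provider-specific request fields through the OpenAI SDK.
# "reasoning" is an assumed field name for the thinking-mode toggle.
chat_completion = openai.chat.completions.create(
    model="deepseek-ai/DeepSeek-V3.1-Terminus",
    messages=[{"role": "user", "content": "What is 17 * 24?"}],
    extra_body={"reasoning": True},
)

print(chat_completion.choices[0].message.content)
```

With the flag off, the model answers directly and more quickly; with it on, it spends tokens reasoning before replying.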
DeepSeek's models are a suite of advanced AI systems that prioritize efficiency, scalability, and real-world applicability.
| Model | Context | $ per 1M input tokens | $ per 1M output tokens |
|---|---|---|---|
| DeepSeek-V3.2-Exp | 160k | $0.27 | $0.40 |
| DeepSeek-V3.1-Terminus | 160k | $0.27 / $0.216 cached | $1.00 |
| DeepSeek-V3.1 | 160k | $0.27 / $0.216 cached | $1.00 |
| DeepSeek-V3-0324 | 160k | $0.25 | $0.88 |
| DeepSeek-V3 | 160k | $0.38 | $0.89 |
| DeepSeek-R1 | 160k | $0.70 | $2.40 |
| DeepSeek-R1-0528 | 160k | $0.50 / $0.40 cached | $2.15 |
| DeepSeek-R1-Turbo | 40k | $1.00 | $3.00 |
| DeepSeek-R1-0528-Turbo | 32k | $1.00 | $3.00 |
| DeepSeek-R1-Distill-Llama-70B | 128k | $0.50 | $1.00 |
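The per-token prices in the table combine with the usage counts the API returns to give the cost of a request. A minimal sketch, using the DeepSeek-V3.2-Exp prices and the 11-token prompt / 25-token reply from the example above (the helper name is illustrative, not part of any SDK):

```python
def request_cost(prompt_tokens: int, completion_tokens: int,
                 input_price: float, output_price: float) -> float:
    """Cost of one request in dollars; prices are per 1M tokens, as in the table."""
    return (prompt_tokens / 1_000_000) * input_price \
         + (completion_tokens / 1_000_000) * output_price

# DeepSeek-V3.2-Exp: $0.27 per 1M input tokens, $0.40 per 1M output tokens
cost = request_cost(11, 25, input_price=0.27, output_price=0.40)
print(f"${cost:.8f}")  # prints $0.00001297 -- a small fraction of a cent
```

Cached-input pricing (where listed) applies the lower rate to the portion of the prompt served from cache.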
DeepSeek is a family of high-performance, open-source language models developed by DeepSeek AI. These models, including DeepSeek-R1 and DeepSeek-V3, are optimized for reasoning, coding, and multi-modal tasks. DeepInfra hosts these models with scalable, low-latency inference infrastructure and OpenAI-compatible APIs—so you can use them immediately without managing your own GPUs.
DeepSeek-R1 achieves performance comparable to OpenAI’s GPT-4 and Claude 3 on math, reasoning, and coding tasks. DeepSeek-V3, a 671B-parameter MoE model, rivals top-tier closed-source LLMs while remaining fully open-source. DeepInfra provides low-latency access and predictable pricing that’s often more affordable.
All DeepSeek models are MIT-licensed, with open weights and training details publicly released. This ensures transparency, customizability, and legal flexibility for commercial use.
© 2025 Deep Infra. All rights reserved.