
DeepInfra is officially live as an Inference Provider on the Hugging Face Hub. You can now call DeepInfra-hosted models directly from Hugging Face model pages, through our OpenAI-compatible router (use it with any OpenAI SDK), or via the Hugging Face SDKs in Python and JavaScript.
Hugging Face's Inference Providers system lets developers run inference against partner platforms without leaving the Hub. As of today, DeepInfra is one of those partners.
At launch, we support chat completion and text generation tasks. That covers most open-weight LLMs people deploy in production — DeepSeek V4, Kimi-K2.6, GLM-5.1, Llama, Qwen, Mistral, and many more. Support for our other model categories (text-to-image, text-to-video, embeddings, speech) will roll out next.
You can browse every DeepInfra-supported model here: 👉 huggingface.co/models?inference_provider=deepinfra
You have two ways to authenticate, and both work with the same code.
Option 1 — Use your DeepInfra API key. Add it to your Hugging Face provider settings. Requests go directly to DeepInfra and are billed to your DeepInfra account at standard rates.
Option 2 — Use your Hugging Face token. Hugging Face will route your request to DeepInfra and bill it to your HF account. PRO users get $2 of inference credits each month; free users get a small monthly quota.
In Python, with the huggingface_hub client:

from huggingface_hub import InferenceClient

client = InferenceClient()

completion = client.chat.completions.create(
    model="deepseek-ai/DeepSeek-V4-Pro:deepinfra",
    messages=[
        {"role": "user", "content": "Write a Fibonacci function with memoization."}
    ],
)

print(completion.choices[0].message)
In JavaScript, with @huggingface/inference:

import { InferenceClient } from "@huggingface/inference";

const client = new InferenceClient(process.env.HF_TOKEN);

const completion = await client.chatCompletion({
  model: "deepseek-ai/DeepSeek-V4-Pro:deepinfra",
  messages: [{ role: "user", content: "Hello!" }],
});

console.log(completion.choices[0].message);
The Hugging Face router is OpenAI-compatible, so existing OpenAI code works with one line changed — point base_url at the HF router:
import os

from openai import OpenAI

client = OpenAI(
    base_url="https://router.huggingface.co/v1",
    api_key=os.environ["HF_TOKEN"],
)

completion = client.chat.completions.create(
    model="deepseek-ai/DeepSeek-V4-Pro:deepinfra",
    messages=[{"role": "user", "content": "Hello!"}],
)
The only thing that changes is the :deepinfra suffix on the model id.
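The suffix is just a string appended to the Hub model id, so switching providers in code can be a one-line helper (the function name below is illustrative, not part of any SDK):

```python
def with_provider(model_id: str, provider: str = "deepinfra") -> str:
    # The HF router reads the ":provider" suffix to pick the backend.
    return f"{model_id}:{provider}"

print(with_provider("deepseek-ai/DeepSeek-V4-Pro"))
# deepseek-ai/DeepSeek-V4-Pro:deepinfra
```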
If you already use DeepInfra, nothing changes — your existing API and account work exactly as they always have. What's new is reach.