DeepInfra is now a supported Hugging Face Inference Provider
Published on 2026.04.29 by Aray Sultanbekova
DeepInfra is officially live as an Inference Provider on the Hugging Face Hub. You can now call DeepInfra-hosted models directly from Hugging Face model pages, through our OpenAI-compatible router (use it with any OpenAI SDK), or via the Hugging Face SDKs in Python and JavaScript.

What's new

Hugging Face's Inference Providers system lets developers run inference against partner platforms without leaving the Hub. As of today, DeepInfra is one of those partners.

At launch, we support chat completion and text generation tasks. That covers most open-weight LLMs people deploy in production — DeepSeek V4, Kimi-K2.6, GLM-5.1, Llama, Qwen, Mistral, and many more. Support for our other model categories (text-to-image, text-to-video, embeddings, speech) will roll out next.
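The examples later in this post all use chat completion, so here is a minimal sketch of the plain text generation task through the Hugging Face client. The model id and prompt are illustrative, and the provider argument assumes a recent huggingface_hub release:

from huggingface_hub import InferenceClient

# Route requests to DeepInfra (model id and prompt are illustrative).
client = InferenceClient(provider="deepinfra")

text = client.text_generation(
    "Once upon a time,",
    model="meta-llama/Llama-3.1-8B-Instruct",
    max_new_tokens=64,
)
print(text)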

You can browse every DeepInfra-supported model here: 👉 huggingface.co/models?inference_provider=deepinfra
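If you'd rather filter from code, the Hub API exposes the same filter. A minimal sketch, assuming the inference_provider argument available in recent huggingface_hub releases:

from huggingface_hub import HfApi

api = HfApi()

# List Hub models served by DeepInfra (assumes the inference_provider
# filter in recent huggingface_hub releases).
for model in api.list_models(inference_provider="deepinfra", limit=10):
    print(model.id)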

How to use it

You have two ways to authenticate, and both work with the same code.

Option 1 — Use your DeepInfra API key. Add it to your Hugging Face provider settings. Requests go directly to DeepInfra and are billed to your DeepInfra account at standard rates.

Option 2 — Use your Hugging Face token. Hugging Face will route your request to DeepInfra and bill it to your HF account. PRO users get $2 of inference credits each month; free users get a small monthly quota.
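In code the two options look identical; which account gets billed is decided by the key you save in your Hugging Face provider settings. If you don't want to rely on a saved login, you can pass the token explicitly. A minimal sketch:

import os

from huggingface_hub import InferenceClient

# Pass your HF token explicitly instead of relying on a saved login.
# With a DeepInfra key saved in your HF provider settings, the same
# request is billed to your DeepInfra account instead.
client = InferenceClient(api_key=os.environ["HF_TOKEN"])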

Python

from huggingface_hub import InferenceClient

# Uses your saved Hugging Face login or the HF_TOKEN environment variable.
client = InferenceClient()

completion = client.chat.completions.create(
    # The :deepinfra suffix pins this request to DeepInfra.
    model="deepseek-ai/DeepSeek-V4-Pro:deepinfra",
    messages=[
        {"role": "user", "content": "Write a Fibonacci function with memoization."}
    ],
)

print(completion.choices[0].message)

JavaScript

import { InferenceClient } from "@huggingface/inference";

// Authenticate with your Hugging Face token.
const client = new InferenceClient(process.env.HF_TOKEN);

const completion = await client.chatCompletion({
  // The :deepinfra suffix pins this request to DeepInfra.
  model: "deepseek-ai/DeepSeek-V4-Pro:deepinfra",
  messages: [{ role: "user", content: "Hello!" }],
});

console.log(completion.choices[0].message);

Using the OpenAI SDK

The Hugging Face router is OpenAI-compatible, so existing OpenAI code works with one line changed — point base_url at the HF router:

import os

from openai import OpenAI

# The same OpenAI client, pointed at the Hugging Face router.
client = OpenAI(
    base_url="https://router.huggingface.co/v1",
    api_key=os.environ["HF_TOKEN"],
)

completion = client.chat.completions.create(
    model="deepseek-ai/DeepSeek-V4-Pro:deepinfra",
    messages=[{"role": "user", "content": "Hello!"}],
)

The only thing that changes is the :deepinfra suffix on the model id.
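If you'd rather not suffix every model id, the Hugging Face client also lets you pin the provider once at construction time, as in the text-generation sketch above; plain model ids then route to DeepInfra. A minimal sketch:

from huggingface_hub import InferenceClient

# Pin every request from this client to DeepInfra,
# so plain model ids work without the :deepinfra suffix.
client = InferenceClient(provider="deepinfra")

completion = client.chat.completions.create(
    model="deepseek-ai/DeepSeek-V4-Pro",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(completion.choices[0].message)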

What this means for our users

If you already use DeepInfra, nothing changes — your existing API and account work exactly as they always have. What's new is reach.

  • Discoverability. Every Hugging Face model page that runs on DeepInfra now shows us as a supported provider, with one-click code snippets in Python, JavaScript, and cURL.
  • Same pricing, no markup. Hugging Face passes through DeepInfra's per-token rates without any added fees. You pay the same whether you call us directly or via the HF router.
  • Drop-in for HF-based workflows. If your team already uses Hugging Face for model search, evaluation, or agent tooling (Pi, OpenCode, Hermes Agents, VS Code with Copilot, and more), DeepInfra is now a one-line provider swap.
  • Try before you buy. Use the Inference Playground to test any DeepInfra-supported model in the browser before wiring it into your stack.