
Delivering AI at scale requires several pieces working together: open-weight models, powerful hardware, and inference optimizations. DeepInfra was among the first to deploy production workloads on the NVIDIA Blackwell platform, and we're seeing up to 20x cost reductions when combining Mixture of Experts (MoE) architectures with Blackwell optimizations. Here's what we've built and learned.
The NVIDIA Blackwell platform delivers a step change in AI inference performance. DeepInfra has built a comprehensive optimization stack to take full advantage of these capabilities.
The gains come from three layers working together: hardware acceleration from the NVIDIA Blackwell architecture, efficiency from open-weight MoE models, and DeepInfra's inference optimizations built on NVIDIA TensorRT-LLM, including speculative decoding and advanced memory management.
Layers of inference optimizations.
The real gains come from combining multiple optimizations. Here's what that looks like comparing a dense 405B model to an MoE model with 400B total parameters but only 17B active per token:
| | Dense 405B Model | MoE 400B (FP8) Model | MoE 400B (FP8) Model | MoE 400B (NVFP4) Model |
|---|---|---|---|---|
| Platform | NVIDIA H200 | NVIDIA H200 | NVIDIA HGX™ B200 | NVIDIA HGX™ B200 |
| Cost/1M tokens | $1.00 | $0.20 | $0.10 | $0.05 |
| Cost vs Dense | baseline | 5x lower cost | 10x lower cost | 20x lower cost |
From $1.00/1M tokens (Dense 405B) to $0.05/1M tokens (MoE NVFP4 on NVIDIA Blackwell) — 20x more cost effective.
20x cheaper than dense. Similar parameter scale, fraction of the cost.
Each layer compounds: the MoE architecture reduces active compute, the Blackwell platform accelerates throughput, and NVFP4 quantization cuts memory and compute further.
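To make the compounding concrete, here is the arithmetic implied by the table as a minimal Python sketch. The per-layer factors are read off the table rows above, not independently measured figures:

```python
# Illustrative only: per-layer factors are inferred from the cost table above.
dense_cost = 1.00  # $/1M tokens, dense 405B on NVIDIA H200 (baseline)

moe_factor = 5.0        # 17B active params per token instead of a dense 405B
blackwell_factor = 2.0  # H200 -> HGX B200 throughput gain
nvfp4_factor = 2.0      # FP8 -> NVFP4 quantization

cost = dense_cost / (moe_factor * blackwell_factor * nvfp4_factor)
print(f"${cost:.2f} per 1M tokens")  # $0.05, i.e. 20x lower than baseline
```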
Traditional autoregressive generation produces one token at a time. DeepInfra leverages Multi-Token Prediction (MTP) and Eagle speculative decoding to accelerate generation. By predicting multiple tokens simultaneously and verifying them in parallel, these techniques improve throughput on supported models.
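As a simplified sketch of the draft-and-verify idea behind speculative decoding (the model objects and their methods below are hypothetical placeholders, not DeepInfra's TensorRT-LLM implementation):

```python
def speculative_decode_step(target_model, draft_model, tokens, k=4):
    """One draft-and-verify step: a cheap draft model proposes k tokens and
    the large target model checks them in a single parallel pass.
    `target_model` and `draft_model` are hypothetical greedy next-token
    predictors; production systems sample and use acceptance probabilities."""
    # 1. Draft: the cheap model proposes k candidate tokens autoregressively.
    draft, context = [], list(tokens)
    for _ in range(k):
        nxt = draft_model.next_token(context)
        draft.append(nxt)
        context.append(nxt)

    # 2. Verify: the target model scores all k+1 positions in one forward
    #    pass, producing its own next-token choice at each position.
    verified = target_model.next_tokens_parallel(tokens, draft)

    # 3. Accept the longest prefix where draft and target agree, then take
    #    the target's token at the first mismatch, so the output matches
    #    plain autoregressive decoding while emitting several tokens per step.
    accepted = []
    for proposed, correct in zip(draft, verified):
        if proposed == correct:
            accepted.append(proposed)
        else:
            accepted.append(correct)
            break
    else:
        # All k drafts accepted; the verify pass also yields one bonus token.
        accepted.append(verified[k])
    return tokens + accepted
```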
Beyond these core optimizations, DeepInfra layers in further serving-level improvements, including the advanced memory management noted above.
What does this look like in practice? For Latitude, AI is the experience.
AI Dungeon serves 1.5 million monthly active users generating dynamic, AI-powered game narratives, and Latitude is expanding with Voyage, an upcoming AI RPG platform where players can create or play worlds with full freedom of action. Model responses aren't just part of the gameplay loop — they're the centerpiece of everything Latitude builds. Every improvement in model performance directly translates to better player experiences, higher engagement and retention, and ultimately drives revenue and growth.
AI Dungeon generates both narrative text and imagery in real-time as players explore dynamic stories.
Running large open-source MoE models on DeepInfra's Blackwell-based platform allows Latitude to deliver fast, reliable responses at a cost that scales with their player base.
Players can choose from multiple AI-generated continuations, each requiring real-time inference.
"For AI Dungeon, the model response is the game. Every millisecond of latency, every quality improvement matters — it directly impacts how players feel, how long they stay, and whether they come back. DeepInfra on NVIDIA Blackwell gives us the performance we need at a cost that actually works at scale."
— Nick Walton, CEO, Latitude
A key factor in Latitude's success is the flexibility that open-weight models provide. Rather than being locked into a single model, they can evaluate and deploy the best model for each specific use case — whether optimizing for creative storytelling, fast responses, or cost efficiency.
DeepInfra's platform supports this experimentation by offering a broad catalog of optimized open-source models, all benefiting from the NVIDIA Blackwell platform performance gains. Customers like Latitude can test different models, compare performance characteristics, and deploy the optimal configuration for their workload — without infrastructure overhead.
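For example, because DeepInfra exposes an OpenAI-compatible API, swapping between catalog models is typically a one-line change. A minimal sketch follows; the model IDs are illustrative placeholders, so check the catalog for current names:

```python
from openai import OpenAI

# DeepInfra's OpenAI-compatible endpoint; model names below are examples only.
client = OpenAI(
    base_url="https://api.deepinfra.com/v1/openai",
    api_key="YOUR_DEEPINFRA_API_KEY",
)

for model_id in [
    "meta-llama/Llama-3.3-70B-Instruct",  # example: creative storytelling
    "Qwen/Qwen2.5-72B-Instruct",          # example: faster, cheaper responses
]:
    resp = client.chat.completions.create(
        model=model_id,
        messages=[{"role": "user", "content": "Continue the story: ..."}],
        max_tokens=200,
    )
    print(model_id, "->", resp.choices[0].message.content[:80])
```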
The combination of the NVIDIA Blackwell platform, DeepInfra's inference optimizations, and the flexibility of open-weight models creates a powerful platform for AI-native applications. As models continue to grow in capability and efficiency, this infrastructure foundation will enable the next generation of AI experiences.
For developers and companies looking to reduce inference costs, the path forward is clear: modern hardware, purpose-built optimizations, and the freedom to choose the right model for the job.
To learn more about DeepInfra's Blackwell deployment or discuss your inference needs, visit deepinfra.com.