FLUX.2 is live! High-fidelity image generation made simple.

Many users requested longer context models to help them summarize bigger chunks of text or write novels with ease.
We're proud to announce our long-context model selection, which will grow in the coming weeks.
Mistral-based models have a context size of 32k, and Amazon recently released a model fine-tuned specifically for longer contexts.
We also recently released the highly praised Yi models. Keep in mind they don't support chat, just old-school text completion (new models are in the works).
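If you haven't used completion-style models in a while: you call the plain completions endpoint instead of the chat one and pass a raw prompt. Here's a minimal sketch against our OpenAI-compatible API (the Yi model ID is a placeholder; check the model page for the exact name):

# Text completion, not chat: the model simply continues the prompt.
# "01-ai/Yi-34B" is a placeholder ID; use the exact name from the model page.
from openai import OpenAI

client = OpenAI(
    api_key="$DEEPINFRA_TOKEN",  # your Deep Infra API token
    base_url="https://api.deepinfra.com/v1/openai",
)

resp = client.completions.create(
    model="01-ai/Yi-34B",
    prompt="Once upon a time, in a datacenter far away,",
    max_tokens=128,
)
print(resp.choices[0].text)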
GLM-4.6 API: Get fast first tokens at the best $/M from Deepinfra's API

GLM-4.6 is a high-capacity, “reasoning”-tuned model that shows up in coding copilots, long-context RAG, and multi-tool agent loops. With this class of workload, provider infrastructure determines perceived speed (first-token time), tail stability, and your unit economics. Using ArtificialAnalysis (AA) provider charts for GLM-4.6 (Reasoning), DeepInfra (FP8) pairs a sub-second Time-to-First-Token (TTFT) (0.51 s) with the […]
Function Calling for AI APIs in DeepInfra — How to Extend Your AI with Real-World Logic

Modern large language models (LLMs) are incredibly powerful at understanding and generating text, but until recently they were largely static: they could only respond based on patterns in their training data. Function calling changes that. It lets language models interact with external logic — your own code, APIs, utilities, or business systems — while still […]
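For a taste of what the post covers, here's a minimal function-calling sketch against an OpenAI-compatible chat endpoint; the model ID and the get_weather tool are illustrative placeholders, not code from the article:

# Sketch: declare a tool, then let the model decide to call it.
# The model ID and get_weather are hypothetical examples.
import json
from openai import OpenAI

client = OpenAI(
    api_key="$DEEPINFRA_TOKEN",
    base_url="https://api.deepinfra.com/v1/openai",
)

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

resp = client.chat.completions.create(
    model="meta-llama/Meta-Llama-3.1-70B-Instruct",  # any tool-capable model
    messages=[{"role": "user", "content": "What's the weather in Berlin?"}],
    tools=tools,
)

# If the model chose to call the tool, run your own logic with its arguments.
call = resp.choices[0].message.tool_calls[0]
print(call.function.name, json.loads(call.function.arguments))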
From Precision to Quantization: A Practical Guide to Faster, Cheaper LLMs

Large language models live and die by numbers—literally trillions of them. How finely we store those numbers (their precision) determines how much memory a model needs, how fast it runs, and sometimes how good its answers are. This article walks from the basics to the deep end: we’ll start with how computers even store a […]
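The core back-of-the-envelope math is easy to sketch; this weights-only estimate uses an illustrative 7B parameter count, not any specific model:

# Weights-only memory footprint at different precisions.
# 7e9 parameters is an illustrative size, not a specific model.
PARAMS = 7e9
BYTES_PER_PARAM = {"fp32": 4, "fp16/bf16": 2, "fp8/int8": 1, "int4": 0.5}

for precision, nbytes in BYTES_PER_PARAM.items():
    print(f"{precision:>9}: {PARAMS * nbytes / 2**30:6.1f} GiB")
# Halving precision halves the weight footprint, which is why FP8 or
# INT4 builds of the same model fit on much smaller GPUs.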