
To use DeepInfra's services, you'll need an API key. You can get one by signing up on our platform.
Your API key will be used to authenticate all your requests to the DeepInfra API.
Now let's deploy some models to production and use them for inference. It's straightforward.
You can deploy models through the web dashboard or via our API; models are also deployed automatically the first time you make an inference request against them.
Once a model is deployed on DeepInfra, you can use it with our REST API. Here's how to use it with curl:
curl -X POST \
-F "audio=@/path/to/audio.mp3" \
-H "Authorization: Bearer YOUR_API_KEY" \
'https://api.deepinfra.com/v1/inference/openai/whisper-small'
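The same request can be made from Python. Below is a minimal sketch using the third-party `requests` library; the API key placeholder, the audio path, and the `build_request` helper name are illustrative, not part of the DeepInfra SDK. The sketch prepares the multipart request without sending it, so you can inspect exactly what goes over the wire.

```python
# Sketch of the curl call above in Python, using the `requests` library.
# YOUR_API_KEY and the audio path are placeholders you must replace.
import requests

API_KEY = "YOUR_API_KEY"
URL = "https://api.deepinfra.com/v1/inference/openai/whisper-small"

def build_request(audio_path: str) -> requests.PreparedRequest:
    """Prepare (but do not send) the multipart inference request.

    Mirrors the curl flags: -X POST, -H "Authorization: Bearer ...",
    and -F "audio=@...".
    """
    with open(audio_path, "rb") as f:
        req = requests.Request(
            "POST",
            URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"audio": f},  # multipart/form-data field named "audio"
        )
        # prepare() encodes the multipart body while the file is still open
        return req.prepare()

# To actually send it:
# resp = requests.Session().send(build_request("/path/to/audio.mp3"))
# print(resp.json())
```

Separating request construction from sending makes it easy to log or unit-test the headers and body before spending API credits.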
© 2026 Deep Infra. All rights reserved.