
To use DeepInfra's services, you'll need an API key. You can get one by signing up on our platform.
Your API key will be used to authenticate all your requests to the DeepInfra API.
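A minimal shell sketch of that pattern (the variable name DEEPINFRA_API_KEY and the placeholder YOUR_API_KEY are conventions chosen for this example, not requirements of the platform):
# Store the key once in an environment variable (name chosen for this example).
export DEEPINFRA_API_KEY="YOUR_API_KEY"
# Every request then authenticates with the same header:
#   -H "Authorization: Bearer $DEEPINFRA_API_KEY"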
Now let's deploy some models to production and use them for inference. It's straightforward.
You can deploy models through the web dashboard or by using our API. Models are automatically deployed when you first make an inference request.
Once a model is deployed on DeepInfra, you can use it with our REST API. Here's how to use it with curl:
curl -X POST \
-F "audio=@/path/to/audio.mp3" \
-H "Authorization: Bearer YOUR_API_KEY" \
'https://api.deepinfra.com/v1/inference/openai/whisper-small'
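Because models are deployed automatically on your first inference request, this call alone is enough to bring the model up if it isn't already running. The response comes back as JSON; as a small follow-up sketch, you could extract just the transcript with jq (the "text" field name is an assumption here; check the model's response schema):
# Same request as above, piped through jq to pull out the transcript.
# The ".text" field name is an assumption for this sketch.
curl -s -X POST \
-F "audio=@/path/to/audio.mp3" \
-H "Authorization: Bearer YOUR_API_KEY" \
'https://api.deepinfra.com/v1/inference/openai/whisper-small' \
| jq -r '.text'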
Reliable JSON-Only Responses with DeepInfra LLMs
When large language models are used inside real applications, their role changes fundamentally. Instead of chatting with users, they become infrastructure components: extracting information, transforming text, driving workflows, or powering APIs. In these scenarios, natural language is no longer the desired output. What applications need is structured data, and very often that structure is JSON.
Guaranteed JSON Output on Open-Source LLMs
DeepInfra is proud to announce that we have released "JSON mode" across all of our text language models. It is available through the "response_format" object, which currently supports only {"type": "json_object"}.
Our JSON mode guarantees that all tokens returned in the output of a language model form valid JSON.
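As a hedged sketch of how a request with JSON mode might look (the chat-completions endpoint path and the model name below are illustrative assumptions, not taken from the announcement above), you add a "response_format" object to an otherwise normal request:
# Endpoint path and model name are assumptions for this sketch.
curl -X POST \
-H "Authorization: Bearer YOUR_API_KEY" \
-H "Content-Type: application/json" \
-d '{"model": "meta-llama/Meta-Llama-3-8B-Instruct", "messages": [{"role": "user", "content": "List three primary colors as JSON under the key \"colors\"."}], "response_format": {"type": "json_object"}}' \
'https://api.deepinfra.com/v1/openai/chat/completions'
The message content in the response can then be handed straight to a JSON parser without any post-processing.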
Art That Talks Back: A Hands-On Tutorial on Talking Images
Turn any image into a talking masterpiece with this step-by-step guide using DeepInfra’s GenAI models.