

Getting Started
Published on 2023.03.02 by Nikola Borisov

Getting an API Key

To use DeepInfra's services, you'll need an API key. You can get one by signing up on our platform.

  1. Sign up or log in to your DeepInfra account at deepinfra.com
  2. Navigate to the Dashboard and select API Keys
  3. Create a new API key and save it securely

Your API key will be used to authenticate all your requests to the DeepInfra API.
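As a minimal sketch, every request carries the key in an `Authorization: Bearer` header. Here we assume the key lives in a `DEEPINFRA_API_KEY` environment variable (a common convention, not something DeepInfra requires):

```python
import os

# Read the key from the environment rather than hard-coding it in source.
api_key = os.environ.get("DEEPINFRA_API_KEY", "YOUR_API_KEY")

# Every DeepInfra API request is authenticated with this header.
headers = {"Authorization": f"Bearer {api_key}"}
```

Keeping the key out of your code and version control also makes it easy to rotate if it ever leaks.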

Deployment

Now let's actually deploy some models to production and use them for inference. It's really easy.

You can deploy models through the web dashboard or by using our API. Models are automatically deployed when you first make an inference request.

Inference

Once a model is deployed on DeepInfra, you can use it with our REST API. Here's how to use it with curl:

curl -X POST \
  -F "audio=@/path/to/audio.mp3" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  'https://api.deepinfra.com/v1/inference/openai/whisper-small'
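The same request can be made from Python. Below is a sketch using the third-party `requests` library (the key and audio bytes are placeholders); `requests.Request(...).prepare()` assembles the multipart request without sending it, so you can inspect the URL and headers first:

```python
import requests

API_KEY = "YOUR_API_KEY"  # placeholder: substitute your real key
url = "https://api.deepinfra.com/v1/inference/openai/whisper-small"

# In practice you would pass a real file:
#   files = {"audio": open("/path/to/audio.mp3", "rb")}
# A stand-in byte string keeps this sketch self-contained.
files = {"audio": ("audio.mp3", b"<mp3 bytes>")}

# Build the multipart POST; .prepare() sets the Content-Type boundary
# and encodes the body, but performs no network I/O.
req = requests.Request(
    "POST",
    url,
    headers={"Authorization": f"Bearer {API_KEY}"},
    files=files,
).prepare()

# Once the request looks right, sending it is one line:
# resp = requests.Session().send(req); print(resp.json())
```

This mirrors the curl command above: `-F "audio=@..."` becomes the `files` argument, and `-H "Authorization: ..."` becomes the `headers` argument.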