
Databricks Dolly is an instruction-tuned, 12-billion-parameter causal language model based on EleutherAI's pythia-12b, which was pretrained on The Pile, GPT-J's pretraining corpus. The model was then fine-tuned on databricks-dolly-15k, an open-source instruction-following dataset.
To get started, you'll need an API key from DeepInfra.
You can deploy the databricks/dolly-v2-12b model easily through the web dashboard or by using our API. The model will be automatically deployed when you first run an inference request.
You can use it with our REST API. Here's how to call the model using curl:
curl -X POST \
-d '{"prompt": "Who is Elvis Presley?"}' \
-H 'Content-Type: application/json' \
-H "Authorization: Bearer YOUR_API_KEY" \
'https://api.deepinfra.com/v1/inference/databricks/dolly-v2-12b'
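As a sketch, the same request can be issued from Python using only the standard library. The endpoint, payload, and headers mirror the curl call above; `YOUR_API_KEY` is a placeholder for your own key, and `build_request` is a helper name chosen here for illustration.

```python
import json
import urllib.request

API_URL = "https://api.deepinfra.com/v1/inference/databricks/dolly-v2-12b"

def build_request(prompt: str, api_key: str) -> urllib.request.Request:
    """Build the POST request for the Dolly inference endpoint."""
    payload = json.dumps({"prompt": prompt}).encode("utf-8")
    return urllib.request.Request(
        API_URL,
        data=payload,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )

# Sending the request requires a valid API key:
# with urllib.request.urlopen(build_request("Who is Elvis Presley?", "YOUR_API_KEY")) as resp:
#     print(json.load(resp))
```

Separating request construction from sending makes the example testable without network access; in production you would likely use a client library such as `requests` instead.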
We charge per inference request by execution time, at $0.0005 per second. Inference runs on Nvidia A100 cards. For the full documentation on how to call this model, check out the model page on our website.
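To make the pricing concrete, here is a minimal cost estimate using the per-second rate above; `estimate_cost` is an illustrative helper, not part of the API.

```python
PRICE_PER_SECOND = 0.0005  # USD per second of execution time on an A100

def estimate_cost(execution_seconds: float) -> float:
    """Estimate the charge for a single inference request."""
    return execution_seconds * PRICE_PER_SECOND

# A request that runs for 4 seconds costs $0.0020:
print(f"${estimate_cost(4):.4f}")
```

For example, a request taking 4 seconds of execution time costs 4 × $0.0005 = $0.002.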
You can browse all available models on our models page.
If you have any questions, just reach out to us on our Discord server.
Getting Started
Getting an API Key
To use DeepInfra's services, you'll need an API key. You can get one by signing up on our platform.
Sign up or log in to your DeepInfra account at deepinfra.com
Navigate to the Dashboard and select API Keys
Create a new ...
Long Context models incoming
Many users requested longer-context models to help them summarize bigger chunks of text or write novels with ease.
We're proud to announce our long context model selection, which will keep growing in the coming weeks.
Models
Mistral-based models have a context size of 32k, and Amazon recently r...
Kimi K2 0905 API from Deepinfra: Practical Speed, Predictable Costs, Built for Devs
Kimi K2 0905 is Moonshot's long-context Mixture-of-Experts update designed for agentic and coding workflows. With a context window up to ~256K tokens, it can ingest large codebases, multi-file documents, or long conversations and still deliver structured, high-quality outputs. But real-world performance isn't defined by the model alone; it's determined by the inference provider that serves it: […]
© 2025 Deep Infra. All rights reserved.