

Inference with LoRA adapter models
Published on 2024.12.06 by Askar Aitzhan

Understanding LoRA inference

Concepts

  • Base model: the original model used as the starting point.
  • LoRA adapter model: a small set of weights that adapts the base model to a specific task.
  • LoRA rank: the rank of the low-rank matrices used to adapt the base model's weights (see the sketch after this list).
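
To make the rank concrete, here is a minimal NumPy sketch of the LoRA update; the shapes, scaling factor, and initialization are illustrative assumptions, not DeepInfra internals:

```python
import numpy as np

# Illustrative shapes: base weight is d x k, adapter rank r << min(d, k).
d, k, r = 1024, 1024, 16

W = np.random.randn(d, k)            # frozen base model weight
A = np.random.randn(r, k) * 0.01     # trained adapter "down" matrix (r x k)
B = np.zeros((d, r))                 # trained adapter "up" matrix (d x r)
alpha = 32                           # LoRA scaling hyperparameter

# At inference time the adapter contributes a low-rank correction to W:
W_adapted = W + (alpha / r) * (B @ A)

x = np.random.randn(k)
y = W_adapted @ x                    # same as W @ x + (alpha / r) * B @ (A @ x)
```

Because the rank r is small, the adapter stores only r * (d + k) extra parameters instead of a full d x k matrix.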

What you need to run inference with a LoRA adapter model

  1. A supported base model
  2. A LoRA adapter model hosted on Hugging Face
  3. A Hugging Face token, if the LoRA adapter model is private (you can verify access with the sketch after this list)
  4. A DeepInfra account
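
Before deploying, you can sanity-check that the adapter repository is reachable with your token. A minimal sketch using the huggingface_hub library; the repository name and token are placeholders:

```python
from huggingface_hub import model_info

# Placeholder repo and token; the token is only needed for private adapters.
info = model_info("your-org/your-lora-adapter", token="hf_...")

print(info.id)
print([f.rfilename for f in info.siblings])  # adapter files, e.g. adapter_config.json
```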

How to run inference with a LoRA adapter on DeepInfra

  1. Go to the dashboard
  2. Click the 'New Deployment' button
  3. Click the 'LoRA Model' tab
  4. Fill in the form:
    • LoRA model name: the name used to reference the deployment
    • Hugging Face Model Name: the adapter's Hugging Face model name
    • Hugging Face Token: (optional) a Hugging Face token, required if the LoRA adapter model is private
  5. Click the 'Upload' button

Note: Supported base models are listed on the same page. If you need a base model that is not listed, please contact us at feedback@deepinfra.com.
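
Once the deployment is live, you can query it through DeepInfra's OpenAI-compatible API. A minimal sketch; the model name is a placeholder for whatever you entered as the LoRA model name in the form:

```python
from openai import OpenAI

client = OpenAI(
    api_key="$DEEPINFRA_TOKEN",  # your DeepInfra API token
    base_url="https://api.deepinfra.com/v1/openai",
)

resp = client.chat.completions.create(
    model="your-org/your-lora-model",  # placeholder: the LoRA model name from the form
    messages=[{"role": "user", "content": "Hello!"}],
)
print(resp.choices[0].message.content)
```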

Rate limits on LoRA adapter models

The rate limit applies to the combined traffic of all LoRA adapter models that share the same base model. For example, if you have two LoRA adapter models with the same base model and a rate limit of 200, those two models together share the limit of 200.

Pricing on LoRA adapter models

Pricing is 50% higher than the base model's pricing.
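
For example, with a hypothetical base model price of $0.10 per 1M tokens, the same traffic served through a LoRA adapter model would cost $0.15 per 1M tokens.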

How does LoRA adapter model speed compare to base model speed?

LoRA adapter model inference is slower than the base model because applying the adapter adds compute and memory overhead. In our benchmarks, a LoRA adapter model runs about 50-60% slower than its base model.

How to make a LoRA adapter model faster

You can merge the LoRA adapter into the base model to eliminate the overhead. If you then serve the merged model as a custom deployment, its speed will be close to the base model's.
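
A minimal sketch of merging with the PEFT library; the model and adapter names are placeholders:

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Placeholder names; substitute your base model and adapter repository.
base = AutoModelForCausalLM.from_pretrained("your-org/base-model")
model = PeftModel.from_pretrained(base, "your-org/your-lora-adapter")

# Fold the low-rank update into the base weights, removing the runtime overhead.
merged = model.merge_and_unload()
merged.save_pretrained("merged-model")  # upload to Hugging Face, then deploy
                                        # it as a custom deployment
```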
