Inference LoRA adapter model
Published on 2024.12.06 by Askar Aitzhan

Understanding LoRA inference

Concepts

  • Base model: the original model used as a starting point.
  • LoRA adapter model: a small set of weights that adapts the base model to a specific task.
  • LoRA Rank: the rank r of the low-rank matrices used to adapt the base model's weights (see the sketch after this list).
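
To make rank concrete, here is a minimal numpy sketch (all dimensions are hypothetical) of how a rank-r adapter updates a frozen weight matrix:

```python
import numpy as np

# Hypothetical dimensions for illustration only.
d, r = 4096, 16                     # hidden size and LoRA rank (r << d)

W = np.random.randn(d, d)           # frozen base weight matrix
A = np.random.randn(r, d) * 0.01    # LoRA "down" projection, rank r
B = np.zeros((d, r))                # LoRA "up" projection, initialized to zero

# The adapted weight is the base weight plus a low-rank update:
#   W' = W + B @ A
# B @ A has rank at most r, so the adapter stores only 2*d*r parameters
# (~131k here) instead of d*d (~16.8M).
W_adapted = W + B @ A
```

Because the update has rank at most r, the adapter is tiny compared to the base weights, which is what makes hosting many adapters on top of one base model practical.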

What you need to run inference with a LoRA adapter model

  1. A supported base model
  2. A LoRA adapter model hosted on Hugging Face
  3. A Hugging Face token, if the LoRA adapter model is private
  4. A DeepInfra account

How to run inference with a LoRA adapter on DeepInfra

  1. Go to the dashboard
  2. Click on the 'New Deployment' button
  3. Click on the 'LoRA Model' tab
  4. Fill in the form:
    • LoRA model name: the name used to reference the deployment
    • Hugging Face Model Name: the adapter's repository name on Hugging Face
    • Hugging Face Token: (optional) a Hugging Face token, required if the LoRA adapter model is private
  5. Click on the 'Upload' button

Note: The supported base models are listed on the same page. If you need a base model that is not listed, please contact us at feedback@deepinfra.com
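
Once the deployment is live, you can query it like any other DeepInfra model. A minimal sketch using the OpenAI-compatible endpoint; the model name is a placeholder for the LoRA model name you chose in the form:

```python
import os
from openai import OpenAI

# Assumes DeepInfra's OpenAI-compatible endpoint; the model name below
# is a placeholder for your own deployment's LoRA model name.
client = OpenAI(
    api_key=os.environ["DEEPINFRA_API_KEY"],  # your DeepInfra API token
    base_url="https://api.deepinfra.com/v1/openai",
)

resp = client.chat.completions.create(
    model="my-lora-model",  # placeholder: the LoRA model name from the form
    messages=[{"role": "user", "content": "Hello!"}],
)
print(resp.choices[0].message.content)
```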

Rate limits on LoRA adapter models

The rate limit applies to the combined traffic of all LoRA adapter models that share the same base model. For example, if you have two LoRA adapter models on the same base model and a rate limit of 200, the two models together share that limit of 200.
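
In other words, the limit is keyed on the base model, not on the adapter. An illustrative sketch of that bookkeeping (not DeepInfra's actual implementation):

```python
from collections import defaultdict

# Illustrative only: all adapters that share a base model also share
# one request budget per rate-limit window.
RATE_LIMIT = 200                 # requests per window, per base model
usage = defaultdict(int)         # base model -> requests used this window

def allow_request(base_model: str, adapter: str) -> bool:
    # The adapter name does not matter; only the base model is counted.
    if usage[base_model] >= RATE_LIMIT:
        return False
    usage[base_model] += 1
    return True

# Two adapters on the same base model draw from the same budget of 200:
allow_request("base-org/base-model", "adapter-a")
allow_request("base-org/base-model", "adapter-b")
```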

Pricing on LoRA adapter models

Pricing is 50% higher than the base model's. For example, if the base model costs $0.50 per 1M tokens (a hypothetical price), the same model served through a LoRA adapter costs $0.75 per 1M tokens.

How does LoRA adapter model speed compare to base model speed?

LoRA adapter model speed is lower than the base model's, because applying the LoRA adapter at inference time adds compute and memory overhead. In our benchmarks, LoRA adapter models run about 50-60% slower than the base model.

How to make a LoRA adapter model faster?

You can merge the LoRA adapter into the base model to remove the adapter overhead, then serve the merged weights as a custom deployment; the speed will then be close to the base model's.
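
One way to do the merge offline is with the Hugging Face peft library. A minimal sketch with placeholder model names:

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Placeholder names: substitute your own base model and adapter repos.
base = AutoModelForCausalLM.from_pretrained("base-org/base-model")
model = PeftModel.from_pretrained(base, "your-org/your-lora-adapter")

# Fold the low-rank update into the base weights (W' = W + B @ A),
# removing the per-request adapter overhead at inference time.
merged = model.merge_and_unload()
merged.save_pretrained("merged-model")
```

The merged checkpoint contains plain base-model weights, so it can be served as a custom deployment without any per-request adapter math.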
