

Inference with a LoRA adapter model
Published on 2024.12.06 by Askar Aitzhan

Understanding LoRA inference

Concepts

  • Base model: The original, full-size model used as the starting point.
  • LoRA adapter model: A small set of low-rank weight matrices trained to adapt the base model to a specific task.
  • LoRA rank: The rank of the adapter's low-rank matrices; higher ranks capture more task-specific detail at the cost of extra memory and compute.
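The idea behind these concepts can be sketched in a few lines of NumPy: instead of retraining the full weight matrix W, the adapter stores two small matrices A and B whose product is a low-rank correction to W. This is a toy illustration of the math, not DeepInfra's implementation:

```python
import numpy as np

d, r = 1024, 8  # model dimension and LoRA rank (r << d)
rng = np.random.default_rng(0)

W = rng.standard_normal((d, d))         # frozen base-model weight
A = rng.standard_normal((r, d)) * 0.01  # trained adapter factor A
B = rng.standard_normal((d, r)) * 0.01  # trained adapter factor B

x = rng.standard_normal(d)  # an input activation

# Applying the adapter: y = (W + B @ A) @ x
y = W @ x + B @ (A @ x)

# The adapter stores only 2*d*r parameters instead of d*d.
print(2 * d * r, d * d)  # prints: 16384 1048576
```

At rank 8 the adapter here is about 64x smaller than the base weight matrix, which is why LoRA adapters are cheap to store and swap.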

What you need to run inference with a LoRA adapter model

  1. A supported base model
  2. A LoRA adapter model hosted on Hugging Face
  3. A Hugging Face token, if the LoRA adapter model is private
  4. A DeepInfra account

How to run inference with a LoRA adapter on DeepInfra

  1. Go to the dashboard
  2. Click the 'New Deployment' button
  3. Click the 'LoRA Model' tab
  4. Fill in the form:
    • LoRA model name: the name you will use to reference the deployment
    • Hugging Face Model Name: the adapter's Hugging Face model name
    • Hugging Face Token: (optional) a Hugging Face token, required if the LoRA adapter model is private
  5. Click the 'Upload' button

Note: The supported base models are listed on the same page. If you need a base model that is not listed, please contact us at feedback@deepinfra.com.
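Once the deployment is ready, you can call it like any other model through DeepInfra's OpenAI-compatible endpoint. The sketch below only builds the request; the model name `my-lora-model` and the API key are placeholders for your own deployment name and credentials:

```python
import json

DEEPINFRA_API_KEY = "YOUR_API_KEY"  # placeholder: your DeepInfra API key
url = "https://api.deepinfra.com/v1/openai/chat/completions"

payload = {
    "model": "my-lora-model",  # the LoRA model name you chose in the form
    "messages": [{"role": "user", "content": "Hello!"}],
}
headers = {
    "Authorization": f"Bearer {DEEPINFRA_API_KEY}",
    "Content-Type": "application/json",
}

# Send with any HTTP client, e.g.:
#   requests.post(url, headers=headers, data=json.dumps(payload))
print(json.dumps(payload, indent=2))
```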

Rate limits on LoRA adapter model

The rate limit applies to the combined traffic of all LoRA adapter models that share the same base model. For example, if you have two LoRA adapter models on the same base model and a rate limit of 200, the two adapters together share that limit of 200.
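The sharing behavior can be sketched as a single budget keyed by base model (an illustration only; the real limiter runs server-side, and the base-model name is hypothetical):

```python
from collections import defaultdict

RATE_LIMIT = 200           # requests per window, shared per base model
usage = defaultdict(int)   # base_model -> requests used this window

def allow(base_model: str) -> bool:
    """Return True if another request fits under the shared limit."""
    if usage[base_model] >= RATE_LIMIT:
        return False
    usage[base_model] += 1
    return True

# Two adapters on the same base model draw from one shared bucket:
for _ in range(150):
    allow("some-base-model")  # adapter A traffic
for _ in range(60):
    allow("some-base-model")  # adapter B traffic (only 50 of these fit)
print(usage["some-base-model"])  # prints: 200
```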

Pricing on LoRA adapter model

Pricing is 50% higher than the base model's.
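As a worked example, with a hypothetical base-model price of $0.10 per million tokens:

```python
base_price = 0.10              # $/1M tokens (hypothetical base-model price)
lora_price = base_price * 1.5  # 50% surcharge for LoRA inference
print(f"${lora_price:.2f} per 1M tokens")  # prints: $0.15 per 1M tokens
```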

How does LoRA adapter model speed compare to base model speed?

A LoRA adapter model is slower than its base model because applying the adapter adds compute and memory overhead. In our benchmarks, LoRA adapter models run about 50-60% slower than the base model.
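Concretely, taking the midpoint of that range against a hypothetical base-model throughput:

```python
base_speed = 100.0  # tokens/sec (hypothetical base-model throughput)
slowdown = 0.55     # midpoint of the observed 50-60% range
lora_speed = base_speed * (1 - slowdown)
print(round(lora_speed, 1))  # prints: 45.0
```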

How to make a LoRA adapter model faster?

You can merge the LoRA adapter into the base model to remove the apply-time overhead, then run the merged model as a custom deployment; its speed will be close to the base model's.
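Merging folds the low-rank update into the base weights once, offline, so inference pays no extra cost per request. In practice you would use a tool such as the `peft` library's `merge_and_unload()`; the arithmetic it performs amounts to the following toy NumPy sketch (dimensions and scaling are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
d, r, alpha = 512, 4, 8.0  # model dim, LoRA rank, LoRA scaling factor

W = rng.standard_normal((d, d))  # base weight
A = rng.standard_normal((r, d))  # adapter factors
B = rng.standard_normal((d, r))

# Merge once, offline: fold the scaled low-rank update into W.
W_merged = W + (alpha / r) * (B @ A)

x = rng.standard_normal(d)
# At inference time the merged model is a single matmul again --
# same output as base + adapter, with no per-request overhead.
y_adapter = W @ x + (alpha / r) * (B @ (A @ x))
y_merged = W_merged @ x
print(np.allclose(y_adapter, y_merged))  # prints: True
```

The trade-off is that a merged model serves only that one adapter, whereas unmerged adapters can share a single copy of the base weights.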
