How to use CivitAI LoRAs: 5-Minute AI Guide to Stunning Double Exposure Art
Published on 2025.01.23 by Oguz Vuruskaner

Double exposure is a photography technique that combines multiple images into a single frame, creating a dreamlike and artistic effect. With the advent of AI image generation, we can now create stunning double exposure art in minutes using LoRA models. In this guide, we'll walk through how to use the Flux Double Exposure Magic LoRA from CivitAI with DeepInfra's deployment platform.

What You'll Need

  • A CivitAI account (free)
  • A DeepInfra account (free)

Set Up a LoRA Model

  1. Log in to your DeepInfra account
  2. Navigate to the Deployments section
  3. Click the "New Deployment" button in the top right corner
  4. Select "LoRA text to image" from the options

Once you navigate to this section, you will see a screen like this:

Text-to-image LoRA Dashboard

  5. Write your preferred model name.
  6. We'll use FLUX Dev for this LoRA. You can keep it as it is.
  7. Add the following CivitAI URL: https://civitai.com/models/715497/flux-double-exposure-magic?modelVersionId=859666
  8. Click the "Upload" button, and that's it. Voilà!

Once LoRA processing has completed, you should navigate to

 http://deepinfra.com/<your_name>/<lora_name>

Once there, you will see our classic dashboard, but under your LoRA's name.
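Besides the dashboard, you can generate images programmatically. The sketch below is a minimal example that assumes DeepInfra's standard inference endpoint shape (`https://api.deepinfra.com/v1/inference/<model_name>`), a JSON body with a `prompt` field, and bearer-token authentication; the deployment name is a placeholder, so check your own model page for the exact request format.

```python
import json
import os
import urllib.request

# Hypothetical deployment name -- replace with your own <your_name>/<lora_name>.
MODEL = "your_name/lora_name"
API_URL = f"https://api.deepinfra.com/v1/inference/{MODEL}"

def build_request(prompt: str) -> dict:
    """Build the JSON payload for a text-to-image request."""
    return {"prompt": prompt}

payload = build_request("bo-exposure, double exposure, cyberpunk city, robot face")

# Only send the request if an API token is configured in the environment.
token = os.environ.get("DEEPINFRA_API_KEY")
if token:
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        print(json.load(resp))
```

The response typically contains the generated image data or a URL to it; again, consult the model page for the exact output schema.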

An Example: Cyberpunk Double Exposure

Now let's create some stunning visuals. Here is an example prompt, which we'll break down below:

bo-exposure, double exposure, cyberpunk city, robot face

Example of AI-generated cyberpunk double exposure art

Key Takeaway ⚠️

Notice that the prompt uses BOTH bo-exposure and double exposure. This combination is crucial: including both trigger terms together gives you the best double exposure effect.
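If you generate prompts in code, a tiny helper can guarantee that both trigger terms are always present. This is just an illustrative convenience, not part of the LoRA itself:

```python
# Both trigger terms for the Flux Double Exposure Magic LoRA.
TRIGGERS = "bo-exposure, double exposure"

def double_exposure_prompt(subject: str) -> str:
    """Prefix a subject description with both LoRA trigger terms."""
    return f"{TRIGGERS}, {subject}"

print(double_exposure_prompt("cyberpunk city, robot face"))
# -> bo-exposure, double exposure, cyberpunk city, robot face
```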

More tutorials are on the way. See you in the next one 👋
