
How to use CivitAI LoRAs: 5-Minute AI Guide to Stunning Double Exposure Art
Published on 2025.01.23 by Oguz Vuruskaner

Double exposure is a photography technique that combines multiple images into a single frame, creating a dreamlike and artistic effect. With the advent of AI image generation, we can now create stunning double exposure art in minutes using LoRA models. In this guide, we'll walk through how to use the Flux Double Exposure Magic LoRA from CivitAI with DeepInfra's deployment platform.

What You'll Need

  • A CivitAI account (free)
  • A DeepInfra account (free)

Set Up a LoRA model

  1. Log in to your DeepInfra account
  2. Navigate to the Deployments section
  3. Click the "New Deployment" button in the top right corner
  4. Select "LoRA text to image" from the options

Once you navigate to this section, you will see a screen like this:

Text-to-image LoRA Dashboard

  5. Write your preferred model name.
  6. We'll use FLUX Dev for this LoRA; you can keep it as it is.
  7. Add the following CivitAI URL: https://civitai.com/models/715497/flux-double-exposure-magic?modelVersionId=859666
  8. Click the "Upload" button, and that's it. VOILA!

Once LoRA processing has completed, navigate to:

 http://deepinfra.com/<your_name>/<lora_name>

There you will see the familiar model dashboard, now labeled with your LoRA name.
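Beyond the dashboard, you can also call your deployment programmatically. The sketch below is a minimal example, assuming DeepInfra's standard `https://api.deepinfra.com/v1/inference/<model>` endpoint pattern and a `prompt`/`num_images` payload; the deployment names shown are hypothetical, so check your deployment page for the exact URL and request schema.

```python
import os

import requests  # third-party: pip install requests

API_BASE = "https://api.deepinfra.com/v1/inference"  # assumed endpoint pattern


def build_request(user: str, lora: str, prompt: str):
    """Build the (url, payload) pair for a custom LoRA text-to-image call."""
    url = f"{API_BASE}/{user}/{lora}"
    payload = {"prompt": prompt, "num_images": 1}
    return url, payload


def generate(user: str, lora: str, prompt: str) -> dict:
    """POST the request and return the parsed JSON response.

    Requires a DEEPINFRA_API_KEY environment variable.
    """
    url, payload = build_request(user, lora, prompt)
    resp = requests.post(
        url,
        json=payload,
        headers={"Authorization": f"Bearer {os.environ['DEEPINFRA_API_KEY']}"},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()


# Example (hypothetical deployment name):
# result = generate("your_name", "double-exposure",
#                   "bo-exposure, double exposure, cyberpunk city, robot face")
```

The response typically contains the generated image data or URLs; inspect it once to see the exact shape your deployment returns.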

An Example: Cyberpunk Double Exposure

Now let's create some stunning visuals. Let's break down this example prompt:

bo-exposure, double exposure, cyberpunk city, robot face

Example of AI-generated cyberpunk double exposure art

Key Takeaway ⚠️

Notice how we use BOTH bo-exposure and double exposure. This combination is crucial: using both trigger terms together gives you the best double exposure effect.
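Since both trigger terms should appear in every prompt, a tiny helper keeps you from forgetting one. This is just a convenience sketch; the helper name is ours, not part of any API:

```python
# Both LoRA trigger terms, always placed at the front of the prompt.
TRIGGERS = ["bo-exposure", "double exposure"]


def double_exposure_prompt(*subjects: str) -> str:
    """Join the trigger terms with any subject phrases into one prompt."""
    return ", ".join(TRIGGERS + list(subjects))


# double_exposure_prompt("cyberpunk city", "robot face")
# → "bo-exposure, double exposure, cyberpunk city, robot face"
```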

More tutorials are on the way. See you in the next one 👋
