
Double exposure is a photography technique that combines multiple images into a single frame, creating a dreamlike and artistic effect. With the advent of AI image generation, we can now create stunning double exposure art in minutes using LoRA models. In this guide, we'll walk through how to use the Flux Double Exposure Magic LoRA from CivitAI with DeepInfra's deployment platform.
Once you navigate to the LoRA deployment section of the DeepInfra dashboard, you will see the upload screen. From there:
5. Enter your preferred model name.
6. Choose the base model. We'll use FLUX Dev for this LoRA, so you can keep the default selection.
7. Add the following CivitAI URL: https://civitai.com/models/715497/flux-double-exposure-magic?modelVersionId=859666
8. Click "Upload" button, and that's it. VOILA!
Once LoRA processing has completed, navigate to
http://deepinfra.com/<your_name>/<lora_name>
There you will see our usual model dashboard, but under your LoRA's name.
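You can also call the deployed LoRA programmatically instead of using the web UI. Below is a minimal Python sketch, assuming your model is exposed at DeepInfra's standard inference path (api.deepinfra.com/v1/inference/<your_name>/<lora_name>) and that the response carries base64-encoded images in an "images" field; both details are assumptions modeled on DeepInfra's public FLUX endpoints, so check the API tab on your model page for the exact request and response schema.

import base64
import os

import requests

API_TOKEN = os.environ["DEEPINFRA_TOKEN"]  # your DeepInfra API token
MODEL = "<your_name>/<lora_name>"          # same placeholder as the URL above

# Send a prompt to the (assumed) standard inference endpoint of the deployed LoRA.
resp = requests.post(
    f"https://api.deepinfra.com/v1/inference/{MODEL}",
    headers={"Authorization": f"bearer {API_TOKEN}"},
    json={"prompt": "bo-exposure, double exposure, city skyline, woman's profile"},
    timeout=300,
)
resp.raise_for_status()

# Assumed response shape: {"images": ["data:image/png;base64,..."], ...}
image_b64 = resp.json()["images"][0].split(",", 1)[1]
with open("double_exposure.png", "wb") as f:
    f.write(base64.b64decode(image_b64))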
Now let's create some stunning visuals. We'll start by breaking down this example prompt:
bo-exposure, double exposure, cyberpunk city, robot face

Notice how we use both bo-exposure and double exposure in the prompt. This combination is crucial: using both terms together gives you the best double exposure effect.
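To try the effect on other scenes, keep both trigger terms fixed and vary only the subject. Here is a tiny sketch (the subjects below are our own made-up examples, not from the LoRA page):

# Keep the trigger terms fixed; swap only the scene description.
subjects = [
    "misty forest, woman's silhouette",
    "ocean waves, lighthouse at dusk",
    "autumn leaves, wolf portrait",
]
for subject in subjects:
    print(f"bo-exposure, double exposure, {subject}")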
More tutorials are on the way. See you in the next one 👋