Browse DeepInfra models:

All categories and models you can try out and use directly on DeepInfra:

Category: text-to-image

Text-to-image AI models generate images from textual descriptions, making them a valuable tool for content creation, assistive technology, entertainment, and education.

The text prompt is first processed by a text encoder, a natural language processing (NLP) model that converts it into an embedding capturing its meaning. That embedding then conditions a generative model, typically a diffusion or rectified flow model like those listed below, which iteratively synthesizes an image matching the description. This technology makes visual content creation faster, more accessible, and more user-friendly.

For marketing and advertising professionals, text-to-image AI models can help create images that are tailored to specific campaigns or target audiences. Visually impaired individuals can use these models to better understand and interact with their environment, making them a valuable assistive technology. The entertainment industry can use text-to-image models to generate images for video games, virtual reality, and other immersive experiences. Finally, educators can use text-to-image models to create interactive diagrams, charts, and other resources to help students better understand complex concepts.
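
Every model in this list can be invoked over HTTP. Below is a minimal sketch of generating an image from a prompt; the endpoint path, request fields, and the assumed "images" response field are based on DeepInfra's general inference API pattern rather than quoted from the docs, so check each model's API page for the exact schema.

import base64
import os

import requests

# Minimal sketch (not official documentation): call a text-to-image model
# hosted on DeepInfra over HTTP. Endpoint path, request fields, and the
# assumed response shape ("images" as base64 data URLs) should be verified
# against the specific model's API page.
API_TOKEN = os.environ["DEEPINFRA_TOKEN"]      # your DeepInfra API token
MODEL = "black-forest-labs/FLUX-1-schnell"     # any model from the list below

response = requests.post(
    f"https://api.deepinfra.com/v1/inference/{MODEL}",
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    json={"prompt": "a watercolor painting of a lighthouse at dawn"},
    timeout=120,
)
response.raise_for_status()
result = response.json()

# Assumed response shape: a list of base64-encoded data URLs under "images".
data_url = result["images"][0]
encoded = data_url.split(",", 1)[1]
with open("lighthouse.png", "wb") as f:
    f.write(base64.b64decode(encoded))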

stabilityai/sd3.5
featured
$0.06 / img
  • text-to-image

At 8 billion parameters, with superior quality and prompt adherence, this base model is the most powerful in the Stable Diffusion family. It is ideal for professional use cases at 1-megapixel resolution.

black-forest-labs/FLUX-1.1-pro
featured
$0.04 / img
  • text-to-image

Black Forest Labs' latest state-of-the-art proprietary model, sporting top-of-the-line prompt following, visual quality, detail, and output diversity.

black-forest-labs/FLUX-1-schnell
featured
$0.0005 x (width / 1024) x (height / 1024) x iters
  • text-to-image

FLUX.1 [schnell] is a 12 billion parameter rectified flow transformer capable of generating images from text descriptions. This model offers cutting-edge output quality and competitive prompt following, matching the performance of closed-source alternatives. Trained using latent adversarial diffusion distillation, FLUX.1 [schnell] can generate high-quality images in only 1 to 4 steps.
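
Several models in this list are priced by output resolution and step count rather than a flat per-image fee. As a rough sketch of how the formula above works out in practice (the helper function is purely illustrative; the same pattern applies to the other resolution-priced models below):

# Per-image cost for models priced by resolution and iteration count,
# e.g. FLUX-1-schnell at $0.0005 x (width / 1024) x (height / 1024) x iters.
def schnell_cost(width: int, height: int, iters: int, rate: float = 0.0005) -> float:
    return rate * (width / 1024) * (height / 1024) * iters

# A 1024x1024 image generated in 4 steps costs 0.0005 * 1 * 1 * 4 = $0.002.
print(f"${schnell_cost(1024, 1024, 4):.4f}")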

black-forest-labs/FLUX-1-dev
featured
$0.009 x (width / 1024) x (height / 1024) x (iters / 25)
  • text-to-image

FLUX.1-dev is a state-of-the-art 12 billion parameter rectified flow transformer developed by Black Forest Labs. This model excels in text-to-image generation, providing highly accurate and detailed outputs. It is particularly well-regarded for its ability to follow complex prompts and generate anatomically accurate images, especially with challenging details like hands and faces.

black-forest-labs/FLUX-pro
featured
$0.05 / img
  • text-to-image

Black Forest Labs' first flagship model, based on FLUX latent rectified flow transformers.

stabilityai/sd3.5-medium
featured
bf16
$0.03 / img
  • text-to-image

At 2.5 billion parameters, with improved MMDiT-X architecture and training methods, this model is designed to run “out of the box” on consumer hardware, striking a balance between quality and ease of customization. It can generate images at resolutions between 0.25 and 2 megapixels.

CompVis/stable-diffusion-v1-4
Replaced
  • text-to-image

Stable Diffusion is a latent text-to-image diffusion model capable of generating photo-realistic images given any text input.

XpucT/Deliberate
Replaced
  • text-to-image

The Deliberate model can create virtually anything you ask for; the more knowledge and detail you put into the prompt, the better the results. It is aimed at meticulous anatomy artists, creative prompt writers, art designers, and those seeking explicit content.

prompthero/openjourney
Replaced
  • text-to-image

Text-to-image model based on Stable Diffusion.

runwayml/stable-diffusion-v1-5
Replaced
  • text-to-image

The most widely used version of Stable Diffusion. Trained on 512x512 images, it can generate realistic images given a text description.

stabilityai/sdxl-turbo
$0.0002 x (width / 1024) x (height / 1024) x (iters / 5)
  • text-to-image

The SDXL Turbo model, developed by Stability AI, is an optimized, fast text-to-image generative model. It is a distilled version of SDXL 1.0, leveraging Adversarial Diffusion Distillation (ADD) to generate high-quality images in fewer steps.

stabilityai/stable-diffusion-2-1
Replaced
  • text-to-image

Stable Diffusion is a latent text-to-image diffusion model that generates realistic images given a text description.