
stabilityai/sdxl-turbo

The SDXL Turbo model, developed by Stability AI, is an optimized, fast text-to-image generative model. It is a distilled version of SDXL 1.0 that uses Adversarial Diffusion Distillation (ADD) to generate high-quality images in fewer steps.

Pricing: $0.0002 × (width / 1024) × (height / 1024) × (iters / 5)
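
For example, a single 1024×1024 image generated with the default 5 steps costs $0.0002, while a 512×512 image with the same step count costs $0.0002 × 0.5 × 0.5 = $0.00005. Here iters corresponds to the num_inference_steps field described below.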

HTTP/cURL API

You can use cURL or any other HTTP client to run inferences:

curl -X POST \
    -d '{"prompt": "A photo of an astronaut riding a horse on Mars."}'  \
    -H "Authorization: bearer $DEEPINFRA_TOKEN"  \
    -H 'Content-Type: application/json'  \
    'https://api.deepinfra.com/v1/inference/stabilityai/sdxl-turbo'

which will give you back something similar to:

{
  "images": [
    "data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAIAAACQd1PeAAAADElEQVQI12PQz3wAAAJDAXkkWn+MAAAAAElFTkSuQmCC"
  ],
  "nsfw_content_detected": [
    false
  ],
  "seed": 42,
  "request_id": null,
  "inference_status": {
    "status": "unknown",
    "runtime_ms": 0,
    "cost": 0.0,
    "tokens_generated": 0,
    "tokens_input": 0
  }
}
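
Each entry in images is a base64-encoded data URI. As a minimal sketch, assuming jq and a base64 utility that supports -d (GNU coreutils) are available, you can save the first image to disk like this:

curl -X POST \
    -d '{"prompt": "A photo of an astronaut riding a horse on Mars."}'  \
    -H "Authorization: bearer $DEEPINFRA_TOKEN"  \
    -H 'Content-Type: application/json'  \
    'https://api.deepinfra.com/v1/inference/stabilityai/sdxl-turbo' \
    | jq -r '.images[0]' \
    | sed 's/^data:image\/png;base64,//' \
    | base64 -d > output.png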

Input fields

prompt (string)

text prompt


num_images (integer)

number of images to generate

Default value: 1

Range: 1 ≤ num_images ≤ 4


num_inference_steps (integer)

number of denoising steps

Default value: 5

Range: 1 ≤ num_inference_steps ≤ 10


width (integer)

image width in px

Default value: 1024

Range: 128 ≤ width ≤ 2048


height (integer)

image height in px

Default value: 1024

Range: 128 ≤ height ≤ 2048


seed (integer)

random seed; if omitted, a random seed is used

Range: 0 ≤ seed < 2^64


guidance_scale (number)

classifier-free guidance scale; higher values make the output follow the prompt more closely

Default value: 1

Range: 0 ≤ guidance_scale ≤ 20


webhook (file)

The webhook to call when inference is done; by default, the output is returned in the response to your inference request
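
All of these fields can be combined in a single request body. For example, fixing the seed makes the output reproducible; the following call (with illustrative parameter values) asks for two 512×512 images rendered in 4 steps:

curl -X POST \
    -d '{"prompt": "A photo of an astronaut riding a horse on Mars.", "width": 512, "height": 512, "num_inference_steps": 4, "num_images": 2, "guidance_scale": 1, "seed": 42}' \
    -H "Authorization: bearer $DEEPINFRA_TOKEN" \
    -H 'Content-Type: application/json' \
    'https://api.deepinfra.com/v1/inference/stabilityai/sdxl-turbo'

Repeating the call with the same parameters and seed should return the same images.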
