Wan-AI/Wan2.1-T2V-14B

The Wan2.1 14B model is a high-capacity, state-of-the-art video foundation model capable of producing both 480P and 720P videos. It excels at capturing complex prompts and generating visually rich, detailed scenes, making it ideal for high-end creative tasks.

Public
$0.40 / video
Project License

HTTP/cURL API

You can use cURL or any other HTTP client to run inference:

curl -X POST \
    -d '{"prompt": "A hand with delicate fingers picks up a bright yellow lemon from a wooden bowl filled with lemons and sprigs of mint against a peach-colored background. The hand gently tosses the lemon up and catches it, showcasing its smooth texture. A beige string bag sits beside the bowl, adding a rustic touch to the scene. Additional lemons, one halved, are scattered around the base of the bowl. The even lighting enhances the vibrant colors and creates a fresh, inviting atmosphere."}'  \
    -H "Authorization: bearer $DEEPINFRA_TOKEN"  \
    -H 'Content-Type: application/json'  \
    'https://api.deepinfra.com/v1/inference/Wan-AI/Wan2.1-T2V-14B'
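The same request can be built from Python using only the standard library. This is a minimal sketch, assuming your API token is in the `DEEPINFRA_TOKEN` environment variable; the short prompt is illustrative:

```python
import json
import os
import urllib.request

API_URL = "https://api.deepinfra.com/v1/inference/Wan-AI/Wan2.1-T2V-14B"


def build_request(prompt: str, token: str) -> urllib.request.Request:
    """Build the POST request (without sending it)."""
    body = json.dumps({"prompt": prompt}).encode("utf-8")
    return urllib.request.Request(
        API_URL,
        data=body,
        headers={
            "Authorization": f"bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )


req = build_request(
    "A hand picks up a bright yellow lemon from a wooden bowl.",
    os.environ.get("DEEPINFRA_TOKEN", ""),
)

# Only send the request when a token is actually configured.
if os.environ.get("DEEPINFRA_TOKEN"):
    with urllib.request.urlopen(req) as resp:
        print(json.load(resp)["video_url"])
```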

which will give you back something similar to:

{
  "video_url": "/model/inference/pyramid_sample.mp4",
  "seed": "12345",
  "request_id": null,
  "inference_status": {
    "status": "unknown",
    "runtime_ms": 0,
    "cost": 0.0,
    "tokens_generated": 0,
    "tokens_input": 0
  }
}
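The returned JSON can be inspected in a few lines of Python. This sketch parses the sample response above; field names follow the response shown:

```python
import json

# Sample response body as returned by the endpoint (from above).
raw = '''{
  "video_url": "/model/inference/pyramid_sample.mp4",
  "seed": "12345",
  "request_id": null,
  "inference_status": {
    "status": "unknown",
    "runtime_ms": 0,
    "cost": 0.0,
    "tokens_generated": 0,
    "tokens_input": 0
  }
}'''

result = json.loads(raw)
video_path = result["video_url"]  # relative path to the generated video
status = result["inference_status"]["status"]
print(video_path, status)  # → /model/inference/pyramid_sample.mp4 unknown
```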


Input fields

prompt (string)

The text prompt describing the video to generate.


guidance_scale (number)

Controls how closely the generated video follows the text prompt. Higher values (>1.0) produce content more closely aligned with the prompt but may reduce overall quality. A value of 1.0 disables guidance.

Default value: 5


seed (integer)

Specify a seed for reproducible output.


negative_prompt (string)

Negative text prompt describing what the generated video should avoid.


webhook (file)

The webhook to call when inference is done. By default, you will get the output in the response of your inference request.
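Putting the fields together, a request body using the documented generation parameters might look like the following sketch; the prompt and negative prompt texts are illustrative:

```python
import json

# All optional generation parameters alongside the required prompt.
payload = {
    "prompt": "A yellow lemon tossed and caught above a wooden bowl.",
    "guidance_scale": 5,  # default; values > 1.0 follow the prompt more closely
    "seed": 12345,  # fixed seed for reproducible output
    "negative_prompt": "blurry, low quality",
}

body = json.dumps(payload)
print(body)
```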

Input Schema

Output Schema