An open state-of-the-art video generation model with high-fidelity motion and strong prompt adherence. This model dramatically closes the gap between closed and open video generation systems
You can use cURL or any other HTTP client to run inferences:
curl -X POST \
-d '{"prompt": "A hand with delicate fingers picks up a bright yellow lemon from a wooden bowl filled with lemons and sprigs of mint against a peach-colored background. The hand gently tosses the lemon up and catches it, showcasing its smooth texture. A beige string bag sits beside the bowl, adding a rustic touch to the scene. Additional lemons, one halved, are scattered around the base of the bowl. The even lighting enhances the vibrant colors and creates a fresh, inviting atmosphere."}' \
-H "Authorization: bearer $DEEPINFRA_TOKEN" \
-H 'Content-Type: application/json' \
'https://api.deepinfra.com/v1/inference/genmo/mochi-1-preview'
which will give you back something similar to:
{
  "video_url": "/model/inference/pyramid_sample.mp4",
  "seed": "12345",
  "request_id": null,
  "inference_status": {
    "status": "unknown",
    "runtime_ms": 0,
    "cost": 0.0,
    "tokens_generated": 0,
    "tokens_input": 0
  }
}
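The same request can be made from Python. A minimal sketch using only the standard library (the endpoint, header, and `DEEPINFRA_TOKEN` environment variable are taken from the curl example above; the `build_request` helper name is ours):

```python
import json
import os
import urllib.request

API_URL = "https://api.deepinfra.com/v1/inference/genmo/mochi-1-preview"

def build_request(prompt: str, token: str, **params) -> urllib.request.Request:
    """Assemble the POST request with the JSON body and auth header."""
    body = {"prompt": prompt, **params}
    return urllib.request.Request(
        API_URL,
        data=json.dumps(body).encode("utf-8"),
        headers={
            "Authorization": f"bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

def generate_video(prompt: str, token: str, **params) -> dict:
    """Send the request and return the parsed JSON response."""
    req = build_request(prompt, token, **params)
    # Video generation can take a while, so allow a generous timeout.
    with urllib.request.urlopen(req, timeout=600) as resp:
        return json.load(resp)

if __name__ == "__main__":
    result = generate_video("A lemon tossed above a wooden bowl",
                            os.environ["DEEPINFRA_TOKEN"])
    print(result["video_url"])
```

The response dict has the same shape as the JSON shown above, so `result["video_url"]` points at the generated video.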
num_inference_steps
integer — number of inference steps (more steps -> better quality)
Default value: 100
Range: 2 ≤ num_inference_steps ≤ 300
webhook
file — the webhook to call when inference is done; by default the output is returned in the response to your inference request
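Optional parameters go in the same JSON body as the prompt. A small sketch that builds such a body and enforces the documented range for `num_inference_steps` (the `make_body` helper is ours, not part of the API):

```python
import json

def make_body(prompt: str, num_inference_steps: int = 100) -> str:
    """Serialize a request body with the optional num_inference_steps field.

    The documented range is 2 <= num_inference_steps <= 300 (default 100);
    values outside it are rejected here before any request is sent.
    """
    if not 2 <= num_inference_steps <= 300:
        raise ValueError("num_inference_steps must be between 2 and 300")
    return json.dumps({"prompt": prompt,
                       "num_inference_steps": num_inference_steps})
```

Fewer steps run faster at some cost in quality, so lowering the value below the default of 100 is a speed/quality trade-off.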