
A short intro on running Stable Diffusion on DeepInfra
Published on 2023.03.08 by Iskren

Pick a model

You can browse available text-to-image models on the models page.

For this example, we'll use runwayml/stable-diffusion-v1-5.

Using the API

curl -X POST \
    -d '{"prompt": "A photo of a cube floating in space"}' \
    -H 'Content-Type: application/json' \
    -H "Authorization: Bearer YOUR_API_KEY" \
    -o cube.jpg \
    'https://api.deepinfra.com/v1/inference/runwayml/stable-diffusion-v1-5'

Then open cube.jpg to check out the generated image.
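
If you want to generate more than one image, it helps to keep your key in an environment variable and loop over prompts. Here is a minimal shell sketch of that idea; it assumes the endpoint behaves exactly like the single request above, and DEEPINFRA_API_KEY is just a variable name chosen for the example.

# export your key once, then reuse it in every request
export DEEPINFRA_API_KEY=YOUR_API_KEY

# one image per prompt; the filename is derived from the prompt text
for prompt in "A photo of a cube floating in space" "A watercolor of a lighthouse at dawn"; do
    curl -X POST \
        -d "{\"prompt\": \"$prompt\"}" \
        -H 'Content-Type: application/json' \
        -H "Authorization: Bearer $DEEPINFRA_API_KEY" \
        -o "$(echo "$prompt" | tr ' ' '_').jpg" \
        'https://api.deepinfra.com/v1/inference/runwayml/stable-diffusion-v1-5'
done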

Advanced options

You can check all the available settings on the model page or via the API documentation tab.
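
For example, a request that sets a few of the usual Stable Diffusion knobs might look like the sketch below. The parameter names used here (negative_prompt, num_inference_steps, guidance_scale, width, height, seed) are typical for this model family but are an assumption in this sketch, so confirm them against the model page or the API documentation tab before relying on them.

# illustrative only: the parameter names below are assumed, check the API docs for the exact schema
curl -X POST \
    -d '{
          "prompt": "A photo of a cube floating in space, studio lighting",
          "negative_prompt": "blurry, low quality",
          "num_inference_steps": 50,
          "guidance_scale": 7.5,
          "width": 512,
          "height": 512,
          "seed": 42
        }' \
    -H 'Content-Type: application/json' \
    -H "Authorization: Bearer YOUR_API_KEY" \
    -o cube_tuned.jpg \
    'https://api.deepinfra.com/v1/inference/runwayml/stable-diffusion-v1-5'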

Related articles
Use OpenAI API clients with LLaMas
Introducing Tool Calling with LangChain, Search the Web with Tavily and Tool Calling Agents
Deep Infra Launches Access to NVIDIA Nemotron Models for Vision, Retrieval, and AI Safety