Published on 2023.03.02 by Nikola Borisov
This is easy: just install our deepctl command line tool on your machine.
Here is how to install it:
curl https://deepinfra.com/get.sh | sh
Once installed, make sure everything is working.
Login to DeepInfra (using your GitHub account)
The login command will open your browser so you can sign in to DeepInfra with your GitHub account. When you are done, come back to the terminal.
Now let's actually deploy some models to production and use them for inference. It is really easy.
deepctl deploy create -m openai/whisper-small
This will deploy the whisper-small model from the openai organization to production.
Deployment takes only a few seconds, and then the model is ready for inference.
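If you want to script deployments (for example, deploying several models in a row), you can shell out to deepctl from Python. This is just a sketch assuming deepctl is installed and you are logged in; the helper function and model list are illustrative, not part of the tool itself.

```python
# Sketch: scripting deepctl deployments from Python.
# Assumes deepctl is on your PATH and you have already logged in.
import subprocess

def deploy_command(model: str) -> list[str]:
    """Build the deepctl command that deploys a model (org/name)."""
    return ["deepctl", "deploy", "create", "-m", model]

cmd = deploy_command("openai/whisper-small")
print(" ".join(cmd))  # deepctl deploy create -m openai/whisper-small

# To actually execute it, uncomment:
# subprocess.run(cmd, check=True)
```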
Once a model is deployed on DeepInfra, you can use it with either our REST API or our deepctl command line tool. Here is how to run inference with the command line tool:
deepctl infer -m openai/whisper-small -i audio=@/path/to/audio.mp3
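The same inference can be done over the REST API. The sketch below shows the general shape of such a call from Python; the base URL, the multipart field name `audio`, and the bearer-token auth header are assumptions based on typical inference APIs, so check the DeepInfra docs for the exact endpoint and parameters.

```python
# Sketch of calling a deployed model over a REST inference API.
# The URL shape and auth header here are assumptions, not confirmed
# details of the DeepInfra API -- consult the official docs.
import os

API_BASE = "https://api.deepinfra.com/v1/inference"  # assumed base URL

def inference_url(model: str) -> str:
    """Build the inference endpoint URL for a deployed model (org/name)."""
    return f"{API_BASE}/{model}"

def auth_headers(token: str) -> dict:
    """Bearer-token authorization header."""
    return {"Authorization": f"Bearer {token}"}

# Example usage with the requests library (not executed here):
# import requests
# with open("/path/to/audio.mp3", "rb") as f:
#     resp = requests.post(
#         inference_url("openai/whisper-small"),
#         headers=auth_headers(os.environ["DEEPINFRA_TOKEN"]),
#         files={"audio": f},  # assumed field name, mirrors the CLI's -i audio=
#     )
# print(resp.json())
```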