Documentation

Introduction

DeepInfra makes it easy to run the latest machine learning models in the cloud.

You focus on your application and your business while we do the heavy lifting for you. Servers. GPUs. Scaling. Monitoring. All wrapped up in a simple, scalable and cost-effective inference API. That's what we do.

You use our API through REST, Python, or JavaScript. We even support most of the OpenAI APIs, making it easy to migrate and benefit from the cost savings.
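Because the API follows the OpenAI chat-completion format, an existing OpenAI client can often be pointed at DeepInfra with only a base-URL change. The sketch below builds such a request by hand with the standard library; the model name and token are placeholders, so substitute a model from the model list and your own token before sending.

```python
import json

# Hypothetical values -- substitute your own token and a model
# from the DeepInfra model list.
API_KEY = "YOUR_DEEPINFRA_TOKEN"
BASE_URL = "https://api.deepinfra.com/v1/openai"  # OpenAI-compatible endpoint

def build_chat_request(model: str, prompt: str) -> dict:
    """Build an OpenAI-style chat-completion payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_chat_request("meta-llama/Meta-Llama-3-8B-Instruct", "Hello!")
body = json.dumps(payload)
# POST `body` to f"{BASE_URL}/chat/completions" with the header
# "Authorization: Bearer <token>" -- any HTTP client, or the official
# openai package configured with base_url=BASE_URL, will work.
```

Migrating existing OpenAI-based code is then usually just a matter of swapping the base URL and API key.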

Running inference on DeepInfra is easier and more cost-effective than using your own hardware or other cloud providers. You only pay for what you use. For example, in the case of large language models (LLMs), you pay only for the input and output tokens.
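Token-based billing makes cost estimation a one-line calculation. The helper below sketches it; the per-million-token prices used here are made-up placeholders, not actual DeepInfra rates, so check the model page for real pricing.

```python
def estimate_cost(tokens_in: int, tokens_out: int,
                  price_in_per_m: float, price_out_per_m: float) -> float:
    """Estimate the cost of one request, given per-million-token prices."""
    return (tokens_in / 1_000_000 * price_in_per_m
            + tokens_out / 1_000_000 * price_out_per_m)

# Hypothetical prices: $0.10 / 1M input tokens, $0.40 / 1M output tokens.
cost = estimate_cost(tokens_in=1200, tokens_out=350,
                     price_in_per_m=0.10, price_out_per_m=0.40)
print(f"${cost:.5f}")  # roughly $0.00026 for this request
```

There is no charge for idle capacity: if you send no requests, you pay nothing.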

You can check out all available models or deploy your own. We are constantly adding more, and DeepInfra is usually among the first to offer a new model once it is released.

Check out Getting Started for a quick dive.

You can find your authorization tokens and monitor your deployments, usage, and logs in the Dashboard.

We offer LangChain integration for supported LLMs.

For announcements and tutorials, please check our Blog.