
Fast ML Inference, Simple API

Run the top AI models through a simple, pay-per-use API. Low-cost, scalable, production-ready infrastructure.

# install deepctl
curl https://deepinfra.com/get.sh | sh

# deploy a model
deepctl deploy create -m stabilityai/stable-diffusion-2-1

# use it
deepctl infer -m stabilityai/stable-diffusion-2-1 \
  -i prompt=magic -o images=magic.jpg

Featured models:

What we loved, used and implemented the most last month:

View all models

How to deploy Deep Infra in seconds

A powerful, self-serve machine learning platform that turns models into scalable APIs in just a few clicks.
Download deepctl

Sign up for a Deep Infra account using GitHub, or log in using GitHub.

Deploy a model

Choose among hundreds of the most popular ML models.

Call Your Model in Production

Use a simple REST API to call your model.
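As a sketch of what such a call could look like: the snippet below builds an HTTP inference request with only the standard library. The endpoint path and Bearer-token auth scheme are assumptions inferred from the deepctl example above, not confirmed details; check the API docs for the exact format.

```python
import json
import urllib.request

API_BASE = "https://api.deepinfra.com/v1/inference"  # assumed base URL


def build_request(model: str, token: str, **inputs) -> urllib.request.Request:
    """Build (but do not send) an inference request for `model`."""
    return urllib.request.Request(
        url=f"{API_BASE}/{model}",
        data=json.dumps({"input": inputs}).encode(),
        headers={
            "Authorization": f"Bearer {token}",  # assumed auth scheme
            "Content-Type": "application/json",
        },
        method="POST",
    )


# mirrors `deepctl infer -m stabilityai/stable-diffusion-2-1 -i prompt=magic`
req = build_request("stabilityai/stable-diffusion-2-1", "YOUR_TOKEN", prompt="magic")
print(req.full_url)
```

Sending it is then a single `urllib.request.urlopen(req)` call.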


Deepinfra Benefits

Deploy models to production faster and more cheaply with our serverless GPUs than by building the infrastructure yourself.

Low Latency
  • Model is deployed in multiple regions

  • Close to the user

  • Fast network

  • Autoscaling

Cost Effective
  • Share resources

  • Pay per use

  • Simple pricing

Serverless
  • No ML Ops needed

  • Better cost efficiency

  • Hassle free ML infrastructure


Auto Scaling
  • Fast scaling infrastructure

  • Maintain low latency

  • Scale down when not needed

Run costs

Simple Pricing, Deep Infrastructure

With this pricing model, you only pay for the exact number of inferences that you use. This means that there are no long-term contracts or upfront costs, and you can easily scale up and down as your business needs change.

$0.0005 / second
($0.03 / minute, 78% less than Replicate)

Nvidia A100 GPU

1 hour free

  • $0.0005 per second

  • billed per millisecond of inference execution time
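A quick sanity check on the numbers above (illustrative arithmetic only; the Replicate price is merely what the quoted "78% less" figure implies, not taken from their price list):

```python
# A100 price from the bullets above
per_second = 0.0005

# $0.0005/s x 60 s = $0.03/min, matching the quoted per-minute price
per_minute = per_second * 60

# if $0.03/min is "78% less than Replicate", the implied Replicate
# price is 0.03 / (1 - 0.78), roughly $0.136 per minute
implied_replicate_per_minute = per_minute / (1 - 0.78)

# a single 1500 ms inference, billed per millisecond: $0.00075
cost_1500ms = 1500 / 1000 * per_second

print(per_minute, round(implied_replicate_per_minute, 3), cost_1500ms)
```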

Hardware

All models run on A100 GPUs, optimized for inference performance and low latency.

Auto Scaling

Our system automatically scales the model to more hardware based on your needs. To eliminate cold starts, you can also reserve GPU memory at $0.04 per GB per hour.
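At the quoted rate, the cost of keeping memory reserved works out as follows (a quick sketch; the 10 GB and 24-hour figures are just illustrations, not recommendations):

```python
RATE_PER_GB_HOUR = 0.04  # reserved GPU memory price from the text above


def reservation_cost(gb: float, hours: float) -> float:
    """Cost of keeping `gb` of GPU memory reserved for `hours`."""
    return gb * hours * RATE_PER_GB_HOUR


# keeping 10 GB warm for a full day: 10 * 24 * 0.04 = $9.60
daily = reservation_cost(10, 24)
print(f"${daily:.2f}")
```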

Billing

Each inference request's execution time is measured with millisecond precision and added to your account. Once a month we charge you for the time you've used. You can find your current usage in your account.
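The metering described above can be pictured as summing per-request execution times at the A100 rate from the pricing section (the request durations here are made-up examples):

```python
RATE_PER_SECOND = 0.0005  # A100 price from the pricing section

# execution time of each request this month, in milliseconds (illustrative)
durations_ms = [850, 1500, 320]

# each request is metered per millisecond; the total is charged monthly
monthly_charge = sum(ms / 1000 * RATE_PER_SECOND for ms in durations_ms)
print(f"${monthly_charge:.6f}")
```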


© 2023 Deep Infra. All rights reserved.
