
How to deploy google/flan-ul2 - simple. (open source ChatGPT alternative)
Published on 2023.03.17 by Nikola Borisov

Flan-UL2 is arguably the best open-source model available right now for chatbots, and in this post we will show you how to get started with it very easily. Flan-UL2 is large: 20B parameters. It is a fine-tuned version of the UL2 model, trained on the Flan dataset. Because the model is so large, it is not easy to deploy on your own machine. If you rent a GPU on AWS, it will cost you around $1.50 per hour, or about $1,080 per month. With DeepInfra model deployments you only pay for inference time, and we do not charge for cold starts. Our pricing is $0.0005 per second of inference on an Nvidia A100, which translates to roughly $0.0001 per token generated by Flan-UL2.
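The cost comparison above is easy to check with some back-of-the-envelope arithmetic. The prices come from the post itself; the implied tokens-per-second figure is just what falls out of dividing the two quoted rates:

```python
# Back-of-the-envelope cost comparison using the numbers quoted above.
AWS_HOURLY = 1.50                     # $/hour for a rented GPU on AWS
aws_monthly = AWS_HOURLY * 24 * 30    # an always-on instance, 30-day month

PRICE_PER_SECOND = 0.0005             # DeepInfra price per second on an A100
PRICE_PER_TOKEN = 0.0001              # approximate per-token cost quoted above

# Dividing the two rates implies roughly 5 tokens generated per second.
implied_tokens_per_second = PRICE_PER_SECOND / PRICE_PER_TOKEN

print(aws_monthly)                    # 1080.0
```

The key difference: the AWS figure accrues whether or not the GPU is doing anything, while the per-second price only accrues while a request is actually running.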

Also check out the model page at https://deepinfra.com/google/flan-ul2, where you can run inferences and find the docs/API for calling the model via curl.

Getting started

First, you'll need to get an API key from the DeepInfra dashboard.

  1. Sign up or log in to your DeepInfra account
  2. Navigate to the API Keys section in the dashboard
  3. Create a new API key for authentication
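Once you have a key, keep it out of your source code. A common pattern is to read it from an environment variable; the name `DEEPINFRA_API_KEY` below is just a convention for this post, not something the platform mandates:

```python
import os

# Read the API key from an environment variable so it never lands in code.
# DEEPINFRA_API_KEY is our naming convention here, not a required name.
api_key = os.environ.get("DEEPINFRA_API_KEY", "YOUR_API_KEY")

# The key is sent as a bearer token on every request.
headers = {"Authorization": f"Bearer {api_key}"}
```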

Deployment

You can deploy the google/flan-ul2 model easily through the web dashboard or the API. The model is deployed automatically the first time you make an inference request.

Inference

You can use it with our REST API. Here's how to use it with curl:

curl -X POST \
    -d '{"prompt": "Hello, how are you?"}' \
    -H 'Content-Type: application/json' \
    -H "Authorization: Bearer YOUR_API_KEY" \
    'https://api.deepinfra.com/v1/inference/google/flan-ul2'
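If you prefer Python, here is a sketch of the same request using only the standard library. It mirrors the curl call exactly; replace `YOUR_API_KEY` with a real key before running:

```python
import json
import urllib.request

API_URL = "https://api.deepinfra.com/v1/inference/google/flan-ul2"

def flan_ul2(prompt: str, api_key: str) -> dict:
    """Send one inference request; mirrors the curl call above."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps({"prompt": prompt}).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Example (requires a real key):
# print(flan_ul2("Hello, how are you?", "YOUR_API_KEY"))
```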

To see the full documentation of how to call this model, check out the model page on the DeepInfra website or the API documentation.

If you want a list of all the models you can use on DeepInfra, you can visit the models page on our website or use the API to get a list of available models.
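Fetching the model list programmatically looks much like the inference call. Note that the endpoint path below is an assumption for illustration; confirm the exact path against the API documentation:

```python
import json
import urllib.request

def list_models(api_key: str) -> list:
    """Sketch: fetch available models. The /models/list path is an
    assumption here -- check the API docs for the exact endpoint."""
    req = urllib.request.Request(
        "https://api.deepinfra.com/models/list",
        headers={"Authorization": f"Bearer {api_key}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```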

There is no easier way to get started with arguably one of the best open-source LLMs. That was quite easy, right? You did not have to deal with Docker, transformers, PyTorch, etc. If you have any questions, just reach out to us on our Discord server.
