

The easiest way to build AI applications with Llama 2 LLMs.
Published on 2023.08.02 by Nikola Borisov

The long-awaited Llama 2 models are finally here! We are excited to show you how to use them with DeepInfra. This collection of models represents the state of the art in open-source language models. They are made available by Meta AI under a license that allows commercial use. So now is the time to build your next AI application with Llama 2 hosted by DeepInfra, and save a ton of money compared to OpenAI's API.

Picking the right model

There are three sizes of Llama 2 models (7B, 13B, and 70B parameters), as well as a chat-tuned variant of each size.

Depending on the application you are building, you might want a different model. Smaller models are faster and cheaper to run per generated token. Larger models take longer and cost more per token, but they are more accurate.
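As a rough sketch, each chat-tuned checkpoint is addressed through the inference API by its model ID. The 7B URL below is the one used later in this post; the 13B and 70B IDs follow the same Hugging Face naming pattern, so verify them on each model's API page before relying on them:

```python
# Chat-tuned Llama 2 model IDs (Hugging Face naming convention; the 7B ID
# appears later in this post, the others are assumed to follow the same pattern).
LLAMA2_CHAT_MODELS = {
    "7b": "meta-llama/Llama-2-7b-chat-hf",
    "13b": "meta-llama/Llama-2-13b-chat-hf",
    "70b": "meta-llama/Llama-2-70b-chat-hf",
}

def inference_url(size: str) -> str:
    """Build the DeepInfra inference endpoint for a given model size."""
    return f"https://api.deepinfra.com/v1/inference/{LLAMA2_CHAT_MODELS[size]}"

print(inference_url("7b"))
```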

Getting started

Simply create an account on DeepInfra and get yourself an API Key.

# set the API key as an environment variable
export AUTH_TOKEN=<your-api-key>

Each model has a detailed API documentation page that will guide you through the process of using it. For example, here is the API documentation for the llama-2-7b-chat model.

Running inference

Making an inference request is as simple as sending a POST request to our API.

curl -X POST \
    -d '{"input": "Who is Bill Clinton?"}'  \
    -H "Authorization: bearer $AUTH_TOKEN"  \
    -H 'Content-Type: application/json'  \
    'https://api.deepinfra.com/v1/inference/meta-llama/Llama-2-7b-chat-hf'
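The same request can be sent from Python. Here is a minimal sketch using only the standard library; the network call itself is left commented out so the snippet can be read (and run) without an API key:

```python
import json
import os
import urllib.request

API_URL = "https://api.deepinfra.com/v1/inference/meta-llama/Llama-2-7b-chat-hf"

def build_request(prompt: str, token: str) -> urllib.request.Request:
    """Assemble the same POST request the curl command above sends."""
    body = json.dumps({"input": prompt}).encode("utf-8")
    return urllib.request.Request(
        API_URL,
        data=body,
        headers={
            "Authorization": f"bearer {token}",
            "Content-Type": "application/json",
        },
    )

req = build_request("Who is Bill Clinton?", os.environ.get("AUTH_TOKEN", ""))
# response = json.loads(urllib.request.urlopen(req).read())  # sends the request
```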

And you will get output like this:

{
   "inference_status" : {
      "cost" : 0.00454849982634187,
      "runtime_ms" : 9097,
      "status" : "succeeded"
   },
   "request_id" : "RKQsJyO5n7ZLif------",
   "results" : [
      {
         "generated_text" : "Who is Bill Clinton?\n\nAnswer: Bill Clinton is an American politician who served as the 42nd President of the United States from 1993 to 2001. He was born on August 19, 1946, in Hope, Arkansas, and grew up in a poor family. Clinton graduated from Georgetown University and received a Rhodes Scholarship to study at Oxford University. He later attended Yale Law School and became a professor of law at the University of Arkansas.\n\nClinton entered politics in the 1970s and served as Attorney General of Arkansas from 1979 to 1981. He was elected Governor of Arkansas in 1982 and served four terms, from 1983 to 1992. In 1992, Clinton was elected President of the United States, defeating incumbent President George H.W. Bush.\n\nDuring his presidency, Clinton implemented several notable policies, including the Don't Ask, Don't Tell Repeal Act, which allowed LGBT individuals to serve openly in the military, and the North American Free"
      }
   ]
}
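Once the response comes back, the generated text sits under `results[0].generated_text` and the billing details under `inference_status`. A short parsing sketch, with the response values trimmed for brevity:

```python
import json

# A response shaped like the JSON above (text and cost trimmed for brevity).
raw = """
{
  "inference_status": {"cost": 0.0045, "runtime_ms": 9097, "status": "succeeded"},
  "request_id": "RKQsJyO5n7ZLif------",
  "results": [{"generated_text": "Bill Clinton is an American politician..."}]
}
"""

response = json.loads(raw)
if response["inference_status"]["status"] == "succeeded":
    text = response["results"][0]["generated_text"]
    print(text)
```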

It is easy to build AI applications with Llama 2 models hosted by DeepInfra.

If you need any help, just reach out to us on our Discord server.
