
The easiest way to build AI applications with Llama 2 LLMs.
Published on 2023.08.02 by Nikola Borisov

The long-awaited Llama 2 models are finally here! We are excited to show you how to use them with DeepInfra. This collection of models represents the state of the art in open-source language models. They are made available by Meta AI under a license that allows commercial use. So now is the time to build your next AI application with Llama 2 hosted by DeepInfra, and save a ton of money compared to OpenAI's API.

Picking the right model

There are three sizes of Llama 2 model (7B, 13B, and 70B parameters), as well as a chat-tuned variant of each size:

meta-llama/Llama-2-7b-hf and meta-llama/Llama-2-7b-chat-hf
meta-llama/Llama-2-13b-hf and meta-llama/Llama-2-13b-chat-hf
meta-llama/Llama-2-70b-hf and meta-llama/Llama-2-70b-chat-hf

Depending on the application you are building, you might want a different model. Smaller models are faster and cheaper per generated token; larger models take longer and cost more per token, but produce more accurate results.
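As a sketch of that trade-off, a tiny (hypothetical) helper can map a size string to the corresponding DeepInfra model ID; the chat-tuned IDs follow the meta-llama/Llama-2-*-chat-hf naming used in the inference example later in this post.

```python
# Hypothetical helper: map a Llama 2 size to its DeepInfra chat model ID.
LLAMA2_CHAT_MODELS = {
    "7b": "meta-llama/Llama-2-7b-chat-hf",
    "13b": "meta-llama/Llama-2-13b-chat-hf",
    "70b": "meta-llama/Llama-2-70b-chat-hf",
}

def pick_model(size: str) -> str:
    """Return the model ID for a given size, defaulting to the smallest."""
    return LLAMA2_CHAT_MODELS.get(size.lower(), LLAMA2_CHAT_MODELS["7b"])
```

Start with the 7B chat model while prototyping, then move up a size only if the quality is not good enough for your use case.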

Getting started

Simply create an account on DeepInfra and get yourself an API Key.

# set the API key as an environment variable
export AUTH_TOKEN=<your-api-key>

Each model has a detailed API documentation page that will guide you through the process of using it. For example, here is the API documentation for the llama-2-7b-chat model.

Running inference

Making an inference request is as easy as making a POST request to our API.

curl -X POST \
    -d '{"input": "Who is Bill Clinton?"}'  \
    -H "Authorization: bearer $AUTH_TOKEN"  \
    -H 'Content-Type: application/json'  \
    'https://api.deepinfra.com/v1/inference/meta-llama/Llama-2-7b-chat-hf'
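The same request works from any HTTP client. Here is a minimal Python sketch using only the standard library (urllib), with the same endpoint, payload, and headers as the curl call above; the helper names are our own.

```python
import json
import os
import urllib.request

API_URL = "https://api.deepinfra.com/v1/inference/meta-llama/Llama-2-7b-chat-hf"

def build_request(prompt: str, token: str) -> urllib.request.Request:
    """Build the POST request; sending it is left to the caller."""
    payload = json.dumps({"input": prompt}).encode("utf-8")
    return urllib.request.Request(
        API_URL,
        data=payload,
        headers={
            "Authorization": f"bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

def run_inference(prompt: str) -> str:
    """Send the request and return the generated text."""
    req = build_request(prompt, os.environ["AUTH_TOKEN"])
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["results"][0]["generated_text"]
```

With AUTH_TOKEN set in your environment, `run_inference("Who is Bill Clinton?")` returns the completion string from the response shown below.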

And you will get output like this:

{
   "inference_status" : {
      "cost" : 0.00454849982634187,
      "runtime_ms" : 9097,
      "status" : "succeeded"
   },
   "request_id" : "RKQsJyO5n7ZLif------",
   "results" : [
      {
         "generated_text" : "Who is Bill Clinton?\n\nAnswer: Bill Clinton is an American politician who served as the 42nd President of the United States from 1993 to 2001. He was born on August 19, 1946, in Hope, Arkansas, and grew up in a poor family. Clinton graduated from Georgetown University and received a Rhodes Scholarship to study at Oxford University. He later attended Yale Law School and became a professor of law at the University of Arkansas.\n\nClinton entered politics in the 1970s and served as Attorney General of Arkansas from 1979 to 1981. He was elected Governor of Arkansas in 1982 and served four terms, from 1983 to 1992. In 1992, Clinton was elected President of the United States, defeating incumbent President George H.W. Bush.\n\nDuring his presidency, Clinton implemented several notable policies, including the Don't Ask, Don't Tell Repeal Act, which allowed LGBT individuals to serve openly in the military, and the North American Free"
      }
   ]
}
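In your application you will usually only need a few fields from that response: the status, the generated text, and the billed cost. A short sketch, assuming the same response shape (the sample below is trimmed from the output above):

```python
import json

# Trimmed sample of the response shown above.
sample = """
{
   "inference_status": {"cost": 0.00454849982634187, "runtime_ms": 9097, "status": "succeeded"},
   "request_id": "RKQsJyO5n7ZLif------",
   "results": [{"generated_text": "Who is Bill Clinton? Answer: Bill Clinton is an American politician ..."}]
}
"""

resp = json.loads(sample)
status = resp["inference_status"]["status"]  # "succeeded" on success
text = resp["results"][0]["generated_text"]  # the model's completion
cost = resp["inference_status"]["cost"]      # billed cost of this request
```

Check `status` before using `text`; the `cost` field lets you track spend per request.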

It is easy to build AI applications with Llama 2 models hosted by DeepInfra.

If you need any help, just reach out to us on our Discord server.
