Llama 3.3-70B Turbo is a highly optimized version of the Llama 3.3-70B model, utilizing FP8 quantization to deliver significantly faster inference speeds with a minor trade-off in accuracy. The model is designed to be helpful, safe, and flexible, with a focus on responsible deployment and mitigating potential risks such as bias, toxicity, and misinformation. It achieves state-of-the-art performance on various benchmarks, including conversational tasks, language translation, and text generation.
You can POST to our OpenAI Chat Completions compatible endpoint.
Given a list of messages from a conversation, the model will return a response.
curl "https://api.deepinfra.com/v1/openai/chat/completions" \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $DEEPINFRA_TOKEN" \
-d '{
"model": "meta-llama/Llama-3.3-70B-Instruct-Turbo",
"messages": [
{
"role": "user",
"content": "Hello!"
}
]
}'
To which you'd get something like:
{
  "id": "chatcmpl-guMTxWgpFf",
  "object": "chat.completion",
  "created": 1694623155,
  "model": "meta-llama/Llama-3.3-70B-Instruct-Turbo",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": " Hello! It's nice to meet you. Is there something I can help you with or would you like to chat for a bit?"
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 15,
    "completion_tokens": 16,
    "total_tokens": 31,
    "estimated_cost": 0.0000268
  }
}
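Because the endpoint is OpenAI-compatible, the same request can also be made with the official OpenAI Python client pointed at DeepInfra's base URL. A minimal sketch (reading the token from a DEEPINFRA_TOKEN environment variable is an assumption; supply the key however you prefer):

import os
from openai import OpenAI

# Point the OpenAI client at DeepInfra's OpenAI-compatible endpoint.
client = OpenAI(
    api_key=os.environ["DEEPINFRA_TOKEN"],
    base_url="https://api.deepinfra.com/v1/openai",
)

response = client.chat.completions.create(
    model="meta-llama/Llama-3.3-70B-Instruct-Turbo",
    messages=[{"role": "user", "content": "Hello!"}],
)

print(response.choices[0].message.content)
print(response.usage)  # prompt_tokens, completion_tokens, total_tokens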
To create a longer, chat-like conversation, add each assistant response and each user message to every subsequent request. This gives the model the full context, so it can provide better answers. You can tweak the behavior further by providing a system message.
curl "https://api.deepinfra.com/v1/openai/chat/completions" \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $DEEPINFRA_TOKEN" \
-d '{
"model": "meta-llama/Llama-3.3-70B-Instruct-Turbo",
"messages": [
{
"role": "system",
"content": "Respond like a michelin starred chef."
},
{
"role": "user",
"content": "Can you name at least two different techniques to cook lamb?"
},
{
"role": "assistant",
"content": "Bonjour! Let me tell you, my friend, cooking lamb is an art form, and I'"'"'m more than happy to share with you not two, but three of my favorite techniques to coax out the rich, unctuous flavors and tender textures of this majestic protein. First, we have the classic \"Sous Vide\" method. Next, we have the ancient art of \"Sous le Sable\". And finally, we have the more modern technique of \"Hot Smoking.\""
},
{
"role": "user",
"content": "Tell me more about the second method."
}
]
}'
The conversation above might return something like the following:
{
  "id": "chatcmpl-b23a3fb60cde42ce8f24bb980b4dee87",
  "object": "chat.completion",
  "created": 1715688169,
  "model": "meta-llama/Llama-3.3-70B-Instruct-Turbo",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Sous le Sable, my friend! It's an ancient technique that's been used for centuries in the Middle East and North Africa. The name itself..."
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 149,
    "total_tokens": 487,
    "completion_tokens": 338,
    "estimated_cost": 0.00035493
  }
}
The longer the conversation gets, the more time it takes the model to generate the response. The number of messages that you can have in a conversation is limited by the context size of a model. Larger models also usually take more time to respond.
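In code, the simplest way to keep that context is to hold the messages in a list and append both your user turn and the model's reply after every call. A sketch along those lines, using the same client setup as above (the ask helper is illustrative, not part of the API):

import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["DEEPINFRA_TOKEN"],
    base_url="https://api.deepinfra.com/v1/openai",
)

# The running conversation, starting with an optional system message.
messages = [{"role": "system", "content": "Respond like a michelin starred chef."}]

def ask(user_text):
    messages.append({"role": "user", "content": user_text})
    response = client.chat.completions.create(
        model="meta-llama/Llama-3.3-70B-Instruct-Turbo",
        messages=messages,
    )
    reply = response.choices[0].message.content
    # Keep the assistant turn so the next request carries the full context.
    messages.append({"role": "assistant", "content": reply})
    return reply

print(ask("Can you name at least two different techniques to cook lamb?"))
print(ask("Tell me more about the second method."))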
You can turn any of the requests above into a streaming request by passing "stream": true:
curl "https://api.deepinfra.com/v1/openai/chat/completions" \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $DEEPINFRA_TOKEN" \
-d '{
"model": "meta-llama/Llama-3.3-70B-Instruct-Turbo",
"stream": true,
"messages": [
{
"role": "user",
"content": "Hello!"
}
]
}'
To which you'd get a sequence of SSE events, finishing with [DONE].
data: {"id": "Rc5hsIPHOSfMP3rNSFUw9tfR", "object": "chat.completion.chunk", "created": 1694623354, "model": "meta-llama/Llama-3.3-70B-Instruct-Turbo", "choices": [{"index": 0, "delta": {"role": "assistant", "content": " "}, "finish_reason": null}]}
data: {"id": "Rc5hsIPHOSfMP3rNSFUw9tfR", "object": "chat.completion.chunk", "created": 1694623354, "model": "meta-llama/Llama-3.3-70B-Instruct-Turbo", "choices": [{"index": 0, "delta": {"role": "assistant", "content": " Hi"}, "finish_reason": null}]}
data: {"id": "Rc5hsIPHOSfMP3rNSFUw9tfR", "object": "chat.completion.chunk", "created": 1694623354, "model": "meta-llama/Llama-3.3-70B-Instruct-Turbo", "choices": [{"index": 0, "delta": {"role": "assistant", "content": "!"}, "finish_reason": null}]}
data: {"id": "Rc5hsIPHOSfMP3rNSFUw9tfR", "object": "chat.completion.chunk", "created": 1694623354, "model": "meta-llama/Llama-3.3-70B-Instruct-Turbo", "choices": [{"index": 0, "delta": {"role": "assistant", "content": ""}, "finish_reason": null}]}
data: {"id": "Rc5hsIPHOSfMP3rNSFUw9tfR", "object": "chat.completion.chunk", "created": 1694623354, "model": "meta-llama/Llama-3.3-70B-Instruct-Turbo", "choices": [{"index": 0, "delta": {"role": "assistant", "content": "</s>"}, "finish_reason": null}]}
data: {"id": "Rc5hsIPHOSfMP3rNSFUw9tfR", "object": "chat.completion.chunk", "created": 1694623354, "model": "meta-llama/Llama-3.3-70B-Instruct-Turbo", "choices": [{"index": 0, "delta": {}, "finish_reason": "stop"}]}
data: [DONE]
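With the Python client, the same flag turns the call into an iterator of chunks, each mirroring one of the SSE events above. A minimal sketch:

import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["DEEPINFRA_TOKEN"],
    base_url="https://api.deepinfra.com/v1/openai",
)

stream = client.chat.completions.create(
    model="meta-llama/Llama-3.3-70B-Instruct-Turbo",
    messages=[{"role": "user", "content": "Hello!"}],
    stream=True,
)

# Print each token fragment as soon as it arrives.
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
print()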
messages (array)
Conversation messages: (user,assistant,tool)*,user, including one system message anywhere.

temperature (number)
What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.
Default value: 1
Range: 0 ≤ temperature ≤ 2

top_p (number)
An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.
Default value: 1
Range: 0 < top_p ≤ 1

min_p (number)
Float that represents the minimum probability for a token to be considered, relative to the probability of the most likely token. Must be in [0, 1]. Set to 0 to disable this.
Default value: 0
Range: 0 ≤ min_p ≤ 1

top_k (integer)
Sample from the best k (number of) tokens. 0 means off.
Default value: 0
Range: 0 ≤ top_k < 1000

max_tokens (integer)
The maximum number of tokens to generate in the chat completion. The total length of input tokens and generated tokens is limited by the model's context length.
Range: 0 ≤ max_tokens ≤ 1000000

presence_penalty (number)
Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.
Default value: 0
Range: -2 ≤ presence_penalty ≤ 2

frequency_penalty (number)
Positive values penalize new tokens based on how many times they appear in the text so far, increasing the model's likelihood to talk about new topics.
Default value: 0
Range: -2 ≤ frequency_penalty ≤ 2

tool_choice (string)
Controls which (if any) function is called by the model. none means the model will not call a function and instead generates a message. auto means the model can pick between generating a message or calling a function. Specifying a particular function is not currently supported. none is the default when no functions are present; auto is the default if functions are present.

repetition_penalty (number)
Alternative penalty for repetition, but multiplicative instead of additive (> 1 penalizes, < 1 encourages).
Default value: 1
Range: 0.01 ≤ repetition_penalty ≤ 5

user (string)
A unique identifier representing your end-user, which can help monitor and detect abuse. Avoid sending us any identifying information. We recommend hashing user identifiers.

seed (integer)
Seed for the random number generator. If not provided, a random seed is used. Determinism is not guaranteed.
Range: 0 ≤ seed < 9223372036854776000
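To illustrate how a few of these parameters combine in practice, here is a sketch of a single request that sets several of them (the values are arbitrary examples, not recommendations):

import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["DEEPINFRA_TOKEN"],
    base_url="https://api.deepinfra.com/v1/openai",
)

response = client.chat.completions.create(
    model="meta-llama/Llama-3.3-70B-Instruct-Turbo",
    messages=[{"role": "user", "content": "Suggest a name for a seaside bakery."}],
    temperature=0.7,        # lower than the default of 1 for more focused output
    top_p=0.9,              # nucleus sampling
    max_tokens=200,         # cap on generated tokens
    presence_penalty=0.5,   # nudge the model toward new topics
    seed=42,                # best-effort reproducibility; determinism is not guaranteed
    user="hashed-user-id",  # opaque identifier; avoid sending personal data
)

print(response.choices[0].message.content)

Parameters that are not part of the standard OpenAI signature (such as top_k, min_p, and repetition_penalty) can always be included directly in the JSON body of a raw request, as in the curl examples above; whether a client library offers a pass-through for extra fields depends on the library.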