

Compare Llama2 vs OpenAI models for FREE.
Published on 2023.09.28 by Nikola Borisov

At DeepInfra we host the best open-source LLMs, and we are always working hard to keep our APIs simple and easy to use.

Today we are excited to announce a very easy way to try models like Llama 2 70B and Mistral 7B and compare them to OpenAI's models. You only need to change the API endpoint URL and the model name to quickly see if these models are a good fit for your application.

Here is a quick example of how to use the OpenAI Python client with our models:

import openai

# Point the OpenAI client at the DeepInfra endpoint
openai.api_base = "https://api.deepinfra.com/v1/openai"
# Leave the API key empty. You don't need one to try our models.
openai.api_key = ""

# Your chosen model here
MODEL_DI = "meta-llama/Llama-2-70b-chat-hf"
chat_completion = openai.ChatCompletion.create(
    model=MODEL_DI,
    messages=[{"role": "user", "content": "Hello world"}],
    stream=True,
)

# Print the streamed tokens as they arrive
for event in chat_completion:
    print(event.choices[0].delta.get("content", ""), end="", flush=True)

Rate limits without an API key

To make it as simple as possible, you don't even have to create an account with DeepInfra to try our models. Just pass an empty string as the api_key and you are good to go. Unauthenticated requests are rate-limited by IP address.
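When an unauthenticated client exceeds the per-IP limit, the server will respond with HTTP 429, so a simple client-side backoff helps while you experiment. The sketch below is a minimal example using only the standard library; the retry count, delay schedule, and the `/chat/completions` path appended to the base URL are our assumptions, not official recommendations:

```python
import json
import time
import urllib.error
import urllib.request

def backoff_delays(max_retries=5, base=1.0):
    """Exponential delays in seconds: 1, 2, 4, 8, 16."""
    return [base * (2 ** i) for i in range(max_retries)]

def chat_with_retry(messages, model="meta-llama/Llama-2-70b-chat-hf"):
    """POST a chat completion, retrying only on HTTP 429 (rate limited)."""
    body = json.dumps({"model": model, "messages": messages}).encode()
    for delay in backoff_delays():
        req = urllib.request.Request(
            "https://api.deepinfra.com/v1/openai/chat/completions",
            data=body,
            headers={"Content-Type": "application/json"},
        )
        try:
            with urllib.request.urlopen(req) as resp:
                return json.load(resp)
        except urllib.error.HTTPError as err:
            if err.code != 429:
                raise
            time.sleep(delay)
    raise RuntimeError("still rate limited after retries")
```

For example, `chat_with_retry([{"role": "user", "content": "Hello world"}])` returns the parsed JSON response once a request gets through.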

Pricing and production readiness

When you are ready to use our models in production, create an account at DeepInfra and get an API key. We offer the best pricing for Llama 2 70B at just $1 per 1M tokens. If you need any help, reach out to us on our Discord server.
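At that rate, estimating a bill is simple arithmetic. The sketch below also reads the API key from an environment variable; note that the variable name `DEEPINFRA_API_KEY` is our own naming choice for illustration, not an official convention:

```python
import os

# In production, load your DeepInfra API key from the environment
# (DEEPINFRA_API_KEY is an illustrative name, not an official one).
API_KEY = os.environ.get("DEEPINFRA_API_KEY", "")

PRICE_PER_MILLION_TOKENS = 1.00  # USD per 1M tokens for Llama 2 70B

def estimate_cost(total_tokens):
    """Rough cost in USD for a given number of tokens."""
    return total_tokens / 1_000_000 * PRICE_PER_MILLION_TOKENS

print(f"{250_000:>9} tokens -> ${estimate_cost(250_000):.2f}")    # $0.25
print(f"{1_000_000:>9} tokens -> ${estimate_cost(1_000_000):.2f}")  # $1.00
```

This only accounts for token charges at the quoted Llama 2 70B rate; other models are priced separately.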
