
At DeepInfra, we are excited to announce our integration with LlamaIndex. LlamaIndex is a powerful library that lets you index and search documents using various language models and embeddings. In this blog post, we will show you how to chat with books using DeepInfra and LlamaIndex.
We will be using the Project Gutenberg library to get the text of the book "Crime and Punishment" by Fyodor Dostoevsky. We will then use the Meta Llama 3 70B language model and the MiniLM embedding model to chat with the book.
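The script later in this post finds the book's numeric ID by searching the Gutendex API, which returns JSON of the form `{"results": [{"id": ..., "title": ..., ...}, ...]}`. As a preview, here is a minimal sketch of the matching step, run against a hand-written sample of that response shape (the sample entries and the helper name are illustrative, not part of the Gutendex API):

```python
# Hand-written sample mimicking Gutendex's {"results": [...]} shape.
sample_results = [
    {"id": 2554, "title": "Crime and Punishment"},
    {"id": 28054, "title": "The Brothers Karamazov"},
]

def pick_book_id(results, title):
    # Case-insensitive substring match, the same idea as the full script below.
    for book in results:
        if title.lower() in book["title"].lower():
            return book["id"]
    return None

print(pick_book_id(sample_results, "crime and punishment"))  # → 2554
```

Matching on a substring of the title keeps the lookup forgiving about capitalization and edition suffixes in Gutendex titles.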
First, let's create a virtual environment and activate it:

```shell
python3 -m venv venv
source venv/bin/activate
```
Here are the required packages:

- llama-index
- llama-index-llms-deepinfra
- llama-index-embeddings-deepinfra

Let's install them:

```shell
pip install llama-index llama-index-llms-deepinfra llama-index-embeddings-deepinfra
```
Before getting started, we also need a DeepInfra API key. You can get one from the DeepInfra dashboard.
Let's create a `.env` file in the root directory of the project and add the following line:

```shell
DEEPINFRA_API_TOKEN=YOUR_DEEPINFRA_API_KEY
```
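In the script below we use `python-dotenv` to load this file, but if you are curious what `load_dotenv` does behind the scenes, here is a minimal, dependency-free sketch of the same idea (this simplified loader handles only plain `KEY=VALUE` lines and blank/comment lines):

```python
import os

def load_env_file(path=".env"):
    # Minimal stand-in for python-dotenv's load_dotenv: read KEY=VALUE lines
    # and put them into os.environ without overwriting existing values.
    with open(path) as f:
        for raw in f:
            line = raw.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue  # skip blanks, comments, and malformed lines
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())
```

Once `DEEPINFRA_API_TOKEN` is in the environment, the DeepInfra integrations for LlamaIndex can pick it up without you passing the key around explicitly.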
Here's a Python script to chat with the book "Crime and Punishment":

```python
import re

import requests
from dotenv import find_dotenv, load_dotenv

# Load DEEPINFRA_API_TOKEN from .env so the DeepInfra classes can find it.
_ = load_dotenv(find_dotenv())

from llama_index.core import Document, VectorStoreIndex
from llama_index.embeddings.deepinfra import DeepInfraEmbeddingModel
from llama_index.llms.deepinfra import DeepInfraLLM

LLM = "meta-llama/Meta-Llama-3-70B-Instruct"
EMBEDDING = "sentence-transformers/all-MiniLM-L12-v2"
BOOK_TITLE = "Crime and Punishment"


def maybe_get_gutenberg_book_id(title):
    """Search Gutendex and return the ID of the first matching book, or None."""
    url = f"http://gutendex.com/books/?search={title}"
    response = requests.get(url)
    books = response.json()["results"]
    for book in books:
        if title.lower() in book["title"].lower():
            return book["id"]
    return None


def get_document(book_id):
    """Download the plain-text edition of the book and wrap it in a Document."""
    url = f"https://www.gutenberg.org/files/{book_id}/{book_id}-0.txt"
    response = requests.get(url)
    text = response.text
    # Strip non-ASCII characters.
    text = re.sub(r"[^\x00-\x7F]+", "", text)
    return Document(text=text)


if __name__ == "__main__":
    llm = DeepInfraLLM(model=LLM, max_tokens=1000)
    embed_model = DeepInfraEmbeddingModel(model_id=EMBEDDING)

    book_id = maybe_get_gutenberg_book_id(BOOK_TITLE)
    if book_id is None:
        raise SystemExit(f"No Gutenberg match found for {BOOK_TITLE!r}")
    document = get_document(book_id)

    index = VectorStoreIndex.from_documents([document], embed_model=embed_model)
    chat_engine = index.as_chat_engine(llm=llm, max_iterations=20)

    response = chat_engine.chat(
        "Summarize the discussion between Raskolnikov and Pyotr Petrovich"
    )
    print(response)
    # The conversation between Raskolnikov and Pyotr Petrovich takes place at the office of...
```
Voila! You have successfully chatted with the book "Crime and Punishment" using DeepInfra and LlamaIndex. You can now use this code snippet to chat with any book of your choice. Enjoy reading!
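One optional refinement: Project Gutenberg plain-text files wrap the book itself in a license header and footer, which can pollute retrieval results. Gutenberg marks the body with `*** START OF ... ***` and `*** END OF ... ***` lines, so you can trim everything outside them before indexing. A sketch (the helper name is ours, and the regexes assume the common marker wording):

```python
import re

def strip_gutenberg_boilerplate(text):
    # Keep only the text between Gutenberg's START/END markers, if both exist.
    start = re.search(r"\*\*\* ?START OF [^\n]*\*\*\*", text)
    end = re.search(r"\*\*\* ?END OF [^\n]*\*\*\*", text)
    if start and end and start.end() < end.start():
        return text[start.end():end.start()].strip()
    return text  # markers not found: return the text unchanged
```

You could call this helper inside `get_document`, right after downloading the text, so the license boilerplate never makes it into the index.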
For more information on LlamaIndex, please visit our LLM documentation and Embedding documentation.
Feel free to experiment with other books and questions to explore the capabilities of DeepInfra. See you in the next blog post!
Happy chatting! 📚🦙