
Chat with books using DeepInfra and LlamaIndex
Published on 2024.06.07 by Oguz Vuruskaner

At DeepInfra, we are excited to announce our integration with LlamaIndex. LlamaIndex is a powerful library for indexing and searching documents with a wide range of language models and embeddings. In this blog post, we will show you how to chat with books using DeepInfra and LlamaIndex.

We will fetch the text of "Crime and Punishment" by Fyodor Dostoevsky from Project Gutenberg (via the Gutendex search API), then use the Meta Llama 3 70B language model and the MiniLM embedding model to chat with the book.

Requirements

  • Python 3.9 or higher
  • DeepInfra API Key

Installation

First, let's create a virtual environment and activate it:

python3 -m venv venv
source venv/bin/activate

Here are the required packages to install (the script below also uses python-dotenv and requests, so we include them explicitly):

llama-index
llama-index-llms-deepinfra
llama-index-embeddings-deepinfra
python-dotenv
requests

Let's install them:

pip install llama-index llama-index-llms-deepinfra llama-index-embeddings-deepinfra python-dotenv requests

Before getting started, we also need a DeepInfra API key, which you can create from the DeepInfra dashboard.

Let's create a .env file in the root directory of the project and add the following lines:

DEEPINFRA_API_TOKEN=YOUR_DEEPINFRA_API_KEY
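As a quick sanity check, this is roughly what load_dotenv does under the hood: it reads KEY=VALUE lines from a file into os.environ. A stdlib-only sketch of that behavior (the file and value here are throwaway placeholders, not a real key):

```python
import os
import tempfile

# Write a throwaway .env-style file (placeholder value, not a real key).
with tempfile.NamedTemporaryFile("w", suffix=".env", delete=False) as f:
    f.write("DEEPINFRA_API_TOKEN=YOUR_DEEPINFRA_API_KEY\n")
    env_path = f.name

os.environ.pop("DEEPINFRA_API_TOKEN", None)  # start clean for the demo

# Minimal parser mimicking what python-dotenv's load_dotenv does:
# skip blanks and comments, split on the first "=", export the pair.
with open(env_path) as fh:
    for line in fh:
        line = line.strip()
        if line and not line.startswith("#") and "=" in line:
            key, _, value = line.partition("=")
            os.environ.setdefault(key, value)

print(os.environ["DEEPINFRA_API_TOKEN"])  # YOUR_DEEPINFRA_API_KEY
```

In the actual script we rely on python-dotenv's load_dotenv(find_dotenv()) instead of hand-parsing, but the effect on os.environ is the same.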

Code Implementation

Here's a Python script to chat with the book "Crime and Punishment":

import requests
from dotenv import load_dotenv, find_dotenv
import re

_ = load_dotenv(find_dotenv())

from llama_index.core import VectorStoreIndex, Document

from llama_index.llms.deepinfra import DeepInfraLLM
from llama_index.embeddings.deepinfra import DeepInfraEmbeddingModel

LLM = "meta-llama/Meta-Llama-3-70B-Instruct"
EMBEDDING = "sentence-transformers/all-MiniLM-L12-v2"
BOOK_TITLE = "Crime and Punishment"


def maybe_get_gutenberg_book_id(title):
    url = f"https://gutendex.com/books/?search={title}"
    response = requests.get(url)
    response.raise_for_status()
    books = response.json()["results"]
    for book in books:
        if title.lower() in book["title"].lower():
            return book["id"]
    return None


def get_document(book_id):
    url = f"https://www.gutenberg.org/files/{book_id}/{book_id}-0.txt"
    response = requests.get(url)
    response.raise_for_status()
    text = response.text
    # Strip non-ASCII characters left over from the plain-text encoding.
    text = re.sub(r"[^\x00-\x7F]+", "", text)
    return Document(text=text)


if __name__ == "__main__":

    llm = DeepInfraLLM(model=LLM, max_tokens=1000)
    embed_model = DeepInfraEmbeddingModel(model_id=EMBEDDING)

    book_id = maybe_get_gutenberg_book_id(BOOK_TITLE)
    if book_id is None:
        raise SystemExit(f"Could not find '{BOOK_TITLE}' on Project Gutenberg")
    document = get_document(book_id)

    index = VectorStoreIndex.from_documents([document], embed_model=embed_model)
    chat_engine = index.as_chat_engine(
        llm=llm, embed_model=embed_model, max_iterations=20
    )

    response = chat_engine.chat(
        "Summarize the discussion between Raskolnikov and Pyotr Petrovich"
    )
    print(response)

    # The conversation between Raskolnikov and Pyotr Petrovich takes place at the office of...
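Two parts of the script can be exercised without any network call: the case-insensitive title match in maybe_get_gutenberg_book_id and the non-ASCII cleanup in get_document. A standalone sketch with a stubbed Gutendex-style response (the sample data below is invented for illustration):

```python
import re

# Stubbed subset of a Gutendex /books/?search= response (invented for illustration).
books = [
    {"id": 1399, "title": "Anna Karenina"},
    {"id": 2554, "title": "Crime and Punishment"},
]

title = "Crime and Punishment"
# Same case-insensitive substring match used in maybe_get_gutenberg_book_id.
book_id = next(
    (b["id"] for b in books if title.lower() in b["title"].lower()), None
)
print(book_id)  # 2554

# Same substitution get_document applies to the downloaded text:
# runs of non-ASCII characters (curly quotes, em-dashes, ...) are dropped.
raw = "Raskolnikov\u2019s room \u2014 a tiny \u201ccupboard\u201d"
cleaned = re.sub(r"[^\x00-\x7F]+", "", raw)
print(cleaned)
```

Note the cleanup is lossy: curly apostrophes and dashes are removed outright rather than replaced with ASCII equivalents, which is acceptable here since we only feed the text to an embedding model.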

Conclusion

Voila! You have successfully chatted with the book "Crime and Punishment" using DeepInfra and LlamaIndex. You can now use this code snippet to chat with any book of your choice. Enjoy reading!

For more information on LlamaIndex, please visit our LLM documentation and Embedding documentation.

Feel free to experiment with other books and questions to explore the capabilities of DeepInfra. See you in the next blog post!

Happy chatting! 📚🦙
