What are components in LlamaIndex?

LlamaIndex has many components, but rather than going over each of them one by one, we will look at the components used to create a QueryEngine. We focus on the QueryEngine because it is the most relevant component for building agentic RAG workflows in LlamaIndex.

Many of the components rely on integrations with other libraries. So, before using them, we first need to learn how to install these dependencies.

Integrations

Installation

LlamaIndex installation instructions are available as a well-structured overview in their GitHub repository. This might be a bit overwhelming at first, but the installation commands generally follow an easy-to-remember format:

pip install llama-index-{component-type}-{framework-name}

Let’s try installing the dependencies for an LLM and an embedding component that use the Hugging Face Inference API as the framework.

pip install llama-index-llms-huggingface-api llama-index-embeddings-huggingface-api

Usage

Once installed, we can use the component in our workflow. The usage patterns are outlined in the documentation, while framework-specific versions are shown in the GitHub repository. Below is an example of using the Hugging Face Inference API for an LLM component.

from llama_index.llms.huggingface_api import HuggingFaceInferenceAPI

llm = HuggingFaceInferenceAPI(
    model_name="meta-llama/Meta-Llama-3-8B-Instruct",
    temperature=0.7,
    max_tokens=100,
    token="hf_xxx",
)

llm.complete("Hello, how are you?")
# I am good, how can I help you today?
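The same component also supports multi-turn chat. Below is a minimal sketch (reusing the llm object created above); the message contents are just placeholders:

from llama_index.core.llms import ChatMessage

messages = [
    ChatMessage(role="system", content="You are a helpful assistant."),
    ChatMessage(role="user", content="Hello, how are you?"),
]
# multi-turn chat instead of a single completion
response = llm.chat(messages)
print(response)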

Now, let’s dive a bit deeper into the components and see how you can use them to create a RAG pipeline.

Creating a RAG pipeline using components

LLMs are trained on enormous bodies of data to learn general knowledge. However, they may not be trained on relevant and up-to-date data. Retrieval-Augmented Generation (RAG) solves this problem by adding your data to the data LLMs already have access to.

Basic RAG Pipeline

There are five key stages within RAG, which in turn will be a part of most larger applications you build. These are:

  1. Loading: this refers to getting your data from where it lives — whether it’s text files, PDFs, another website, a database, or an API — into your workflow. LlamaHub provides hundreds of integrations to choose from.
  2. Indexing: this means creating a data structure that allows for querying the data. For LLMs, this nearly always means creating vector embeddings, which are numerical representations of the meaning of the text data. Indexing can also refer to numerous other metadata strategies that make it easy to accurately find contextually relevant data based on properties.
  3. Storing: once your data is indexed you will want to store your index, as well as other metadata, to avoid having to re-index it.
  4. Querying: for any given indexing strategy there are many ways you can utilize LLMs and LlamaIndex data structures to query, including sub-queries, multi-step queries and hybrid strategies.
  5. Evaluation: a critical step in any flow is checking how effective it is relative to other strategies, or when you make changes. Evaluation provides objective measures of how accurate, faithful and fast your responses to queries are.

Next, let’s see how we can reproduce these stages using components.

Loading and embedding documents

As mentioned before, LlamaIndex can work on top of your own data; however, before accessing data, we need to load it. There are three main ways to load data into LlamaIndex:

  1. SimpleDirectoryReader: A built-in loader for various file types from a local directory.
  2. LlamaParse: LlamaIndex’s official tool for PDF parsing, available as a managed API.
  3. LlamaHub: A registry of hundreds of data loading libraries to ingest data from any source.

Get familiar with LlamaHub loaders and LlamaParse parser for more complex data sources.

The easiest way to load data is with SimpleDirectoryReader. It can load various file types from a folder and turn them into Document objects that LlamaIndex can work with. Below, we use the SimpleDirectoryReader to load the data from a folder.

from llama_index.core import SimpleDirectoryReader

reader = SimpleDirectoryReader(input_dir="path/to/directory")
documents = reader.load_data()
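
Readers from LlamaHub follow the same pattern. As an illustrative sketch, loading a web page would look roughly like this (assuming the llama-index-readers-web integration is installed; the URL is a placeholder):

from llama_index.readers.web import SimpleWebPageReader

# load a web page and convert it into Document objects
documents = SimpleWebPageReader(html_to_text=True).load_data(
    urls=["https://example.com"]
)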

After loading our documents, we need to break them into smaller pieces called Node objects. A Node is just a chunk of text from the original document that’s easier for the AI to work with, while it still has references to the original Document object.

To create these nodes, we use the IngestionPipeline along with two simple transformations:

  1. SentenceSplitter: Breaks the document into smaller pieces of text by splitting it into sentences
  2. HuggingFaceInferenceAPIEmbedding: Turns each piece into numbers (embeddings) that the LLM can understand.

This process helps us organise our documents in a way that’s more useful for searching and analysis.

from llama_index.core import Document
from llama_index.embeddings.huggingface_api import HuggingFaceInferenceAPIEmbedding
from llama_index.core.node_parser import SentenceSplitter
from llama_index.core.ingestion import IngestionPipeline, IngestionCache

# create the pipeline with transformations
pipeline = IngestionPipeline(
    transformations=[
        SentenceSplitter(chunk_size=25, chunk_overlap=0),
        HuggingFaceInferenceAPIEmbedding("BAAI/bge-small-en-v1.5"),
    ]
)

# run the pipeline
nodes = pipeline.run(documents=[Document.example()])

To save time and compute, **LlamaIndex caches the results of the ingestion pipeline** so you don't need to load and embed the same documents twice.
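
A minimal sketch of reusing that cached work across runs, assuming a local directory path of your choosing for the pipeline state:

# persist the pipeline's cache to disk after a run
pipeline.persist("./pipeline_storage")

# later: rebuild the pipeline with the same transformations and restore the cache
new_pipeline = IngestionPipeline(
    transformations=[
        SentenceSplitter(chunk_size=25, chunk_overlap=0),
        HuggingFaceInferenceAPIEmbedding("BAAI/bge-small-en-v1.5"),
    ]
)
new_pipeline.load("./pipeline_storage")

# re-running on the same documents now hits the cache instead of re-embedding
nodes = new_pipeline.run(documents=[Document.example()])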

Storing and indexing documents

After creating our Node objects we need to index them to make them searchable but before we can do that, we need a place to store our data.

Within LlamaIndex, we can use a StorageContext to handle many different storage types, such as vector stores, document stores, and index stores. For each of these storage types, there are integrations with different storage backends. An overview of the available storage types and their integrations can be found in the LlamaIndex documentation.

We can set up a StorageContext ourselves, or let LlamaIndex create one for us when creating a search index. When we save the StorageContext, it creates files that store all the important information about our data, which makes it easy to persist and load later.
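
As an illustrative sketch of setting one up ourselves, a vector store backend such as Chroma can be plugged into the StorageContext. This assumes chromadb and the llama-index-vector-stores-chroma integration are installed; the path and collection name are arbitrary:

import chromadb
from llama_index.core import StorageContext
from llama_index.vector_stores.chroma import ChromaVectorStore

# create (or open) a local Chroma collection to hold the embeddings
db = chromadb.PersistentClient(path="./chroma_db")
chroma_collection = db.get_or_create_collection("my_collection")
vector_store = ChromaVectorStore(chroma_collection=chroma_collection)

# hand the vector store to LlamaIndex via a StorageContext
storage_context = StorageContext.from_defaults(vector_store=vector_store)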

Next, let’s see how to create an index using the VectorStoreIndex and persist it to disk. We also need to provide an embedding model, which should be the same one used during ingestion.

from llama_index.core import VectorStoreIndex
from llama_index.embeddings.huggingface_api import HuggingFaceInferenceAPIEmbedding

embed_model = HuggingFaceInferenceAPIEmbedding("BAAI/bge-small-en-v1.5")
# build the index from the Node objects produced by the ingestion pipeline
index = VectorStoreIndex(nodes=nodes, embed_model=embed_model)
# persist the index and its storage context to disk
index.storage_context.persist("path/to/vector/store")

We can load our index again using files that were created when saving the StorageContext.

from llama_index.core import StorageContext, load_index_from_storage
from llama_index.embeddings.huggingface_api import HuggingFaceInferenceAPIEmbedding

embed_model = HuggingFaceInferenceAPIEmbedding("BAAI/bge-small-en-v1.5")
storage_context = StorageContext.from_defaults(persist_dir="path/to/vector/store")
index = load_index_from_storage(storage_context, embed_model=embed_model)

Great! Now that we can save and load our index easily, let’s explore how to query it in different ways.

Querying a VectorStoreIndex with prompts and LLMs

Before we can query our index, we need to convert it to a query interface. The most common conversion options are as_retriever, as_query_engine, and as_chat_engine.

We’ll focus on the query engine since it is more common for agent-like interactions. We also pass in an LLM to the query engine to use for the response.

from llama_index.llms.huggingface_api import HuggingFaceInferenceAPI

llm = HuggingFaceInferenceAPI(model_name="meta-llama/Meta-Llama-3-8B-Instruct")
query_engine = index.as_query_engine(llm=llm)
query_engine.query("What is the meaning of life?")
# the meaning of life is 42
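
For comparison, converting the index into a retriever returns the matching nodes directly, without asking the LLM to write an answer. A small sketch reusing the same index:

# retrieve the top-3 most similar nodes for a query
retriever = index.as_retriever(similarity_top_k=3)
results = retriever.retrieve("What is the meaning of life?")
for node_with_score in results:
    print(node_with_score.score, node_with_score.node.get_content()[:100])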

Under the hood, the query engine doesn’t only use the LLM to answer the question, but also uses a ResponseSynthesizer as the strategy for turning the retrieved context into a response. Once again, this is fully customisable, but three response modes work well out of the box: refine, compact (the default), and tree_summarize.

Take fine-grained control of your query workflows with the [low-level composition API](https://docs.llamaindex.ai/en/stable/module_guides/deploying/query_engine/usage_pattern/#low-level-composition-api). This API lets you customize and fine-tune every step of the query process to match your exact needs.
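
As a rough sketch of what that can look like, the retriever and response synthesizer can be assembled by hand (reusing the index and llm from above; tree_summarize is just one possible response mode):

from llama_index.core import get_response_synthesizer
from llama_index.core.query_engine import RetrieverQueryEngine

# pick a response strategy explicitly instead of the default "compact" mode
synthesizer = get_response_synthesizer(llm=llm, response_mode="tree_summarize")

# combine a retriever and the synthesizer into a query engine
query_engine = RetrieverQueryEngine(
    retriever=index.as_retriever(similarity_top_k=3),
    response_synthesizer=synthesizer,
)
query_engine.query("What is the meaning of life?")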

Language models won’t always perform in predictable ways, so we can’t be sure that the answer we get is always correct. We can deal with this by evaluating the quality of the answer.

Evaluation and observability

LlamaIndex has built-in tools to evaluate the quality of an answer. These evaluators use LLMs as judges to assess the answer according to various criteria. The three main evaluators are FaithfulnessEvaluator (is the answer supported by the retrieved context?), AnswerRelevancyEvaluator (does the answer actually address the question?), and CorrectnessEvaluator (does the answer match a reference answer?).

from llama_index.core.evaluation import FaithfulnessEvaluator

# reuse the query_engine and llm objects from the previous section
evaluator = FaithfulnessEvaluator(llm=llm)
response = query_engine.query(
    "What battles took place in New York City in the American Revolution?"
)
eval_result = evaluator.evaluate_response(response=response)
print(eval_result.passing)  # True if the answer is grounded in the retrieved context
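
The other evaluators follow the same pattern. For example, a sketch of checking answer relevancy with the same llm and response:

from llama_index.core.evaluation import AnswerRelevancyEvaluator

# judge whether the answer actually addresses the question
relevancy_evaluator = AnswerRelevancyEvaluator(llm=llm)
relevancy_result = relevancy_evaluator.evaluate_response(
    query="What battles took place in New York City in the American Revolution?",
    response=response,
)
print(relevancy_result.score)  # numeric relevancy score assigned by the LLM judge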

Even if we don’t evaluate every response directly, we can still gain insight into how the pipeline behaves through observability, i.e. by tracing what happens during each query.

Tracing in LlamaIndex can be done through LlamaTrace, a hosted observability platform built on Arize Phoenix. To use it, create an account at https://llamatrace.com/login, generate an API key, and put it in the PHOENIX_API_KEY variable below.

To install the integration package, run pip install -U llama-index-callbacks-arize-phoenix.

import llama_index.core
import os

# send traces to LlamaTrace via the Arize Phoenix integration
PHOENIX_API_KEY = "<PHOENIX_API_KEY>"
os.environ["OTEL_EXPORTER_OTLP_HEADERS"] = f"api_key={PHOENIX_API_KEY}"
llama_index.core.set_global_handler(
    "arize_phoenix", endpoint="https://llamatrace.com/v1/traces"
)

Want to learn more about components and how to use them? Continue your journey with the [Components Guides](https://docs.llamaindex.ai/en/stable/module_guides/) or the [Guide on RAG](https://docs.llamaindex.ai/en/stable/understanding/rag/).

We have seen how to use components to create a QueryEngine. Now, let’s see how we can use that same QueryEngine as a tool for an agent!
