Components are building blocks for agentic workflows. LlamaIndex has many components, but instead of going over each one individually, we will take a look at the components that are used to create a QueryEngine.
We will focus on those components because they are most relevant for building agentic workflows in LlamaIndex.
Many of the components rely on integrations with other libraries. So, before using them, we first need to learn how to install these dependencies.
Most frameworks add their installation guide to their main documentation, but LlamaIndex keeps a well-structured overview in its GitHub repository. This might be a bit overwhelming at first, but the installation commands generally follow an easy-to-remember format:
pip install llama-index-{component-type}-{framework-name}
Let’s try installing the dependencies for an LLM and an embedding component, using the Hugging Face Inference API as the framework.
pip install llama-index-llms-huggingface-api llama-index-embeddings-huggingface-api
Once installed, we can use the component in our workflow. The usage patterns are outlined in the documentation, and framework-specific versions are also shown in the GitHub repository. Below is an example of using the Hugging Face Inference API for an LLM component.
from llama_index.llms.huggingface_api import HuggingFaceInferenceAPI
llm = HuggingFaceInferenceAPI(
    model_name="meta-llama/Meta-Llama-3-8B-Instruct",
    temperature=0.7,
    max_tokens=100,
    token="<your-token>",  # Optional
)
llm.complete("Hello, how are you?")
# I am good, how can I help you today?
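The llm object also exposes the rest of LlamaIndex’s common LLM interface. Below is a rough sketch (assuming the chosen model supports chat and streaming through the Inference API; the prompts are just placeholders):

from llama_index.core.llms import ChatMessage

# multi-turn chat instead of a single completion
response = llm.chat([
    ChatMessage(role="system", content="You are a helpful assistant."),
    ChatMessage(role="user", content="Hello, how are you?"),
])
print(response)

# stream the completion token by token as it is generated
for chunk in llm.stream_complete("Tell me a short story."):
    print(chunk.delta, end="")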
Now, let’s dive a bit deeper into the components and see how you can use them to create a QueryEngine.
As mentioned before, LlamaIndex can work on top of your own data; however, before accessing data, we need to load it. There are three main ways to load data into LlamaIndex:
- SimpleDirectoryReader: A built-in loader for various file types from a local directory.
- LlamaParse: LlamaIndex’s official tool for PDF parsing, available as a managed API (sketched below).
- LlamaHub: A registry of hundreds of data loading libraries to ingest data from any source.

Get familiar with LlamaHub loaders and the LlamaParse parser for more complex data sources.
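As a quick illustration of the managed option, parsing a PDF with LlamaParse could look like the sketch below. It assumes you have installed the llama-parse package and have a LlamaCloud API key; the key and file path are placeholders.

from llama_parse import LlamaParse

# managed PDF parsing via LlamaCloud (requires an API key)
parser = LlamaParse(api_key="llx-...", result_type="markdown")
documents = parser.load_data("path/to/report.pdf")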
The easiest way to load data is with SimpleDirectoryReader. It can load various file types from a folder and turn them into Document objects that LlamaIndex can work with.
from llama_index.core import SimpleDirectoryReader
reader = SimpleDirectoryReader(input_dir="path/to/directory")
documents = reader.load_data()
After loading our documents, we need to break them into smaller pieces called Node objects. A Node is just a chunk of text from the original document that is easier for the AI to work with, while it still retains a reference to the original Document object.
To create these nodes, we use the IngestionPipeline with two simple transformations:
- SentenceSplitter: Breaks the document into smaller chunks (here, a chunk_size of 25 tokens) while respecting sentence boundaries.
- HuggingFaceInferenceAPIEmbedding: Turns each chunk into numbers (embeddings) that the AI can understand better.

This process helps us organize our documents in a way that’s more useful for searching and analysis.
from llama_index.core import Document
from llama_index.embeddings.huggingface_api import HuggingFaceInferenceAPIEmbedding
from llama_index.core.node_parser import SentenceSplitter
from llama_index.core.ingestion import IngestionPipeline
# create the pipeline with transformations
pipeline = IngestionPipeline(
    transformations=[
        SentenceSplitter(chunk_size=25, chunk_overlap=0),
        HuggingFaceInferenceAPIEmbedding(model_name="BAAI/bge-small-en-v1.5"),
    ]
)
# run the pipeline
nodes = pipeline.run(documents=[Document.example()])
To save time and compute, LlamaIndex caches the results of the ingestion pipeline so you don’t need to load and embed the same documents twice.
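If you want that cache to survive between runs, the pipeline can be persisted to disk and reloaded later. Here is a minimal sketch, assuming a local ./pipeline_storage directory as the cache location:

# save the pipeline's cache so already-processed documents are skipped next time
pipeline.persist("./pipeline_storage")

# later (after recreating the pipeline with the same transformations),
# restore the cache and re-run without re-embedding unchanged documents
pipeline.load("./pipeline_storage")
nodes = pipeline.run(documents=[Document.example()])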
After creating our Node objects, we need to index them to make them searchable, but before we can do that, we need a place to store our data.

Within LlamaIndex, we can use a StorageContext to handle all the storage.
It supports various stores for different purposes:

- DocumentStore: Stores ingested documents (Node objects) for keyword search
- IndexStore: Stores index metadata
- VectorStore: Stores embedding vectors for semantic search
- PropertyGraphStore: Stores knowledge graphs for graph-based queries
- ChatStore: Stores and organizes chat message history

We can set up a StorageContext ourselves, or let LlamaIndex create one for us when creating a search index.
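For example, to back the VectorStore with a real database, you can plug a vector store integration into the StorageContext. The sketch below uses Chroma and assumes you have installed the integration (pip install llama-index-vector-stores-chroma chromadb); the ./chroma_db path and collection name are placeholders.

import chromadb
from llama_index.core import StorageContext
from llama_index.vector_stores.chroma import ChromaVectorStore

# persistent Chroma collection that will hold the embeddings
db = chromadb.PersistentClient(path="./chroma_db")
collection = db.get_or_create_collection("my_documents")
vector_store = ChromaVectorStore(chroma_collection=collection)

# hand the vector store to LlamaIndex through a StorageContext
storage_context = StorageContext.from_defaults(vector_store=vector_store)

Passing this storage_context when building the index (for example, VectorStoreIndex(nodes=nodes, storage_context=storage_context, embed_model=embed_model)) writes the embeddings to Chroma instead of the default in-memory store.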
When we save the StorageContext, it creates files that store all the important information about our data.

Now, let’s see how to create a VectorStoreIndex and save it to disk. We also need to provide an embedding model, which should be the same as the one used during ingestion.
from llama_index.core import VectorStoreIndex
from llama_index.embeddings.huggingface_api import HuggingFaceInferenceAPIEmbedding
embed_model = HuggingFaceInferenceAPIEmbedding(model_name="BAAI/bge-small-en-v1.5")
index = VectorStoreIndex(nodes=nodes, embed_model=embed_model)
index.storage_context.persist("path/to/vector/store")
We can load our index again using the files that were created when saving the StorageContext.
from llama_index.core import StorageContext, load_index_from_storage
from llama_index.embeddings.huggingface_api import HuggingFaceInferenceAPIEmbedding
embed_model = HuggingFaceInferenceAPIEmbedding(model_name="BAAI/bge-small-en-v1.5")
storage_context = StorageContext.from_defaults(persist_dir="path/to/vector/store")
index = load_index_from_storage(storage_context, embed_model=embed_model)
Great! Now that we can save and load our index easily, let’s explore how to query it in different ways.
Before querying our index, we need to convert it to a query interface. The most common options are:
- as_retriever: For basic document retrieval, returning a list of NodeWithScore objects with similarity scores
- as_query_engine: For single question-answer interactions, returning a written response
- as_chat_engine: For conversational interactions that maintain memory across multiple messages, returning a chat history

We’ll focus on the query engine since it is more common for agent-like interactions; a short sketch of the other two interfaces follows the query engine example below. We also pass in an LLM to the query engine to use for the response.
from llama_index.llms.huggingface_api import HuggingFaceInferenceAPI
llm = HuggingFaceInferenceAPI(model_name="meta-llama/Meta-Llama-3-8B-Instruct")
query_engine = index.as_query_engine(llm=llm)
query_engine.query("What is the meaning of life?")
# the meaning of life is 42
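For comparison, here is a hedged sketch of the other two interfaces, reusing the index and the llm component created earlier (the similarity_top_k value and the queries are purely illustrative):

# retriever: returns NodeWithScore objects instead of a written answer
retriever = index.as_retriever(similarity_top_k=3)
for node_with_score in retriever.retrieve("What is the meaning of life?"):
    print(node_with_score.score, node_with_score.node.text[:80])

# chat engine: keeps conversation history across calls
chat_engine = index.as_chat_engine(llm=llm)
print(chat_engine.chat("What is the meaning of life?"))
print(chat_engine.chat("Can you elaborate on that?"))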
Under the hood, the query engine doesn’t only use the LLM to answer the question, but also uses a ResponseSynthesizer as a strategy to process the response.
Once again, this is fully customisable but there are three main strategies that work well out of the box:
- refine: Create and refine an answer by sequentially going through each retrieved text chunk. This makes a separate LLM call per Node/retrieved chunk.
- compact (default): Similar to refine, but concatenates the chunks beforehand, resulting in fewer LLM calls.
- tree_summarize: Create a detailed answer by going through each retrieved text chunk and building a tree structure of the answer.

Take fine-grained control of your query workflows with the low-level composition API. This API lets you customize and fine-tune every step of the query process to match your exact needs.
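As a rough sketch of both options, you can select a strategy through the response_mode argument on the high-level API, or assemble the query engine yourself with the low-level composition API (the similarity_top_k value and the query are illustrative):

from llama_index.core import get_response_synthesizer
from llama_index.core.query_engine import RetrieverQueryEngine
from llama_index.core.response_synthesizers import ResponseMode

# high-level: pick the response strategy directly on the query engine
query_engine = index.as_query_engine(llm=llm, response_mode=ResponseMode.TREE_SUMMARIZE)

# low-level: compose the retriever and response synthesizer by hand
retriever = index.as_retriever(similarity_top_k=3)
synthesizer = get_response_synthesizer(llm=llm, response_mode=ResponseMode.REFINE)
query_engine = RetrieverQueryEngine(retriever=retriever, response_synthesizer=synthesizer)
response = query_engine.query("What is the meaning of life?")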
We have seen how to use components to create a QueryEngine. Now, let’s see how we can use that same QueryEngine as a tool for an agent!