Let’s recap the definition of an Agent:
An Agent is a system that leverages an AI model to interact with its environment in order to achieve a user-defined objective. It combines reasoning, planning, and the execution of actions (often via external tools) to fulfill tasks.
Within LlamaIndex, we build data agents with two core components: a reasoning loop and a set of tool abstractions.
Defining a set of Tools is similar to defining any API interface.
LlamaIndex allows us to define a Tool as well as a ToolSpec containing a series of custom functions under the hood.
When using an agent or LLM with function calling, the tool selected (and the arguments written for that tool) depend strongly on the tool's name and the description of its purpose and arguments.
Let’s explore the main types of tools in LlamaIndex:
- FunctionTool: Convert any Python function into a tool that an agent can use. It automatically figures out how the function works.
- QueryEngineTool: A tool that lets agents use query engines. Since agents are built on query engines, they can also use other agents as tools.
- Toolspecs: Sets of tools created by the community to work with specific services, like Gmail.
- Utility Tools: Special tools that help handle large amounts of data from other tools.

We will go over each of these in detail one by one.
A function tool is a simple wrapper around any existing function. We can choose to pass a sync or async function to the tool. Additionally, we can choose to name and describe the tool as we want.
from llama_index.core.tools import FunctionTool
def get_weather(location: str) -> str:
"""Usfeful for getting the weather for a given location."""
...
tool = FunctionTool.from_defaults(
    get_weather,
    # async_fn=aget_weather, name="...", description="...",
)
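A tool can also be invoked directly, which is handy for testing before handing it to an agent. This is a minimal sketch; it assumes get_weather is actually implemented to return a weather string for the given location.

# Direct invocation for testing; an agent would normally call this for us
print(tool.call("New York"))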
The QueryEngine we defined in the previous unit can be turned into a tool using the QueryEngineTool class.
from llama_index.core import StorageContext, load_index_from_storage
from llama_index.core.tools import QueryEngineTool
from llama_index.llms.huggingface_api import HuggingFaceInferenceAPI
from llama_index.embeddings.huggingface_api import HuggingFaceInferenceAPIEmbedding
embed_model = HuggingFaceInferenceAPIEmbedding(model_name="BAAI/bge-small-en-v1.5")
storage_context = StorageContext.from_defaults(persist_dir="path/to/vector/store")
index = load_index_from_storage(storage_context, embed_model=embed_model)
llm = HuggingFaceInferenceAPI(model_name="meta-llama/Meta-Llama-3-8B-Instruct")
query_engine = index.as_query_engine(llm=llm)
tool = QueryEngineTool.from_defaults(
    query_engine,
    # name="...", description="..."
)
Custom Tools and ToolSpecs are created by the community and shared on the LlamaHub.
You can think of ToolSpecs as bundles of tools meant to be used together. Usually these cover useful tools across a single interface/service, like Gmail.
As with components, toolspecs need to be installed before use, following a familiar pattern:
pip install llama-index-tools-{toolspec_name}
Let’s install a toolspec that works with Google services.
pip install llama-index-tools-google
And now we can load the toolspec and convert it to a list of tools.
from llama_index.tools.google import GmailToolSpec
tool_spec = GmailToolSpec()
tool_spec_list = tool_spec.to_tool_list()
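Each entry in the resulting list is a regular tool with its own metadata, so we can inspect what the spec provides. The exact tool names and descriptions depend on the installed GmailToolSpec version.

# Print the name and description of each tool bundled in the spec
for tool in tool_spec_list:
    print(tool.metadata.name, tool.metadata.description)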
Oftentimes, directly querying an API can return a massive volume of data, which on its own may overflow the context window of the LLM (or at the very least unnecessarily increase the number of tokens that you are using). Let’s walk through our two main utility tools below.
- OnDemandLoaderTool: This tool turns any existing LlamaIndex data loader (BaseReader class) into a tool that an agent can use. The tool can be called with all the parameters needed to trigger load_data from the data loader, along with a natural language query string. During execution, we first load data from the data loader, index it (for instance with a vector store), and then query it “on-demand”. All three of these steps happen in a single tool call; see the sketch after this list.
- LoadAndSearchToolSpec: The LoadAndSearchToolSpec takes in any existing Tool as input. As a tool spec, it implements to_tool_list, and when that function is called, two tools are returned: a load tool and a search tool. The load tool execution would call the underlying Tool and then index the output (by default with a vector index). The search tool execution would take in a query string as input and call the underlying index.

You can find other utility tools on the LlamaHub.
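Here is a minimal sketch of the OnDemandLoaderTool pattern. It assumes the Wikipedia reader as an example loader (installed separately with pip install llama-index-readers-wikipedia); the tool name and description are placeholders you should adapt.

from llama_index.core.tools.ondemand_loader_tool import OnDemandLoaderTool
from llama_index.readers.wikipedia import WikipediaReader

# Wrap a data loader so that loading, indexing, and querying
# all happen inside a single on-demand tool call
wiki_tool = OnDemandLoaderTool.from_defaults(
    WikipediaReader(),
    name="wikipedia_lookup",
    description="Loads and queries Wikipedia pages on demand.",
)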
LlamaIndex supports three main types of reasoning agents:

- Function Calling Agents: These work with AI models that can call specific functions.
- ReAct Agents: These can work with any AI that has a chat or text endpoint and deal with complex reasoning tasks.
- Advanced Custom Agents: These use more complex methods to deal with more complex tasks and workflows.
[!NOTE] Find more information on advanced agents on LlamaIndex GitHub
An agent is initialized from a set of Tools. Here’s an example of instantiating a ReAct agent from a set of Tools.
from llama_index.core.tools import FunctionTool
from llama_index.llms.huggingface_api import HuggingFaceInferenceAPI
from llama_index.core.agent import ReActAgent
# define sample Tool
def multiply(a: int, b: int) -> int:
    """Multiply two integers and return the resulting integer"""
    return a * b
multiply_tool = FunctionTool.from_defaults(fn=multiply)
# initialize llm
llm = HuggingFaceInferenceAPI(model_name="meta-llama/Meta-Llama-3-8B-Instruct")
# initialize ReAct agent
agent = ReActAgent.from_tools([multiply_tool], llm=llm, verbose=True)
Similarly, we can use the AgentRunner to automatically pick the best agent reasoning flow depending on the LLM.
from llama_index.core.agent import AgentRunner
agent_runner = AgentRunner.from_llm(llm, verbose=True)
An agent supports both chat and query endpoints, via query() and chat() respectively.
response = agent.query("What is 2 times 2?")
print(response)
response = agent.chat("What is 2 times 2?")
print(response)
Now that we’ve covered the basics, let’s take a look at how we can use tools in our agents.
It is easy to wrap a QueryEngine as a tool for an agent. When doing so, we need to define a name and description within the ToolMetadata to improve the agent’s reasoning context.
Let’s see how to load in a QueryEngineTool using the QueryEngine we created in the component section.
from llama_index.core.tools import QueryEngineTool, ToolMetadata
query_engine = index.as_query_engine(similarity_top_k=3) # as shown in the previous section
query_engine_tool = QueryEngineTool(
    query_engine=query_engine,
    metadata=ToolMetadata(
        name="a specific name",
        description="a specific description",
    ),
    return_direct=False,
)
query_engine_agent = ReActAgent.from_tools([query_engine_tool], llm=llm, verbose=True)
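The wrapped agent can then be used like any other; the question below is a hypothetical placeholder for whatever your indexed documents actually cover.

# Hypothetical query; replace with a question your index can answer
response = query_engine_agent.chat("Summarize the key points of the indexed documents")
print(response)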
Agents in LlamaIndex can directly be used as tools for other agents by loading them as a QueryEngineTool.
from llama_index.core.tools import QueryEngineTool, ToolMetadata
# query_engine_agent as defined in the previous section
query_engine_agent_tool = QueryEngineTool(
    query_engine=query_engine_agent,
    metadata=ToolMetadata(
        name="a specific name",
        description="a specific description",
    ),
)
multi_agent = ReActAgent.from_tools([query_engine_agent_tool], llm=llm, verbose=True)
There is a lot more to discover about agents and tools in LlamaIndex within the Agent Guides.
Now that we understand the basics of agents and tools in LlamaIndex, let’s see how they work together with workflows!