What are Tools?


One crucial aspect of AI Agents is their ability to take actions.

In this section, we’ll learn what Tools are, how to design them effectively, and how to integrate them into your Agent via the System Message.

By giving your Agent the right Tools—and clearly describing how those Tools work—you can dramatically increase what your AI can accomplish. Let’s dive in!

What are AI Tools?

A Tool is a function given to the LLM. This function should fulfill a clear objective.

Here are some commonly used tools in AI agents:

| Tool | Description |
|------|-------------|
| Web Search | Allows the agent to fetch up-to-date information from the internet. |
| Image Generation | Creates images based on text descriptions. |
| Retrieval | Retrieves information from an external source. |
| API Interface | Interacts with an external API (GitHub, YouTube, Spotify, etc.). |

Those are only examples; in fact, you can create a tool for any use case!

A good tool should be something that complements the power of an LLM.

For instance, if you need to perform arithmetic, giving a calculator tool to your LLM will provide better results than relying on the native capabilities of the model.

Furthermore, LLMs predict the completion of a prompt based on their training data, which means that their internal knowledge only covers events that happened before their training. This highlights the need to access up-to-date data through tools.

For instance, if you ask an LLM for today's weather without giving it a web search tool, it will likely hallucinate and give you a made-up weather report.


How do we give tools to an LLM?

The answer may seem overwhelming, but it is actually simple: we provide textual descriptions of all the available tools in the system prompt.

Here is a dummy example:

System prompt for tools
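To make this concrete, here is a minimal, illustrative sketch of such a prompt; the template wording and the {{tool_descriptions}} placeholder are assumptions for illustration, not the course's verbatim prompt:

system_message = """You are an AI assistant that can use tools to answer questions.

You have access to the following tools:
{{tool_descriptions}}

When a tool can help, respond with the tool name and the arguments to pass to it."""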

The key here is to write the most accurate textual description possible of:

  1. What does this tool do?
  2. What inputs does it expect?

If that feels too theoretical, let’s understand it through a concrete example:

Disclaimer: This example implementation is fictional but closely resembles real implementations in most libraries.

class Tool:
    """
    A class representing a reusable piece of code (Tool).
    
    Attributes:
        name (str): Name of the tool.
        description (str): A textual description of what the tool does.
        func (callable): The function this tool wraps.
        arguments (list): A list of (argument name, argument type) pairs describing the expected inputs.
        outputs (str or list): The return type(s) of the wrapped function.
    """
    def __init__(self, 
                 name: str, 
                 description: str, 
                 func: callable, 
                 arguments: list,
                 outputs: str):
        self.name = name
        self.description = description
        self.func = func
        self.arguments = arguments
        self.outputs = outputs

    def to_string(self) -> str:
        """
        Return a string representation of the tool, 
        including its name, description, arguments, and outputs.
        """
        args_str = ", ".join([
            f"{arg_name}: {arg_type}" for arg_name, arg_type in self.arguments
        ])
        
        return (
            f"Tool Name: {self.name},"
            f" Description: {self.description},"
            f" Arguments: {args_str},"
            f" Outputs: {self.outputs}"
        )

    def __call__(self, *args, **kwargs):
        """
        Invoke the underlying function (callable) with provided arguments.
        """
        return self.func(*args, **kwargs)

This is a Python implementation of the Tool abstraction we just discussed.

It may seem complicated, but it's not. We simply define a Tool class that includes:

- name: the name of the tool
- description: a brief description of what the tool does
- func: the function the tool wraps
- arguments: the inputs it expects, as (name, type) pairs
- outputs: the expected output type(s)

Now let’s create a Tool.
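As a sketch of how the class can be used directly (the calculator function here is the same multiplication example we build below with a decorator; the manual wiring is only for illustration):

def calculator(a: int, b: int) -> int:
    """Multiply two integers."""
    return a * b

calculator_tool = Tool(
    name="calculator",
    description="Multiply two integers.",
    func=calculator,
    arguments=[("a", "int"), ("b", "int")],
    outputs="int",
)

print(calculator_tool.to_string())

Listing the arguments and outputs by hand like this is error-prone; reading them automatically from the function's signature and docstring is exactly what the decorator approach below does.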

For those interested, here’s the code to declare a Tool using a decorator.

(Optional) The Decorator Code

import inspect

def tool(func):
    """
    A decorator that creates a Tool instance from the given function.
    """
    # Get the function signature
    signature = inspect.signature(func)
    
    # Extract (param_name, param_annotation) pairs for inputs
    arguments = []
    for param in signature.parameters.values():
        annotation_name = (
            param.annotation.__name__ 
            if hasattr(param.annotation, '__name__') 
            else str(param.annotation)
        )
        arguments.append((param.name, annotation_name))
    
    # Determine the return annotation
    return_annotation = signature.return_annotation
    if return_annotation is inspect._empty:
        outputs = "No return annotation"
    else:
        outputs = (
            return_annotation.__name__ 
            if hasattr(return_annotation, '__name__') 
            else str(return_annotation)
        )
    
    # Use the function's docstring as the description (default if None)
    description = func.__doc__ or "No description provided."
    
    # The function name becomes the Tool name
    name = func.__name__
    
    # Return a new Tool instance
    return Tool(
        name=name, 
        description=description, 
        func=func, 
        arguments=arguments, 
        outputs=outputs
    )

One of the most common tools used in demos is the calculator.
Here, we present a simplified version that only multiplies two integers.

Function inputs: a (int), b (int)
Returns: int

We provide the textual description of this function as a docstring,
so let's see what the resulting tool description looks like!

Reminder: This textual description is what we want the LLM to know about the tool.

@tool
def calculator(a: int, b: int) -> int:
    """Multiply two integers."""
    return a * b

print(calculator.to_string())

This outputs the following text:

Tool Name: calculator, Description: Multiply two integers., Arguments: a: int, b: int, Outputs: int

Based on this textual description, the LLM knows that it can call this tool as calculator(a, b).

Let's do a quick sanity check that the tool works properly outside of an agent:

result = calculator(5, 6)
print(f"Call result: {result}")

Output:

Call result: 30

The function works as expected, and we now have an implementation that lets the Agent know, in text form, what tools it has at its disposal.

We'll learn more about how an Agent can actually call this tool itself in the Actions section.

The tool description is then injected into the system prompt. Taking the same dummy example, here is what the prompt would look like after replacing the placeholder variable:

System prompt for tools
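Reusing the illustrative template from earlier (the template wording is an assumption, not the verbatim course prompt), the injection itself can be as simple as concatenating each tool's to_string() output:

# 'calculator' is the Tool instance created by the @tool decorator above
tools = [calculator]
tool_descriptions = "\n".join(tool.to_string() for tool in tools)

system_message = f"""You are an AI assistant that can use tools to answer questions.

You have access to the following tools:
{tool_descriptions}

When a tool can help, respond with the tool name and the arguments to pass to it."""

For the calculator above, the injected line is exactly the string we printed earlier: Tool Name: calculator, Description: Multiply two integers., Arguments: a: int, b: int, Outputs: int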

AI Tools play a crucial role in enhancing the capabilities of AI agents.

To summarize, we learned:

- What Tools are: functions that give LLMs extra capabilities, such as performing calculations or accessing external data.
- How to define a Tool: by providing a clear textual description of its purpose, inputs, and outputs, alongside a callable function.
- Why Tools are essential: they let Agents overcome the limits of their static training data, perform real-time tasks, and take specialized actions.

Now we can move on to the Agent Workflow, where you'll see how an Agent observes, thinks, and acts. This brings together everything we've covered so far and sets the stage for creating your own fully functional AI Agent.
