In the previous Unit, we saw how tools are made available to the agent in the system prompt.
We learned that AI agents are systems that can reason, plan, and interact with their environment.
In this section, we will explore these components conceptually; in the following sections, we will focus on implementing them.
To grasp how agents function, imagine a continuous cycle of thinking, acting, and observing. Let’s break down these actions together:
The three components work together in a continuous loop. To use an analogy from programming, the agent uses a while loop. The loop continues until the objective of the agent has been fulfilled.
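As a rough sketch, that loop might look like the following in Python. All helper names here (`llm_generate`, `parse_action`, `run_tool`) are hypothetical placeholders, not functions from any specific framework:

```python
# A minimal sketch of the Thought-Action-Observation cycle.
# All helper functions and variables here are hypothetical placeholders.
prompt = system_prompt + user_request

while True:
    # Thought + Action: the LLM reasons in text and describes its next action.
    output = llm_generate(prompt, stop=["Observation:"])
    prompt += output

    action = parse_action(output)
    if action.name == "final_answer":
        break  # the objective has been fulfilled, so the loop ends

    # Observation: the tool's result is fed back into the prompt
    # for the next iteration of the loop.
    result = run_tool(action.name, action.arguments)
    prompt += f"Observation: {result}\n"
```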
Visually it looks like this:
Just like with tools, this may seem a bit surprising, but most agents actually rely only on prompting to put these rules in place within the framework.
In a simplified version, our system prompt now looks like this:
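The wording here is only an illustrative sketch of what such a prompt might contain, not a canonical prompt:

```
You are an AI assistant with access to a set of tools.
Answer the user's request by cycling through Thought, Action, and Observation:

Thought: reason about what to do next.
Action: call one of the available tools, using the expected format.
Observation: the result of the Action, which will be appended for you.

Repeat this cycle until you can provide a final answer.
```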
Let’s now break down each step of the process.
Thoughts represent the Agent’s internal reasoning and planning processes.
This utilizes the agent’s Large Language Model (LLM) capacity to analyze information presented in its prompt.
Think of it as the agent’s internal dialogue, where it considers the task at hand and strategizes its approach.
The Agent’s thoughts are responsible for assessing current observations and deciding what the next action(s) should be.
Through this process, the agent can break down complex problems into smaller, more manageable steps, reflect on past experiences, and continuously adjust its plans based on new information.
Here are some examples of common thoughts:
| Type of Thought | Example |
| --- | --- |
| Planning | “I need to break this task into three steps: 1) gather data, 2) analyze trends, 3) generate report” |
| Analysis | “Based on the error message, the issue appears to be with the database connection parameters” |
| Decision Making | “Given the user’s budget constraints, I should recommend the mid-tier option” |
| Problem Solving | “To optimize this code, I should first profile it to identify bottlenecks” |
| Memory Integration | “The user mentioned their preference for Python earlier, so I’ll provide examples in Python” |
| Self-Reflection | “My last approach didn’t work well, I should try a different strategy” |
| Goal Setting | “To complete this task, I need to first establish the acceptance criteria” |
| Prioritization | “The security vulnerability should be addressed before adding new features” |
Note: In the case of models fine-tuned for function-calling, the thought process is optional.
In case you’re not familiar with function-calling, we’ll go into more detail in the Action section.
This concept of thinking before acting originates from the ReAct prompting technique.
ReAct is the concatenation of “Reasoning” and “Acting”.
ReAct is a simple prompting technique that appends “Let’s think step by step” before letting the LLM decode the next tokens.
Prompting the model to think “step by step” encourages the decoding process toward next tokens that generate a plan, rather than a final solution.
After “Let’s think step by step”, the model will decompose the problem into sub-tasks.
This allows the model to consider sub-steps in more detail, which in general leads to fewer errors than generating the final solution directly.
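In practice, this only requires appending the trigger phrase to the prompt. A minimal sketch follows, where `llm_generate` is a hypothetical stand-in for any text-generation call, not a specific library’s API:

```python
question = "If a train travels 60 km in 45 minutes, what is its average speed in km/h?"

# Appending the trigger phrase steers decoding toward producing a plan
# (sub-steps) before the final answer, rather than answering immediately.
prompt = f"{question}\nLet's think step by step."

# answer = llm_generate(prompt)  # hypothetical text-generation call
```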
Note: Some models have been fine-tuned to always include explicit thinking sections (enclosed between the `<think>` and `</think>` special tokens). This is not a purely prompting technique like ReAct, but a training approach.
Actions are the concrete steps an AI agent takes to interact with its environment.
These can range from browsing the web for information to controlling a robot in a physical space.
For instance, consider an agent assisting with customer service - it might retrieve customer data, offer support articles, or transfer issues to human representatives.
There are multiple types of Agents that take actions differently:
| Type of Agent | Description |
| --- | --- |
| JSON Agent | The Action to take is specified in JSON format |
| Code Agent | The Agent writes a code block that is interpreted externally |
| Function-calling Agent | A subcategory of the JSON Agent that has been fine-tuned to generate a new message for each action |
We’ve only touched the surface on actions. In the Action section, we’ll give more information about each agent type.
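As a concrete illustration, a JSON Agent’s output for a weather request might look like the block below. The tool name `get_weather` and its argument are illustrative assumptions, not taken from a specific framework:

```json
{
  "action": "get_weather",
  "action_input": {
    "location": "Paris"
  }
}
```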
Here are some examples of possible Actions:
| Type of Action | Description |
| --- | --- |
| Information Gathering | Performing web searches, querying databases, retrieving documents |
| Tool Usage | Making API calls, running calculations, writing and executing code |
| Environment Interaction | Manipulating digital interfaces, controlling physical robots or devices |
| Communication | Engaging with users through chat, collaborating with other AI agents |
One crucial part of an agent is the ability to STOP generating new tokens when an action is complete, and that is true for all formats of Agent: JSON, code, or function-calling.
The LLM only handles text, and uses it to describe the action it wants to take and the parameters to supply to the tool.
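In practice, frameworks implement this by passing a stop sequence to the generation call. Here is a minimal sketch, again with a hypothetical `llm_generate` helper and `agent_prompt` variable:

```python
# Generation halts as soon as the model emits "Observation:",
# so the real tool result can be inserted instead of a made-up one.
output = llm_generate(
    prompt=agent_prompt,      # hypothetical: the prompt built up so far
    stop=["Observation:"],    # cut off before the model invents tool output
)
```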
Observations are how an AI agent perceives the consequences of its actions.
They provide crucial information that fuels the agent’s thought process and guides future actions.
These observations can take many forms, from reading webpage text to monitoring a robot arm’s position. They can be seen as Tool “logs” that provide textual feedback on the Action’s execution.
| Type of Observation | Example |
| --- | --- |
| System Feedback | Error messages, success notifications, status codes |
| Data Changes | Database updates, file system modifications, state changes |
| Environmental Data | Sensor readings, system metrics, resource usage |
| Response Analysis | API responses, query results, computation outputs |
| Time-based Events | Deadlines reached, scheduled tasks completed |
Observations are added back into the prompt by the framework.
This is why we need to stop generating new tokens as soon as the model decides to take an action. If we don’t stop the generation, the LLM will simply hallucinate an Observation.
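To illustrate, here is a made-up completion showing what can go wrong without a stop sequence. Everything after `Observation:` is invented by the model rather than produced by the tool:

```
Thought: I need to check the current weather in Paris.
Action:
{"action": "get_weather", "action_input": {"location": "Paris"}}
Observation: The weather in Paris is sunny with low temperatures.   <-- hallucinated, not from the tool
```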