Welcome to the first exercise! In this exercise, we will build a simple agentic application for managing a meeting agenda, using plain Python functions as tools to get the current time and the agenda. The goal is to understand the basics of AI agent applications and have fun.
For this use case, we will use two tools:

get_current_time(): This function returns the current time.
get_agenda(): This function returns the agenda for the day.

from datetime import datetime, timedelta
def get_current_time():
    # Return the current local time as a formatted string.
    return datetime.now().strftime("%Y-%m-%d %H:%M:%S")


def get_agenda():
    # Build a hard-coded agenda for today and return it as a single string.
    today_str = datetime.now().strftime("%Y-%m-%d")
    agenda = [
        {
            "datetime": f"{today_str} 10:00:00",
            "title": "Introduction to the course",
            "description": "We will introduce the course and the project.",
        },
        {
            "datetime": f"{today_str} 12:30:00",
            "title": "Meet the student",
            "description": "We will meet the student and discuss the project.",
        },
        {
            "datetime": f"{today_str} 15:00:00",
            "title": "Plan Learning Plan",
            "description": "We will plan the learning plan for the student.",
        },
        {
            "datetime": f"{today_str} 18:30:00",
            "title": "Marking",
            "description": "We will mark the student's work.",
        },
    ]
    agenda_str = "\n".join(
        [
            f"{item['datetime']} - {item['title']}: {item['description']}"
            for item in agenda
        ]
    )
    return agenda_str
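You can call the tools directly to see what they return. The output below is only illustrative, since the timestamps depend on when you run it.

print(get_current_time())  # e.g. "2025-01-15 09:42:10" (depends on the current time)
print(get_agenda())        # four agenda lines, all dated with today's date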
We will use the Hugging Face Inference Client to interact with the LLM.
from huggingface_hub import InferenceClient

client = InferenceClient(api_key="hf_xxx")  # Replace hf_xxx with your Hugging Face access token.
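Before moving on, you can optionally send a minimal request to confirm the client and token work. This sketch uses the same model and the same chat completions call as the exercise below.

response = client.chat.completions.create(
    model="meta-llama/Llama-3.2-3B-Instruct",
    messages=[{"role": "user", "content": "Reply with a one-sentence greeting."}],
    max_tokens=50,
)
print(response.choices[0].message.content)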
Your task is to build the messages object for the LLM in the check_agenda
function.
def check_agenda(query: str):
    current_time = get_current_time()
    agenda = get_agenda()
    messages = [
        # TODO: build the messages object for the LLM
    ]
    completion = client.chat.completions.create(
        model="meta-llama/Llama-3.2-3B-Instruct", messages=messages, max_tokens=500
    )
    return completion.choices[0].message.content
Below is the solution to the task.
def check_agenda(query: str):
    current_time = get_current_time()
    agenda = get_agenda()
    # Build the messages object: system prompt, tool output, and the user query.
    messages = [
        {
            "role": "system",
            "content": "You are a helpful agenda assistant.",
        },
        # The tool results (current time and agenda) are passed as context for the model.
        {"role": "tool", "content": f"Current time: {current_time}\nAgenda: {agenda}"},
        {"role": "user", "content": query},
    ]
    completion = client.chat.completions.create(
        model="meta-llama/Llama-3.2-3B-Instruct", messages=messages, max_tokens=500
    )
    return completion.choices[0].message.content
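With the solution in place, you can ask a question about the agenda. The exact wording of the answer depends on the model, but it should be grounded in the tool output.

print(check_agenda("What is my next meeting after 12:00?"))
# Expected: an answer referring to the 12:30 "Meet the student" entry (wording varies by model).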