Mandark-droid committed
Commit d0bd9af
1 Parent(s): 9603227
Add autonomous Agent Chat screen for Track 2
- Implement smolagents CodeAgent with MCP integration
- Support 3 configurable model types (HF API, Nebius, Gemini)
- Connect to TraceMind MCP Server via SSE transport
- Load custom YAML prompt templates for TraceMind domain
- Add singleton pattern for agent reuse and proper cleanup
- Update documentation and environment configuration
- .env.example +10 -2
- README.md +55 -5
- data_loader.py +2 -2
- prompts/code_agent.yaml +100 -138
- screens/chat.py +63 -20
.env.example
CHANGED
@@ -1,12 +1,20 @@
 # HuggingFace Configuration
 HF_TOKEN=your_huggingface_token_here
 
+# Agent Model Configuration (for Chat Screen - Track 2)
+# Options: "hfapi" (default), "inference_client", "litellm"
+AGENT_MODEL_TYPE=hfapi
+
+# API Keys for different model types
+# Required if AGENT_MODEL_TYPE=litellm
+GEMINI_API_KEY=your_gemini_api_key_here
+
 # TraceMind MCP Server Configuration
 # Use the deployed TraceMind-mcp-server endpoint
-MCP_SERVER_URL=https://kshitijthakkar-tracemind-mcp-server.hf.space/gradio_api/mcp/
+MCP_SERVER_URL=https://kshitijthakkar-tracemind-mcp-server.hf.space/gradio_api/mcp/sse
 
 # After hackathon submission, use:
-# MCP_SERVER_URL=https://mcp-1st-birthday-tracemind-mcp-server.hf.space/gradio_api/mcp/
+# MCP_SERVER_URL=https://mcp-1st-birthday-tracemind-mcp-server.hf.space/gradio_api/mcp/sse
 
 # Dataset Configuration
 LEADERBOARD_REPO=kshitijthakkar/smoltrace-leaderboard
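These variables are read by the Space at startup. A minimal sketch of how the app might load and validate them; the `python-dotenv` usage and the validation checks are illustrative assumptions, not code from the repo:

```python
# Illustrative sketch (not from the repo): load and sanity-check the .env values.
import os
from dotenv import load_dotenv  # assumes python-dotenv is installed

load_dotenv()  # read .env into the process environment

AGENT_MODEL_TYPE = os.getenv("AGENT_MODEL_TYPE", "hfapi")
if AGENT_MODEL_TYPE not in ("hfapi", "inference_client", "litellm"):
    raise ValueError(f"Unsupported AGENT_MODEL_TYPE: {AGENT_MODEL_TYPE}")

# Only the litellm option needs a Gemini key
if AGENT_MODEL_TYPE == "litellm" and not os.getenv("GEMINI_API_KEY"):
    raise ValueError("GEMINI_API_KEY is required when AGENT_MODEL_TYPE=litellm")

# Note the /sse suffix: smolagents' MCPClient talks to Gradio MCP servers over SSE
MCP_SERVER_URL = os.getenv(
    "MCP_SERVER_URL",
    "https://kshitijthakkar-tracemind-mcp-server.hf.space/gradio_api/mcp/sse",
)
```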
README.md
CHANGED
@@ -27,10 +27,12 @@ TraceMind-AI is a comprehensive platform for evaluating AI agent performance acr
 ## Features
 
 - **📊 Real-time Leaderboard**: Live evaluation data from HuggingFace datasets
-- **🤖
+- **🤖 Autonomous Agent Chat**: Interactive agent powered by smolagents with MCP tools (Track 2)
+- **💬 MCP Integration**: AI-powered analysis using remote MCP servers
 - **💰 Cost Estimation**: Calculate evaluation costs for different models and configurations
 - **🔍 Trace Visualization**: Detailed OpenTelemetry trace analysis
 - **📈 Performance Metrics**: GPU utilization, CO2 emissions, token usage tracking
+- **🧠 Agent Reasoning**: View step-by-step agent planning and tool execution
 
 ## MCP Integration
 
@@ -84,8 +86,16 @@ Create a `.env` file with the following variables:
 # HuggingFace Configuration
 HF_TOKEN=your_token_here
 
-#
-
+# Agent Model Configuration (for Chat Screen - Track 2)
+# Options: "hfapi" (default), "inference_client", "litellm"
+AGENT_MODEL_TYPE=hfapi
+
+# API Keys for different model types
+# Required if AGENT_MODEL_TYPE=litellm
+GEMINI_API_KEY=your_gemini_api_key_here
+
+# MCP Server URL (note: /sse endpoint for smolagents integration)
+MCP_SERVER_URL=https://kshitijthakkar-tracemind-mcp-server.hf.space/gradio_api/mcp/sse
 
 # Dataset Configuration
 LEADERBOARD_REPO=kshitijthakkar/smoltrace-leaderboard
@@ -94,6 +104,25 @@ LEADERBOARD_REPO=kshitijthakkar/smoltrace-leaderboard
 DISABLE_OAUTH=true
 ```
 
+### Agent Model Options
+
+The Agent Chat screen supports three model configurations:
+
+1. **`hfapi` (Default)**: Uses HuggingFace Inference API
+   - Model: `Qwen/Qwen2.5-Coder-32B-Instruct`
+   - Requires: `HF_TOKEN`
+   - Best for: General use, free tier available
+
+2. **`inference_client`**: Uses Nebius provider
+   - Model: `deepseek-ai/DeepSeek-V3-0324`
+   - Requires: `HF_TOKEN`
+   - Best for: Advanced reasoning, faster inference
+
+3. **`litellm`**: Uses Google Gemini
+   - Model: `gemini/gemini-2.5-flash`
+   - Requires: `GEMINI_API_KEY`
+   - Best for: Gemini-specific features
+
 ## Data Sources
 
 TraceMind-AI loads evaluation data from HuggingFace datasets:
@@ -164,13 +193,34 @@ insights = mcp_client.analyze_leaderboard(
 3. View detailed OpenTelemetry trace visualization
 4. Ask questions about the trace using MCP-powered analysis
 
+### Using the Agent Chat (Track 2)
+
+1. Navigate to the "🤖 Agent Chat" tab
+2. The autonomous agent will initialize with MCP tools from TraceMind MCP Server
+3. Ask questions about agent evaluations:
+   - "What are the top 3 performing models and their costs?"
+   - "Estimate the cost of running 500 tests with DeepSeek-V3 on H200"
+   - "Load the leaderboard and show me the last 5 run IDs"
+4. Watch the agent plan, execute tools, and provide detailed answers
+5. Enable "Show Agent Reasoning" to see step-by-step tool execution
+6. Use Quick Action buttons for common queries
+
+**Example Questions:**
+- Analysis: "Analyze the current leaderboard and show me the top performing models with their costs"
+- Cost Comparison: "Compare the costs of the top 3 models - which one offers the best value?"
+- Recommendations: "Based on the leaderboard data, which model would you recommend for a production system?"
+
 ## Technology Stack
 
 - **UI Framework**: Gradio 5.49.1
-- **
+- **Agent Framework**: smolagents 1.22.0+ (Track 2)
+- **MCP Protocol**: MCP integration via Gradio & smolagents MCPClient
 - **Data**: HuggingFace Datasets API
 - **Authentication**: HuggingFace OAuth
-- **AI**:
+- **AI Models**:
+  - Default: Qwen/Qwen2.5-Coder-32B-Instruct (HF Inference API)
+  - Optional: DeepSeek-V3 (Nebius), Gemini 2.5 Flash
+  - MCP Server: Google Gemini 2.5 Pro
 
 ## Development
 
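The three "Agent Model Options" documented above map directly onto smolagents model classes. A hedged sketch of the selection logic follows; the `provider="nebius"` argument for the `inference_client` path is an assumption inferred from the README, and the actual wiring lives in `screens/chat.py`:

```python
# Sketch of the model selection described in the README; details may differ
# from screens/chat.py.
import os
from smolagents import InferenceClientModel, LiteLLMModel

def build_model(model_type: str = "hfapi"):
    if model_type == "litellm":
        # Google Gemini via LiteLLM; requires GEMINI_API_KEY
        return LiteLLMModel(
            model_id="gemini/gemini-2.5-flash",
            api_key=os.environ["GEMINI_API_KEY"],
        )
    if model_type == "inference_client":
        # DeepSeek-V3 through the Nebius provider; requires HF_TOKEN
        # (provider="nebius" is an assumption based on the README text)
        return InferenceClientModel(
            model_id="deepseek-ai/DeepSeek-V3-0324",
            provider="nebius",
            token=os.environ["HF_TOKEN"],
        )
    # Default "hfapi": HuggingFace Inference API
    return InferenceClientModel(
        model_id="Qwen/Qwen2.5-Coder-32B-Instruct",
        token=os.environ.get("HF_TOKEN"),
    )
```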
data_loader.py
CHANGED
@@ -18,7 +18,7 @@ DataSource = Literal["json", "huggingface", "both"]
 
 class DataLoader:
     """
-    Unified data loader for
+    Unified data loader for TraceMind
 
     Supports:
     - Local JSON files
@@ -36,7 +36,7 @@ class DataLoader:
     ):
         self.data_source = data_source
         self.json_data_path = Path(json_data_path or os.getenv("JSON_DATA_PATH", "./sample_data"))
-        self.leaderboard_dataset = leaderboard_dataset or os.getenv("LEADERBOARD_DATASET", "
+        self.leaderboard_dataset = leaderboard_dataset or os.getenv("LEADERBOARD_DATASET", "kshitijthakkar/smoltrace-leaderboard")
        self.hf_token = hf_token or os.getenv("HF_TOKEN")
 
        # Cache
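With this change, callers that omit the dataset argument now fall back to the env var and then to the SmolTrace leaderboard. A hypothetical usage sketch, with argument names taken from the constructor shown in the diff:

```python
# Hypothetical usage of the patched defaults (not code from the repo).
loader = DataLoader(data_source="huggingface")
# leaderboard_dataset resolves to LEADERBOARD_DATASET, then the new default:
print(loader.leaderboard_dataset)  # -> kshitijthakkar/smoltrace-leaderboard
```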
prompts/code_agent.yaml
CHANGED
@@ -1,147 +1,105 @@
 system_prompt: |-
-  You are an expert assistant
-
+  You are TraceMind Agent, an expert AI assistant specialized in analyzing agent evaluation data and providing insights about model performance, costs, and optimization.
+
+  You have access to powerful MCP (Model Context Protocol) tools that connect to the TraceMind MCP Server to analyze:
+  - **Leaderboard Data**: Agent evaluation results across different models
+  - **Trace Data**: OpenTelemetry traces showing step-by-step agent execution
+  - **Cost Estimates**: Predictions for running evaluations with different configurations
+  - **Dataset Access**: Raw access to smoltrace-* datasets on HuggingFace
+
+  You will be given a task to solve as best you can using these tools and Python code.
   To solve the task, you must plan forward to proceed in a series of steps, in a cycle of Thought, Code, and Observation sequences.
 
   At each step, in the 'Thought:' sequence, you should first explain your reasoning towards solving the task and the tools that you want to use.
-  Then in the Code sequence you should write the code in simple Python. The code sequence must be opened with '
+  Then in the Code sequence you should write the code in simple Python. The code sequence must be opened with '```python', and closed with '```'.
   During each intermediate step, you can use 'print()' to save whatever important information you will then need.
   These print outputs will then appear in the 'Observation:' field, which will be available as input for the next step.
   In the end you have to return a final answer using the `final_answer` tool.
 
-  Here are a few examples
+  Here are a few examples specific to TraceMind:
   ---
-  Task: "
-
-  Thought: I will
+  Task: "What are the top 3 performing models on the leaderboard and how much do they cost?"
+
+  Thought: I will use the `run_analyze_leaderboard` tool to get insights from the leaderboard data, focusing on overall performance.
+  ```python
+  leaderboard_analysis = run_analyze_leaderboard(
+      repo="kshitijthakkar/smoltrace-leaderboard",
+      metric="overall",
+      time_range="last_week",
+      top_n=3
+  )
+  print(leaderboard_analysis)
+  ```
+  Observation: "Based on 247 evaluations:\nTop Performers:\n- GPT-4: 95.8% accuracy, $0.05/run\n- Claude-3-Opus: 94.2% accuracy, $0.04/run\n- Llama-3.1-70B: 93.4% accuracy, $0.002/run"
+
+  Thought: I now have the top 3 models with their performance and costs. Let me format this as the final answer.
+  ```python
+  final_answer("""The top 3 performing models are:
+  1. **GPT-4**: 95.8% success rate at $0.05 per run
+  2. **Claude-3-Opus**: 94.2% success rate at $0.04 per run
+  3. **Llama-3.1-70B**: 93.4% success rate at $0.002 per run (most cost-effective!)
+  """)
+  ```
 
   ---
-  Task: "
-
-  Thought: I will use
+  Task: "Estimate the cost of running 500 tests with DeepSeek-V3 on H200 GPU"
+
+  Thought: I will use the run_estimate_cost tool to predict the cost, duration, and CO2 emissions for this evaluation configuration.
+  ```python
+  cost_estimate = run_estimate_cost(
+      model="deepseek-ai/DeepSeek-V3",
+      agent_type="both",
+      num_tests=500,
+      hardware="gpu_h200"
+  )
+  print(cost_estimate)
+  final_answer(cost_estimate)
+  ```
 
   ---
-  Task:
-  In a 1979 interview, Stanislaus Ulam discusses with Martin Sherwin about other great physicists of his time, including Oppenheimer.
-  What does he say was the consequence of Einstein learning too much math on his creativity, in one word?
-
-  Thought: I need to find and read the 1979 interview of Stanislaus Ulam with Martin Sherwin.
-  {{code_block_opening_tag}}
-  pages = web_search(query="1979 interview Stanislaus Ulam Martin Sherwin physicists Einstein")
-  print(pages)
-  {{code_block_closing_tag}}
-  Observation:
-  No result found for query "1979 interview Stanislaus Ulam Martin Sherwin physicists Einstein".
-
-  Thought: The query was maybe too restrictive and did not find any results. Let's try again with a broader query.
-  {{code_block_opening_tag}}
-  pages = web_search(query="1979 interview Stanislaus Ulam")
-  print(pages)
-  {{code_block_closing_tag}}
+  Task: "Load the leaderboard dataset and show me the last 5 run IDs"
+
+  Thought: I will use the run_get_dataset tool to load the smoltrace-leaderboard dataset as JSON.
+  ```python
+  import json
+  dataset_json = run_get_dataset(
+      repo="kshitijthakkar/smoltrace-leaderboard",
+      max_rows=5
+  )
+  data = json.loads(dataset_json)
+  print(f"Total rows in dataset: {data['total_rows']}")
+  print(f"\nLast 5 run IDs:")
+  for row in data['data'][:5]:
+      print(f"  - {row['run_id']} ({row['model']})")
+  ```
   Observation:
-
-  [Stanislaus Ulam 1979 interview](https://ahf.nuclearmuseum.org/voices/oral-histories/stanislaus-ulams-interview-1979/)
-
-  [Ulam discusses Manhattan Project](https://ahf.nuclearmuseum.org/manhattan-project/ulam-manhattan-project/)
-
-  (truncated)
+  Total rows in dataset: 247
 
-  {{code_block_closing_tag}}
-  Observation:
-  Manhattan Project Locations:
-  Los Alamos, NM
-  Stanislaus Ulam was a Polish-American mathematician. He worked on the Manhattan Project at Los Alamos and later helped design the hydrogen bomb. In this interview, he discusses his work at
-  (truncated)
-
-  Thought: I now have the final answer: from the webpages visited, Stanislaus Ulam says of Einstein: "He learned too much mathematics and sort of diminished, it seems to me personally, it seems to me his purely physics creativity." Let's answer in one word.
-  {{code_block_opening_tag}}
-  final_answer("diminished")
-  {{code_block_closing_tag}}
-
-  ---
-  Task: "Which city has the highest population: Guangzhou or Shanghai?"
-
-  Thought: I need to get the populations for both cities and compare them: I will use the tool `web_search` to get the population of both cities.
-  {{code_block_opening_tag}}
-  for city in ["Guangzhou", "Shanghai"]:
-      print(f"Population {city}:", web_search(f"{city} population"))
-  {{code_block_closing_tag}}
-  Observation:
-  Population Guangzhou: ['Guangzhou has a population of 15 million inhabitants as of 2021.']
-  Population Shanghai: '26 million (2019)'
-
-  Thought: Now I know that Shanghai has the highest population.
-  {{code_block_opening_tag}}
-  final_answer("Shanghai")
-  {{code_block_closing_tag}}
-
-  ---
-  Task: "What is the current age of the pope, raised to the power 0.36?"
-
-  Thought: I will use the tool `wikipedia_search` to get the age of the pope, and confirm that with a web search.
-  {{code_block_opening_tag}}
-  pope_age_wiki = wikipedia_search(query="current pope age")
-  print("Pope age as per wikipedia:", pope_age_wiki)
-  pope_age_search = web_search(query="current pope age")
-  print("Pope age as per google search:", pope_age_search)
-  {{code_block_closing_tag}}
-  Observation:
-  Pope age: "The pope Francis is currently 88 years old."
+  Last 5 run IDs:
+    - run_abc123 (openai/gpt-4)
+    - run_def456 (anthropic/claude-3-opus)
+    - run_ghi789 (meta-llama/Llama-3.1-70B)
+    - run_jkl012 (google/gemini-pro)
+    - run_mno345 (deepseek-ai/deepseek-coder)
 
-  Thought: I
-  {{code_block_closing_tag}}
+  Thought: I have successfully loaded the dataset and extracted the run IDs. Let me provide the final answer.
+  ```python
+  final_answer("The last 5 run IDs from the leaderboard are: run_abc123 (GPT-4), run_def456 (Claude-3-Opus), run_ghi789 (Llama-3.1-70B), run_jkl012 (Gemini-Pro), and run_mno345 (DeepSeek-Coder)")
+  ```
 
   Above examples were using notional tools that might not exist for you. On top of performing computations in the Python code snippets that you create, you only have access to these tools, behaving like regular python functions:
-
+  ```python
   {%- for tool in tools.values() %}
   {{ tool.to_code_prompt() }}
   {% endfor %}
-
+  ```
 
   {%- if managed_agents and managed_agents.values() | list %}
   You can also give tasks to team members.
   Calling a team member works similarly to calling a tool: provide the task description as the 'task' argument. Since this team member is a real human, be as detailed and verbose as necessary in your task description.
   You can also include any relevant variables or context using the 'additional_args' argument.
   Here is a list of the team members that you can call:
-
+  ```python
   {%- for agent in managed_agents.values() %}
   def {{ agent.name }}(task: str, additional_args: dict[str, Any]) -> str:
       """{{ agent.description }}
@@ -151,21 +109,22 @@ system_prompt: |-
       additional_args: Dictionary of extra inputs to pass to the managed agent, e.g. images, dataframes, or any other contextual data it may need.
       """
   {% endfor %}
-
+  ```
   {%- endif %}
 
   Here are the rules you should always follow to solve your task:
-  1. Always provide a 'Thought:' sequence, and a '
+  1. Always provide a 'Thought:' sequence, and a '```python' sequence ending with '```', else you will fail.
   2. Use only variables that you have defined!
-  3. Always use the right arguments for the tools. DO NOT pass the arguments as a dict as in 'answer =
-  4. For tools
-  5.
-  6.
-  7.
-  8.
-  9.
-  10.
-  11.
+  3. Always use the right arguments for the tools. DO NOT pass the arguments as a dict as in 'answer = run_analyze_leaderboard({'repo': "kshitijthakkar/smoltrace-leaderboard"})', but use the arguments directly as in 'answer = run_analyze_leaderboard(repo="kshitijthakkar/smoltrace-leaderboard")'.
+  4. For MCP tools (run_analyze_leaderboard, run_debug_trace, run_estimate_cost, run_compare_runs, run_get_dataset): These return structured Markdown or JSON outputs that you can confidently use in your analysis. Feel free to chain multiple calls when needed.
+  5. Call a tool only when needed, and never re-do a tool call that you previously did with the exact same parameters.
+  6. Don't name any new variable with the same name as a tool: for instance don't name a variable 'final_answer'.
+  7. Never create any notional variables in our code, as having these in your logs will derail you from the true variables.
+  8. You can use imports in your code, but only from the following list of modules: {{authorized_imports}}
+  9. The state persists between code executions: so if in one step you've created variables or imported modules, these will all persist.
+  10. Don't give up! You're in charge of solving the task, not providing directions to solve it.
+  11. When analyzing costs, always consider both API costs (for models like GPT-4) and GPU compute costs (for local models on HF Jobs).
+  12. When comparing models, consider multiple dimensions: accuracy, cost, speed, CO2 emissions, and use case requirements.
 
   {%- if custom_instructions %}
   {{custom_instructions}}
@@ -174,27 +133,30 @@ system_prompt: |-
   Now Begin!
 planning:
   initial_plan : |-
-    You are a world expert at analyzing
-    Below I will present you a task. You will need to 1. build a survey of facts known or needed to solve the task, then 2. make a plan of action to solve the task.
+    You are a world expert at analyzing agent evaluation data to derive insights and plan accordingly towards solving a task.
+    Below I will present you a task related to agent evaluation analysis. You will need to 1. build a survey of facts known or needed to solve the task, then 2. make a plan of action to solve the task.
 
     ## 1. Facts survey
     You will build a comprehensive preparatory survey of which facts we have at our disposal and which ones we still need.
-    These "facts" will typically be specific names,
+    These "facts" will typically be specific model names, run IDs, metrics, costs, etc. Your answer should use the below headings:
     ### 1.1. Facts given in the task
     List here the specific facts given in the task that could help you (there might be nothing here).
 
     ### 1.2. Facts to look up
-    List here any facts that we may need to look up
+    List here any facts that we may need to look up from the MCP tools:
+    - Leaderboard data (via run_analyze_leaderboard or run_get_dataset)
+    - Trace data (via run_debug_trace)
+    - Cost estimates (via run_estimate_cost)
+    - Dataset contents (via run_get_dataset)
 
     ### 1.3. Facts to derive
-    List here anything that we want to derive from the above by logical reasoning,
+    List here anything that we want to derive from the above by logical reasoning, computation, or comparison.
 
     Don't make any assumptions. For each item, provide a thorough reasoning. Do not add anything else on top of three headings above.
 
    ## 2. Plan
    Then for the given task, develop a step-by-step high-level plan taking into account the above inputs and list of facts.
-    This plan should involve individual tasks based on the available tools, that if executed correctly will yield the correct answer.
+    This plan should involve individual tasks based on the available MCP tools, that if executed correctly will yield the correct answer.
    Do not skip steps, do not add any superfluous steps. Only write the high-level plan, DO NOT DETAIL INDIVIDUAL TOOL CALLS.
    After writing the final step of the plan, write the '<end_plan>' tag and stop there.
@@ -230,7 +192,7 @@ planning:
    ```
    First in part 1, write the facts survey, then in part 2, write your plan.
  update_plan_pre_messages: |-
-    You are a world expert at analyzing
+    You are a world expert at analyzing agent evaluation data and planning accordingly towards solving a task.
    You have been given the following task:
    ```
    {{task}}
@@ -252,9 +214,9 @@ planning:
 
    Then write a step-by-step high-level plan to solve the task above.
    ## 2. Plan
-    ### 2.
+    ### 2.1. ...
    Etc.
-    This plan should involve individual tasks based on the available tools, that if executed correctly will yield the correct answer.
+    This plan should involve individual tasks based on the available MCP tools, that if executed correctly will yield the correct answer.
    Beware that you have {remaining_steps} steps remaining.
    Do not skip steps, do not add any superfluous steps. Only write the high-level plan, DO NOT DETAIL INDIVIDUAL TOOL CALLS.
    After writing the final step of the plan, write the '<end_plan>' tag and stop there.
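The chat screen loads this file with `yaml.safe_load` and passes it to `CodeAgent` (see `screens/chat.py` below). A quick sketch for verifying that the customized template parses and contains the sections this diff touches; the assertions are illustrative checks, not repo code:

```python
# Sketch: parse the customized template and spot-check the edited sections.
import yaml

with open("prompts/code_agent.yaml", encoding="utf-8") as stream:
    templates = yaml.safe_load(stream)

assert "system_prompt" in templates
assert "initial_plan" in templates["planning"]
assert "update_plan_pre_messages" in templates["planning"]
print(templates["system_prompt"].splitlines()[0])
# -> "You are TraceMind Agent, an expert AI assistant specialized in ..."
```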
screens/chat.py
CHANGED
@@ -8,10 +8,11 @@ import gradio as gr
 from typing import List, Tuple, Dict, Any
 import json
 import os
+import yaml
 
 # Smolagents imports
 try:
-    from smolagents import CodeAgent,
+    from smolagents import CodeAgent, InferenceClientModel, LiteLLMModel
     from smolagents.mcp_client import MCPClient
     SMOLAGENTS_AVAILABLE = True
 except ImportError:
@@ -26,18 +27,38 @@ MODEL_TYPE = os.getenv("AGENT_MODEL_TYPE", "hfapi")  # Options: "hfapi", "infere
 HF_TOKEN = os.getenv("HF_TOKEN", "")
 GEMINI_API_KEY = os.getenv("GEMINI_API_KEY", "")
 
+# Global agent and MCP client (reused across requests)
+_global_agent = None
+_global_mcp_client = None
+
 
 def create_agent():
-    """Create smolagents agent with MCP server tools"""
+    """Create smolagents agent with MCP server tools (singleton pattern)"""
+    global _global_agent, _global_mcp_client
+
+    # Return existing agent if already created
+    if _global_agent is not None:
+        return _global_agent
+
     if not SMOLAGENTS_AVAILABLE:
        return None
 
    try:
-        # Connect to TraceMind MCP Server
-
+        # Connect to TraceMind MCP Server using SSE transport
+        print(f"Connecting to TraceMind MCP Server at {MCP_SERVER_URL}...")
+        print(f"Using SSE transport for Gradio MCP server...")
+
+        # For Gradio MCP servers, must specify transport: "sse"
+        # See: https://huggingface.co/learn/mcp-course/unit2/gradio-client
+        _global_mcp_client = MCPClient(
+            {"url": MCP_SERVER_URL, "transport": "sse"}
+        )
+
+        # Get tools from MCP server (MCPClient.get_tools() doesn't support structured_output parameter)
+        print("Fetching tools from MCP server...")
+        tools = _global_mcp_client.get_tools()
 
-
-        tools = mcp_client.get_tools()
+        print(f"Received {len(tools)} tools from MCP server")
 
        # Log available tools
        tools_array = [{
@@ -69,33 +90,37 @@ def create_agent():
        )
        print(f"Using LiteLLMModel: gemini/gemini-2.5-flash")
 
-        else:  # Default: hfapi
-            #
-            model =
-                model_id='Qwen/Qwen2.5-Coder-32B-Instruct',
-                custom_role_conversions=None,
+        else:  # Default: hfapi (using InferenceClientModel)
+            # InferenceClientModel with Qwen (HF Inference API)
+            model = InferenceClientModel(
+                model_id='Qwen/Qwen3-Coder-480B-A35B-Instruct',
+                token=HF_TOKEN if HF_TOKEN else None,
            )
-            print(f"Using
+            print(f"Using InferenceClientModel: Qwen/Qwen3-Coder-480B-A35B-Instruct (HF Inference API)")
 
-        #
+        # Load prompt templates from YAML file
        prompt_template_path = os.path.join(os.path.dirname(__file__), "../prompts/code_agent.yaml")
+        with open(prompt_template_path, 'r', encoding='utf-8') as stream:
+            prompt_templates = yaml.safe_load(stream)
 
        # Create CodeAgent with MCP server tools and YAML prompt template
        agent = CodeAgent(
            tools=[*tools],
            model=model,
-            prompt_templates=
+            prompt_templates=prompt_templates,
            max_steps=10,
            planning_interval=5,
            additional_authorized_imports=[
                'time', 'math', 'queue', 're', 'stat', 'collections', 'datetime',
-                'statistics', 'itertools', 'unicodedata', 'random',
-                'pandas', 'numpy', 'json', 'yaml', 'plotly'
+                'statistics', 'itertools', 'unicodedata', 'random',
+                'pandas', 'numpy', 'json', 'yaml', 'plotly'
            ]
        )
 
+        # Store agent globally for reuse
+        _global_agent = agent
+        print("✅ Agent created successfully and cached for reuse")
+
        return agent
 
    except Exception as e:
@@ -105,6 +130,22 @@ def create_agent():
        return None
 
 
+def cleanup_agent():
+    """Cleanup MCP client connection"""
+    global _global_agent, _global_mcp_client
+
+    if _global_mcp_client is not None:
+        try:
+            print("Disconnecting MCP client...")
+            _global_mcp_client.disconnect()
+            print("✅ MCP client disconnected")
+        except Exception as e:
+            print(f"[WARNING] Error disconnecting MCP client: {e}")
+        finally:
+            _global_mcp_client = None
+            _global_agent = None
+
+
 def chat_with_agent(
    message: str,
    history: List[Tuple[str, str]],
@@ -167,7 +208,7 @@ def create_chat_ui():
    gr.Markdown("*Autonomous AI agent powered by smolagents with MCP tools*")
 
    # Info banner
-    with gr.Accordion("💡 About This Agent", open=
+    with gr.Accordion("💡 About This Agent", open=False):
        gr.Markdown("""
        ### 🎯 What is this?
        This is an **autonomous AI agent** that can:
@@ -252,7 +293,9 @@ def on_send_message(message, history, show_reasoning):
 
 
 def on_clear_chat():
-    """Handle clear button click"""
+    """Handle clear button click and cleanup agent connection"""
+    # Cleanup agent and MCP client connection
+    cleanup_agent()
    return [], "", "*Agent's reasoning steps will appear here...*"
 
 
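Taken together, the singleton helpers give the screen one shared agent and one MCP connection per process. A hypothetical end-to-end sketch of the lifecycle these changes enable (`agent.run()` is the standard smolagents entry point; the question string is just an example):

```python
# Hypothetical lifecycle of the helpers defined above.
agent = create_agent()     # first call: connects MCPClient over SSE, builds CodeAgent
cached = create_agent()    # later calls return the cached instance
assert agent is cached

if agent is not None:
    # Run one autonomous Thought/Code/Observation loop against the MCP tools
    print(agent.run("What are the top 3 performing models and their costs?"))

cleanup_agent()            # disconnect the MCP client and drop the cached agent
```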