btrabucco committed
Commit c67e7cf · verified · 1 Parent(s): ac2734e

Update README.md

Files changed (1)
  1. README.md +28 -10
README.md CHANGED
@@ -1,15 +1,15 @@
- ---
- title: README
- emoji: 🚀
- colorFrom: blue
- colorTo: pink
- sdk: static
- pinned: false
- ---
+ ---
+ title: README
+ emoji: 🚀
+ colorFrom: blue
+ colorTo: pink
+ sdk: static
+ pinned: false
+ ---

  # InSTA: Towards Internet-Scale Training For Agents

- ![Pipeline Overview](https://data-for-agents.github.io/static/images/pipeline_overview.png)
+ ![Pipeline Overview](./static/images/data-for-agents-teaser.gif)

  **Brandon Trabucco (1) Gunnar Sigurdsson (2) Robinson Piramuthu (2) Ruslan Salakhutdinov (1)**
@@ -17,4 +17,22 @@ pinned: false

  The predominant approach for training web navigation agents gathers human demonstrations for a set of popular websites and hand-written tasks, but it is becoming clear that human data are an inefficient resource. We develop a pipeline to facilitate Internet-scale training for agents without laborious human annotations. In the first stage, an LLM generates tasks for 150k diverse websites. In the next stage, LLM agents complete tasks and produce trajectories. In the final stage, an LLM reviews the trajectories and judges their success. Language models are competitive with human annotators, detecting and filtering out harmful content with 97% accuracy, generating feasible tasks at an 89% rate, and judging successful trajectories with 82.6% accuracy. Scaling the pipeline, agents based on Llama 3.1 70B solve 16.7% of tasks for 150k sites. Training on the data generated by our pipeline is competitive with training on human demonstrations. In data-limited settings derived from Mind2Web and WebLINX, we improve Step Accuracy by up to +89.5% and +122.1% respectively for agents trained on mixtures of data from our pipeline and human data. When training agents with all available human data from these benchmarks, agents fail to generalize to diverse real sites, and adding our data improves their generalization by +149.0% for WebLINX and +156.3% for Mind2Web. Code available at: [data-for-agents.github.io](https://data-for-agents.github.io).

- [website](https://data-for-agents.github.io) | [paper](https://arxiv.org/abs/2502.06776) | [code](https://github.com/data-for-agents)
+ [website](https://data-for-agents.github.io) | [paper](https://arxiv.org/abs/2502.06776) | [code](https://github.com/data-for-agents/insta)
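
To make the three stages of the abstract concrete, here is a minimal Python sketch of the pipeline loop, assuming a generic text-in, text-out `llm` callable. The function names, prompts, and the ten-step episode cap are illustrative assumptions, not the repository's actual API.

```python
from typing import Callable, Iterable

# Stage 1: mark a site safe/unsafe and propose a realistic user task.
def propose_task(llm: Callable[[str], str], url: str) -> str | None:
    reply = llm(f"Is {url} safe to annotate? If so, propose one realistic task.")
    return None if reply.startswith("UNSAFE") else reply

# Stage 2: roll out an LLM agent, returning its action trajectory.
def run_agent(llm: Callable[[str], str], url: str, task: str) -> list[str]:
    trajectory: list[str] = []
    for _ in range(10):  # illustrative cap on episode length
        action = llm(f"Site: {url}\nTask: {task}\nHistory: {trajectory}\nNext action?")
        trajectory.append(action)
        if action == "stop":
            break
    return trajectory

# Stage 3: an LLM judge labels the rollout as success or failure.
def judge(llm: Callable[[str], str], task: str, trajectory: list[str]) -> bool:
    return llm(f"Task: {task}\nTrajectory: {trajectory}\nSuccess? yes/no") == "yes"

# Keep only judged-successful rollouts as training data.
def build_dataset(llm: Callable[[str], str], urls: Iterable[str]):
    for url in urls:
        task = propose_task(llm, url)
        if task is None:  # the proposer filters out ~85% of sites
            continue
        trajectory = run_agent(llm, url, task)
        if judge(llm, task, trajectory):
            yield {"url": url, "task": task, "trajectory": trajectory}
```

Keeping only judged-successful rollouts is what lets the pipeline trade cheap LLM calls for expensive human annotation.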
+
+ ## Section 4 - Internet-Scale Task Generation
+
+ ![Task Generation Overview](./static/images/task_generation_overview.png)
+
+ **Task proposal and filtering for 150k live websites.** Starting from 1,000,000 websites, we employ a pretrained language model that marks sites as safe or unsafe for annotation and assigns a realistic task that a hypothetical user might want to accomplish on each site. The task proposer rejects 85% of websites from the pipeline, resulting in 150k safe websites annotated with realistic tasks.
+
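
A sketch of how the proposer's verdict might be parsed, assuming a hypothetical reply format of either `UNSAFE` or `TASK: <description>` (the paper's actual prompt is not reproduced here); the final line checks the yield implied by the caption's numbers.

```python
import re

# Hypothetical proposer prompt: the model replies either "UNSAFE"
# or "TASK: <description>". This format is assumed for illustration.
PROPOSER_PROMPT = (
    "You will be shown a website domain. First decide whether the site is "
    "safe to annotate (no harmful content). If unsafe, reply 'UNSAFE'. "
    "Otherwise reply 'TASK: <one realistic task a user might want to "
    "accomplish on this site>'.\n\nDomain: {domain}"
)

def parse_proposal(reply: str) -> str | None:
    """Return the proposed task, or None if the site was rejected."""
    match = re.match(r"TASK:\s*(.+)", reply.strip(), re.DOTALL)
    return match.group(1).strip() if match else None

# Expected yield of the filter: the proposer rejects ~85% of sites,
# so 1,000,000 candidate websites -> ~150,000 annotated tasks.
print(int(1_000_000 * (1 - 0.85)))  # 150000
```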
+ ## Section 5 - Internet-Scale Agents
+
+ ![Environment Overview](./static/images/environment_overview.png)
+
+ **Automatic evaluation for agents with language model judges.** Building on the large and diverse set of tasks generated by the pipeline, we employ pretrained language models to attempt and evaluate web navigation tasks. We dispatch language model agents to perform tasks by making calls to the Playwright API. We then employ language model judges to evaluate rollouts from agents.
+
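
A minimal rollout-and-judge sketch using Playwright's synchronous Python API; the `agent_llm` and `judge_llm` callables and the `verb:argument` action format are hypothetical stand-ins for the pipeline's language models, not the repository's interface.

```python
from playwright.sync_api import sync_playwright

def rollout(agent_llm, judge_llm, url: str, task: str, max_steps: int = 10) -> bool:
    """Run one agent episode in a real browser, then ask a judge to grade it."""
    transcript: list[str] = []
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.goto(url)
        for _ in range(max_steps):
            observation = page.content()[:4000]    # truncated page HTML
            action = agent_llm(task, observation)  # e.g. "click:text=Login"
            transcript.append(action)
            verb, _, arg = action.partition(":")
            if verb == "click":
                page.click(arg)                    # arg is a Playwright selector
            elif verb == "goto":
                page.goto(arg)
            elif verb == "stop":
                break
        browser.close()
    # The judge reviews the full transcript and labels the rollout.
    return judge_llm(task, transcript) == "success"
```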
+ ## Section 6 - Training Agents
+
+ ![Training Results](./static/images/training-results.png)
+
+ **Training agents with internet-scale data.** In data-limited settings derived from Mind2Web and WebLINX (left plot), we improve Step Accuracy by up to +89.5% and +122.1% respectively for agents trained on mixtures of data from our pipeline and human data. When training agents with all available human data from these benchmarks (right plot), agents trained on existing human data struggle to generalize to diverse real sites, and adding our data improves their generalization by +149.0% for WebLINX and +156.3% for Mind2Web.
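
One way to realize the data mixtures described above, sketched with the Hugging Face `datasets` library; the records and the 50/50 mixing ratio are placeholders, not the paper's actual data or setting.

```python
from datasets import Dataset, interleave_datasets

# Stub records standing in for full trajectories from each source.
pipeline_data = Dataset.from_list(
    [{"text": "pipeline trajectory 1"}, {"text": "pipeline trajectory 2"}]
)
human_data = Dataset.from_list(
    [{"text": "human demonstration 1"}, {"text": "human demonstration 2"}]
)

# Interleave the two sources before fine-tuning; the mixture ratio is a
# tunable hyperparameter (50/50 here is an arbitrary illustrative choice).
mixed = interleave_datasets(
    [pipeline_data, human_data], probabilities=[0.5, 0.5], seed=0
)
for example in mixed:
    print(example["text"])
```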