Improve model card: Update pipeline tag, add comprehensive details and demos
#2 by nielsr (HF Staff) - opened

README.md CHANGED
@@ -1,16 +1,111 @@

Before (old front matter; the rest of the old 16-line stub README is not shown in this view):

---
library_name: transformers
pipeline_tag: text-generation
license: mit
base_model:
- Qwen/QwQ-32B
---

After (new model card):
---
base_model:
- Qwen/QwQ-32B
library_name: transformers
license: mit
pipeline_tag: image-text-to-text
tags:
- web-agent
- gui-agent
- multimodal
- reinforcement-learning
- react
---

# WebDancer: Towards Autonomous Information Seeking Agency

This repository contains the **WebDancer** model, presented in the paper [WebDancer: Towards Autonomous Information Seeking Agency](https://huggingface.co/papers/2505.22648).

**Code & Project Page**: https://github.com/Alibaba-NLP/WebAgent

## Abstract

Addressing intricate real-world problems necessitates in-depth information seeking and multi-step reasoning. Recent progress in agentic systems, exemplified by Deep Research, underscores the potential for autonomous multi-step research. In this work, we present a cohesive paradigm for building end-to-end agentic information seeking agents from a data-centric and training-stage perspective. Our approach consists of four key stages: (1) browsing data construction, (2) trajectory sampling, (3) supervised fine-tuning for effective cold start, and (4) reinforcement learning for enhanced generalisation. We instantiate this framework in a web agent based on ReAct, WebDancer. Empirical evaluations on the challenging information-seeking benchmarks GAIA and WebWalkerQA demonstrate the strong performance of WebDancer, achieving considerable results and highlighting the efficacy of our training paradigm. Further analysis of agent training provides valuable insights and actionable, systematic pathways for developing more capable agentic models. The code and demo are released at https://github.com/Alibaba-NLP/WebAgent.

## Features of WebDancer

* A native agentic search-and-reasoning model built on the ReAct framework, aimed at autonomous information seeking and _Deep Research_-like capability (a schematic ReAct trajectory is sketched below).
* We introduce a four-stage training paradigm comprising **browsing data construction, trajectory sampling, supervised fine-tuning for effective cold start, and reinforcement learning for improved generalization**, enabling the agent to autonomously acquire search and reasoning skills.
* Our data-centric approach integrates trajectory-level supervised fine-tuning and reinforcement learning (DAPO) into a scalable pipeline for **training agentic systems** via SFT or RL.
* WebDancer achieves a Pass@3 score of 64.1% on GAIA and 62.0% on WebWalkerQA.
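
At inference time, a ReAct-style agent interleaves free-form thoughts, tool calls, and tool outputs until it commits to an answer. The trace below is a schematic illustration only; the tool names and formatting are placeholders, not the model's exact prompt format:

```
Thought: I need up-to-date information about X; I should search first.
Action: search["concise query about X"]
Observation: <search results>
Thought: The second result looks authoritative; open it.
Action: visit["<url from the results>"]
Observation: <page content>
Thought: I now have enough evidence to answer.
Answer: <final answer>
```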

## Quick Start

Run the commands below from the [`WebDancer`](https://github.com/Alibaba-NLP/WebAgent/tree/main/WebDancer) folder of the repository.
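
For example, starting from a fresh clone:

```bash
# Clone the project and enter the WebDancer subfolder
git clone https://github.com/Alibaba-NLP/WebAgent.git
cd WebAgent/WebDancer
```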

### Step 0: Set Up the Environment

```bash
conda create -n webdancer python=3.12
conda activate webdancer
pip install -r requirements.txt
```

### Step 1: Deploy the Model

Download the WebDancer model from [🤗 HuggingFace](https://huggingface.co/Alibaba-NLP/WebDancer-32B) and deploy it using the provided scripts with [sglang](https://github.com/sgl-project/sglang):

```bash
cd scripts
bash deploy_model.sh WebDancer_PATH
```

> **Note:** Replace `WebDancer_PATH` with the actual path to the downloaded model.
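
Once the server is up, you can sanity-check it with a single request. This is a minimal sketch that assumes the script launches sglang's OpenAI-compatible server on its default port 30000 and serves the model under the name `WebDancer-32B`; check `deploy_model.sh` for the actual port and model name:

```bash
# Smoke test against the OpenAI-compatible endpoint (port and model name
# are assumptions; adjust them to match the deployment script).
curl http://localhost:30000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "WebDancer-32B",
        "messages": [{"role": "user", "content": "Hello"}],
        "max_tokens": 64
      }'
```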

### Step 2: Run the Demo

Edit the following keys in [`WebDancer/scripts/run_demo.sh`](https://github.com/Alibaba-NLP/WebAgent/blob/main/WebDancer/scripts/run_demo.sh):

- `GOOGLE_SEARCH_KEY`
- `JINA_API_KEY`
- `DASHSCOPE_API_KEY`
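
These are ordinary environment variables set inside the script; a sketch of what the edited lines might look like, assuming plain `export` statements (placeholder values):

```bash
# In WebDancer/scripts/run_demo.sh (placeholder values)
export GOOGLE_SEARCH_KEY="<your-google-search-key>"
export JINA_API_KEY="<your-jina-key>"
export DASHSCOPE_API_KEY="<your-dashscope-key>"
```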

Then, launch the demo with Gradio to interact with the WebDancer model:

```bash
cd scripts
bash run_demo.sh
```

## Demos

We provide demos for WebWalkerQA, GAIA and Daily Use.
Our model can execute long-horizon tasks with **multiple steps** and **complex reasoning**, such as web traversal, information seeking and question answering.

<div align="center">
<h3>WebWalkerQA</h3>
<video src="https://github.com/user-attachments/assets/0bbaf55b-897e-4c57-967d-a6e8bbd2167e" />
</div>

<div align="center">
<h3>GAIA</h3>
<video src="https://github.com/user-attachments/assets/935c668e-6169-4712-9c04-ac80f0531872" />
</div>

<div align="center">
<h3>Daily Use</h3>
<video src="https://github.com/user-attachments/assets/d1d5b533-4009-478b-bd87-96b86389327d" />
</div>

## Citation

If this work is helpful, please kindly cite it as:

```bibtex
@misc{wu2025webdancer,
  title={WebDancer: Towards Autonomous Information Seeking Agency},
  author={Jialong Wu and Baixuan Li and Runnan Fang and Wenbiao Yin and Liwen Zhang and Zhengwei Tao and Dingchu Zhang and Zekun Xi and Yong Jiang and Pengjun Xie and Fei Huang and Jingren Zhou},
  year={2025},
  eprint={2505.22648},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2505.22648},
}

@misc{wu2025webwalker,
  title={WebWalker: Benchmarking LLMs in Web Traversal},
  author={Jialong Wu and Wenbiao Yin and Yong Jiang and Zhenglin Wang and Zekun Xi and Runnan Fang and Deyu Zhou and Pengjun Xie and Fei Huang},
  year={2025},
  eprint={2501.07572},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2501.07572},
}
```