# ProactiveVideoQA: A Comprehensive Benchmark Evaluating Proactive Interactions in Video Large Language Models
<div align="center">
  <div style="margin: 30px 0">
    <a href="https://arxiv.org/abs/2507.09313" style="margin: 0 10px">📄 arXiv Paper</a> |
    <a href="https://github.com/yellow-binary-tree/ProactiveVideoQA" style="margin: 0 10px"> 🖥️ Github Code </a> |
    <a href="https://huggingface.co/datasets/wangyueqian/ProactiveVideoQA" style="margin: 0 10px">📦 Data</a>
  </div>
</div>
## Introduction
ProactiveVideoQA is the first comprehensive benchmark designed to evaluate a system's ability to engage in proactive interaction in multimodal dialogue settings.
Unlike traditional turn-by-turn dialogue systems, in proactive interaction the model must decide *when* to respond during video playback, so both response timing and response textual content are evaluated.
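As a rough illustration of this protocol, the sketch below runs a streaming model over a video frame by frame and records when it chooses to speak. The `model.set_question()` / `model.step()` interface is hypothetical (not part of this repository); the `reply_timespan` field comes from the annotation format described in the Data Format section below.

```python
# Minimal sketch of a proactive inference loop.
# `model` is a hypothetical streaming video LLM that returns a reply string
# when it decides to speak at the current frame, and None otherwise.
def run_proactive(model, frames, fps, question):
    model.set_question(question)          # the user question is given at time 0
    replies = []                          # (timestamp_in_seconds, text) pairs
    for i, frame in enumerate(frames):
        text = model.step(frame)          # model decides whether to respond now
        if text is not None:
            replies.append((i / fps, text))
    return replies

# A reply is correctly timed if its timestamp falls inside a ground-truth
# reply_timespan; its text is then judged against the reference answer.
def timing_is_correct(reply_time, reply_timespan):
    start, end = reply_timespan
    return start <= reply_time <= end
```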
## Dataset Statistics
ProactiveVideoQA contains 4 tasks:
1. **Proactive web-video QA** `[WEB]`: centering on general web-video understanding.
1. **Proactive ego-centric video QA** `[EGO]`: centering on first-person-view video comprehension, particularly relevant in robotics and daily assistant applications.
1. **Proactive TV-series video QA** `[TV]`: emphasizing dialogue and social relationship understanding with speech input.
1. **Proactive video anomaly detection** `[VAD]`: targeting surveillance video monitoring and alerting.
- **1377** videos from different sources
- **1427** different questions, and **3510** ground-truth reply turns
- Fully proactive questions and open-ended answers ✅
## Data Format
Each test example in `{dataset}/anno.json` has the following format:
```json
{
    "question_id": "OSfMU69X3C4.7.mp4", // unique identifier for this test example
    "video": "OSfMU69X3C4.7.mp4",       // video file name in `video` folder
    "conversation": [       // model input
        {"role": "user", "time": 0, "content": "What are the people doing in the office?"}
    ],
    "answer": [     // expected model output
        {       // the model is expected to reply with this content within the reply timespan
            "role": "assistant", "content": "People are working at workstations.",
            "reply_timespan": [0.0, 9.88]
        },
        { ... } 
    ]
}
```
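As an illustration, here is a minimal Python sketch for reading one task's annotation file and iterating over its expected reply turns. The field names and the `anno.json` file name follow the format above; the task folder name `web` and the assumption that the file contains a JSON list of such examples are illustrative.

```python
import json

# Load the annotation file of one task (folder name "web" is an assumption;
# substitute the actual task directory from the dataset).
with open("web/anno.json") as f:
    examples = json.load(f)

for ex in examples:
    question = ex["conversation"][0]["content"]   # user question asked at time 0
    print(ex["question_id"], ex["video"], question)
    for turn in ex["answer"]:
        start, end = turn["reply_timespan"]       # time window in which the model should reply
        print(f"  expected reply in [{start:.2f}s, {end:.2f}s]: {turn['content']}")
```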
## Citation
```bibtex
@misc{wang2025proactivevideoqacomprehensivebenchmarkevaluating,
      title={ProactiveVideoQA: A Comprehensive Benchmark Evaluating Proactive Interactions in Video Large Language Models}, 
      author={Yueqian Wang and Xiaojun Meng and Yifan Wang and Huishuai Zhang and Dongyan Zhao},
      year={2025},
      eprint={2507.09313},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2507.09313}, 
}
```