ProactiveVideoQA: A Comprehensive Benchmark Evaluating Proactive Interactions in Video Large Language Models

Introduction

ProactiveVideoQA is the first comprehensive benchmark designed to evaluate a system's ability to engage in proactive interaction in multimodal dialogue settings. Unlike traditional turn-by-turn dialogue systems, a proactive model must decide when to respond during video playback, so both the timing and the textual content of its responses are evaluated.
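
To make the setting concrete, the toy sketch below illustrates a model consuming a video stream frame by frame and deciding both when and what to answer. All class and method names here are hypothetical placeholders, not part of this benchmark or any released model API:

class ToyProactiveModel:
    """Hypothetical stand-in: replies every 10 seconds with a canned answer."""

    def observe(self, frame, timestamp):
        pass  # a real model would update its streaming video context here

    def should_respond(self, question, timestamp):
        return timestamp > 0 and timestamp % 10 == 0

    def respond(self, question, timestamp):
        return f"(reply to '{question}' at t={timestamp:.1f}s)"


def proactive_dialogue(model, frames, question, fps=1.0):
    """Feed frames one by one; collect (time, reply) pairs whenever the model speaks."""
    replies = []
    for i, frame in enumerate(frames):
        t = i / fps
        model.observe(frame, t)
        if model.should_respond(question, t):
            replies.append((t, model.respond(question, t)))
    return replies  # scored on both reply timing and reply content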

Dataset Statistics

ProactiveVideoQA contains 4 tasks:

  1. Proactive web-video QA [WEB]: centering on general web-video understanding.

  2. Proactive ego-centric video QA [EGO]: centering on first-person-view video comprehension, particularly relevant in robotics and daily assistant applications.

  3. Proactive TV-series video QA [TV]: emphasizing dialogue and social relationship understanding with speech input.

  4. Proactive video anomaly detection [VAD]: targeting surveillance video monitoring and alerting.

  • 1377 videos from different sources
  • 1427 different questions and 3510 ground-truth reply turns
  • Fully proactive questions and open-ended answers ✅

Data Format

Each test example in {dataset}/anno.json has the following format:

{
    "question_id": "OSfMU69X3C4.7.mp4", // unique identifier for this test example
    "video": "OSfMU69X3C4.7.mp4",       // video file name in `video` folder
    "conversation": [       // model input
        {"role": "user", "time": 0, "content": "What are the people doing in the office?"}
    ],
    "answer": [     // expected model output
        {       // the model is expected to reply with this content within the reply timespan
            "role": "assistant", "content": "People are working at workstations.",
            "reply_timespan": [0.0, 9.88]
        },
        { ... } 
    ]
}
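
As a minimal loading sketch, assuming each anno.json holds a list of examples in the format above (the web folder name below is only an illustrative choice of task split):

import json
from pathlib import Path

dataset_dir = Path("web")          # one of the four task folders
with open(dataset_dir / "anno.json", encoding="utf-8") as f:
    examples = json.load(f)        # list of test examples in the format shown above

for ex in examples:
    video_path = dataset_dir / "video" / ex["video"]   # video file referenced by the example
    user_turns = ex["conversation"]                    # timestamped user questions (model input)
    for turn in ex["answer"]:                          # expected proactive replies
        start, end = turn["reply_timespan"]            # the model should reply within [start, end] seconds
        # run your model on video_path and compare its replies against turn["content"] here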

Citation

@misc{wang2025proactivevideoqacomprehensivebenchmarkevaluating,
      title={ProactiveVideoQA: A Comprehensive Benchmark Evaluating Proactive Interactions in Video Large Language Models}, 
      author={Yueqian Wang and Xiaojun Meng and Yifan Wang and Huishuai Zhang and Dongyan Zhao},
      year={2025},
      eprint={2507.09313},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2507.09313}, 
}