# ✏️ Data for VidChain Exercise

**VidChain: Chain-of-Tasks with Metric-based Direct Preference Optimization for Dense Video Captioning**

Ji Soo Lee\*, Jongha Kim\*, Jeehye Na, Jinyoung Park, Hyunwoo J. Kim†

AAAI 2025

## 🎯 Learning Objectives

By working through this exercise, you will:

- Reproduce the baseline behavior of a video-language model (VTimeLLM, CVPR 2024 Highlight).
- Observe the limitations of existing approaches in temporal reasoning and coherence.
- Implement and experiment with VidChain's improvements using M-DPO (a minimal DPO-loss sketch follows this list).
- Run inference on videos to generate dense temporal captions (Dense Video Captioning; an example prediction format is shown after this list).
- Evaluate how preference alignment improves performance over the baselines.
- Discuss potential strategies for ensembling the different reasoning paths of VidChain's CoTasks.
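
For orientation, M-DPO builds on Direct Preference Optimization. Below is a minimal, hypothetical sketch of a *standard* DPO loss in PyTorch; VidChain's M-DPO additionally constructs and weights preference pairs using DVC evaluation metrics, so follow the paper and the exercise code for those details rather than this sketch. All names here (`dpo_loss` and its arguments) are illustrative, not part of the released codebase.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """Standard DPO loss over summed per-sequence log-probs.

    VidChain's M-DPO further uses DVC metrics (e.g., SODA) to build the
    chosen/rejected pairs; that metric-based pair construction is NOT
    shown here -- see the paper.
    """
    # Log-ratio of the policy vs. the frozen reference model for each pair
    pi_logratios = policy_chosen_logps - policy_rejected_logps
    ref_logratios = ref_chosen_logps - ref_rejected_logps
    # Push the chosen response above the rejected one, relative to the reference
    logits = pi_logratios - ref_logratios
    return -F.logsigmoid(beta * logits).mean()
```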
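
Dense Video Captioning predictions are typically serialized as timestamped segment-caption pairs per video (ActivityNet Captions style). The snippet below is a hypothetical illustration of that shape; the video id, timestamps, captions, and filename are invented, and the exact schema expected by the exercise's evaluation scripts may differ.

```python
import json

# Hypothetical DVC prediction file: each video id maps to a list of
# segments, where "timestamp" is [start_sec, end_sec] for the caption.
predictions = {
    "results": {
        "v_example_0001": [
            {"timestamp": [0.0, 12.4],
             "sentence": "A man sets up a tripod on a hill."},
            {"timestamp": [12.4, 31.0],
             "sentence": "He films the sunset over the valley."},
        ]
    }
}

with open("dvc_predictions.json", "w") as f:
    json.dump(predictions, f, indent=2)
```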

## Citations 🌱

```bibtex
@inproceedings{lee2025vidchain,
  title={VidChain: Chain-of-Tasks with Metric-based Direct Preference Optimization for Dense Video Captioning},
  author={Lee, Ji Soo and Kim, Jongha and Na, Jeehye and Park, Jinyoung and Kim, Hyunwoo J},
  booktitle={AAAI},
  year={2025}
}
```