---
license: cc-by-nc-4.0
task_categories:
- video-to-video
---

# LongV-EVAL: A Benchmark for Long Video Editing Evaluation

[Paper](https://huggingface.co/papers/2502.05433)

LongV-EVAL is a benchmark dataset designed for evaluating text-driven long video editing methods. It consists of 75 high-quality videos, each approximately one minute long, covering diverse domains such as landscapes, people, and animals. The dataset provides meticulously annotated editing prompts for three aspects: foreground, background, and style, enabling comprehensive evaluation of editing quality, temporal consistency, and semantic alignment.

## Dataset Structure

The dataset is organized into four folders:

- `videos/`: Contains 75 MP4 files of source videos (original, unedited videos).
- `foreground/`: Includes 75 text files with prompts focusing on **foreground object editing** (e.g., changing object attributes or replacing objects).
- `background/`: Includes 75 text files with prompts for **background modification** (e.g., altering scene context or tone).
- `style/`: Includes 75 text files with prompts for **artistic style transfer** (e.g., applying styles like Van Gogh, watercolor, or Picasso).
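
The sketch below shows one way to pair each source video with its three prompt files. It assumes prompt files share the video's base filename (e.g., `videos/001.mp4` alongside `foreground/001.txt`); this naming convention and the `LongV-EVAL` root path are assumptions, so verify them against the downloaded layout.

```python
from pathlib import Path

# Hypothetical local root; point this at wherever the dataset is downloaded.
ROOT = Path("LongV-EVAL")


def load_samples(root: Path = ROOT):
    """Pair each source video with its foreground, background, and style prompts.

    Assumes prompt files share the video's base filename
    (e.g., videos/001.mp4 <-> foreground/001.txt); check the actual
    file layout after download, as this is not guaranteed by the card.
    """
    samples = []
    for video_path in sorted((root / "videos").glob("*.mp4")):
        stem = video_path.stem
        sample = {"video": video_path}
        for aspect in ("foreground", "background", "style"):
            prompt_path = root / aspect / f"{stem}.txt"
            # Missing prompt files are recorded as None rather than raising.
            sample[aspect] = (
                prompt_path.read_text(encoding="utf-8").strip()
                if prompt_path.exists()
                else None
            )
        samples.append(sample)
    return samples


if __name__ == "__main__":
    samples = load_samples()
    print(f"Loaded {len(samples)} video/prompt groups")
```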