Dataset viewer preview: columns video and label (class label, 84 classes); example label: 30Env1_Edited.

ROSE: Remove Objects with Side Effects in Videos Dataset

This repository contains the dataset released alongside the paper ROSE: Remove Objects with Side Effects in Videos.

Abstract

Video object removal has achieved advanced performance due to the recent success of video generative models. However, when addressing the side effects of objects, e.g., their shadows and reflections, existing works struggle to eliminate these effects due to the scarcity of paired video data for supervision. This paper presents ROSE, termed Remove Objects with Side Effects, a framework that systematically studies an object's effects on its environment, which can be categorized into five common cases: shadows, reflections, light, translucency and mirror. Given the challenges of curating paired videos exhibiting the aforementioned effects, we leverage a 3D rendering engine for synthetic data generation. We carefully construct a fully automatic pipeline for data preparation, which simulates a large-scale paired dataset with diverse scenes, objects, shooting angles, and camera trajectories. ROSE is implemented as a video inpainting model built on a diffusion transformer. To localize all object-correlated areas, the entire video is fed into the model for reference-based erasing. Moreover, additional supervision is introduced to explicitly predict the areas affected by side effects, which can be revealed through the differential mask between the paired videos. To fully investigate model performance on removing various side effects, we present a new benchmark, dubbed ROSE-Bench, incorporating both common scenarios and the five special side effects for comprehensive evaluation. Experimental results demonstrate that ROSE achieves superior performance compared to existing video object erasing models and generalizes well to real-world video scenarios.


The dataset presented in this paper is compiled exclusively from publicly available sources. If any material included in this dataset is found to infringe upon your rights, please contact us at [email protected], and we will promptly remove the relevant content. Please note that this dataset is intended solely for academic research purposes.
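
To fetch the dataset files locally, one option is the huggingface-cli downloader shown below; the repository id is left as a placeholder, so substitute the id shown at the top of this page:

    # <dataset-repo-id> is a placeholder; use the id shown in this page's header.
    huggingface-cli download <dataset-repo-id> --repo-type dataset --local-dir ./ROSE-data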


Sample Usage

To get started with the code associated with the ROSE paper and perform a quick test, follow these steps.

Dependencies and Installation

  1. Clone the repository:

    git clone https://github.com/Kunbyte-AI/ROSE.git
    cd ROSE
    
  2. Create a Conda environment and install dependencies:

    conda create -n rose python=3.12 -y
    conda activate rose
    pip3 install -r requirements.txt
    

    (Note: This requires specific versions of CUDA, PyTorch, and Torchvision as mentioned in the GitHub README).
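
    If you want to confirm which versions ended up in the environment, an optional sanity check (not part of the official instructions) is:

    # Print the installed PyTorch / TorchVision versions and the CUDA build PyTorch was compiled against.
    python -c "import torch; print(torch.__version__, torch.version.cuda)"
    python -c "import torchvision; print(torchvision.__version__)"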

Prepare Pretrained Models

Before running inference, you need to prepare the pretrained models. Download the Transformer3D weights of ROSE from this link and arrange them in the weights directory as follows:

weights
 └── transformer
      ├── config.json
      └── diffusion_pytorch_model.safetensors

Additionally, download the base model (Wan2.1-Fun-1.3B-InP) from this link and place it in the models directory. Refer to the GitHub README's "Prepare pretrained models" section for the exact directory structure.
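
If you prefer the command line to manual downloads, a sketch using huggingface-cli is below; both repository ids are guesses based on the project name, so verify them against the links above before running:

    # Repo ids below are illustrative assumptions; confirm them via the links in this section.
    huggingface-cli download Kunbyte-AI/ROSE --local-dir weights
    huggingface-cli download alibaba-pai/Wan2.1-Fun-1.3B-InP --local-dir models/Wan2.1-Fun-1.3B-InP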

Quick Test

Examples are provided in the data/eval folder within the cloned repository. Run inference using the inference.py script:

python inference.py

The script accepts several options to customize the inference process:

Usage:

python inference.py [options]

Options:
  --validation_videos  Path(s) to input videos 
  --validation_masks   Path(s) to mask videos 
  --validation_prompts Text prompts (default: [""])
  --output_dir         Output directory 
  --video_length       Number of frames per video (must be 16n+1)
  --sample_size        Frame size: height width (default: 480 720)
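
Putting the options together, a representative invocation might look like the following; the input paths are illustrative placeholders, so point them at one of the examples under data/eval:

    # Paths below are placeholders; 49 frames satisfies the 16n+1 constraint.
    python inference.py \
        --validation_videos data/eval/example/video.mp4 \
        --validation_masks data/eval/example/mask.mp4 \
        --validation_prompts "" \
        --output_dir outputs \
        --video_length 49 \
        --sample_size 480 720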

You can also interact with an online demo of ROSE on Hugging Face Spaces.

Citation

If you find our paper or dataset useful for your research, please consider citing:

@article{miao2025rose,
   title={ROSE: Remove Objects with Side Effects in Videos}, 
   author={Miao, Chenxuan and Feng, Yutong and Zeng, Jianshu and Gao, Zixiang and Liu, Hantang and Yan, Yunfeng and Qi, Donglian and Chen, Xi and Wang, Bin and Zhao, Hengshuang},
   journal={arXiv preprint arXiv:2508.18633},
   year={2025}
}