Improve dataset card: Add metadata, links, abstract, sample usage, and citation

This PR significantly enhances the dataset card for the ROSE dataset by adding:
* `task_categories: video-to-video` and relevant `tags` (`video-inpainting`, `object-removal`, `diffusion-models`, `synthetic-data`, `benchmark`, `shadow-removal`, `reflection-removal`) to the metadata, improving discoverability.
* A clear introduction including the full paper abstract.
* Direct links to the Hugging Face paper page, the project page, and the GitHub repository.
* A detailed "Sample Usage" section, including installation steps and the `inference.py` command with its options, directly sourced from the GitHub README, along with a link to the interactive Hugging Face Space demo.
* The BibTeX citation for the paper.
These additions make the dataset card more informative and user-friendly, providing essential context and guidance for researchers on the Hugging Face Hub.
---
license: apache-2.0
task_categories:
- video-to-video
tags:
- video-inpainting
- object-removal
- diffusion-models
- synthetic-data
- benchmark
- shadow-removal
- reflection-removal
---

# ROSE: Remove Objects with Side Effects in Videos Dataset

This repository contains the dataset released alongside the paper [ROSE: Remove Objects with Side Effects in Videos](https://huggingface.co/papers/2508.18633).

* **Paper:** [ROSE: Remove Objects with Side Effects in Videos](https://huggingface.co/papers/2508.18633)
* **Project Page:** [https://rose2025-inpaint.github.io/](https://rose2025-inpaint.github.io/)
* **Code:** [https://github.com/Kunbyte-AI/ROSE](https://github.com/Kunbyte-AI/ROSE)

## Abstract

Video object removal has achieved advanced performance due to the recent success of video generative models. However, when addressing the side effects of objects, e.g., their shadows and reflections, existing works struggle to eliminate these effects due to the scarcity of paired video data as supervision. This paper presents ROSE, termed Remove Objects with Side Effects, a framework that systematically studies an object's effects on its environment, which can be categorized into five common cases: shadows, reflections, light, translucency and mirror. Given the challenges of curating paired videos exhibiting the aforementioned effects, we leverage a 3D rendering engine for synthetic data generation. We carefully construct a fully-automatic pipeline for data preparation, which simulates a large-scale paired dataset with diverse scenes, objects, shooting angles, and camera trajectories. ROSE is implemented as a video inpainting model built on a diffusion transformer. To localize all object-correlated areas, the entire video is fed into the model for reference-based erasing. Moreover, additional supervision is introduced to explicitly predict the areas affected by side effects, which can be revealed through the differential mask between the paired videos. To fully investigate the model's performance on various side effect removal, we present a new benchmark, dubbed ROSE-Bench, incorporating both common scenarios and the five special side effects for comprehensive evaluation. Experimental results demonstrate that ROSE achieves superior performance compared to existing video object erasing models and generalizes well to real-world video scenarios.

---

The dataset presented in this paper is compiled exclusively from publicly available sources. If any material included in this dataset is found to infringe upon your rights, please contact us at [email protected], and we will promptly remove the relevant content. Please note that this dataset is intended solely for **academic research purposes**.

---

## Sample Usage

To get started with the code associated with the ROSE paper and perform a quick test, follow these steps.
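
To fetch the dataset files themselves, here is a minimal sketch using `huggingface-cli` (the repo id below is a placeholder; substitute this dataset repository's actual id as shown at the top of this page):

```bash
# Placeholder repo id -- replace with this dataset repository's actual id.
huggingface-cli download Kunbyte/ROSE-Bench \
  --repo-type dataset \
  --local-dir data/rose_dataset
```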

### Dependencies and Installation

1. Clone the repository:
   ```bash
   git clone https://github.com/Kunbyte-AI/ROSE.git
   cd ROSE
   ```

2. Create a Conda environment and install dependencies:
   ```bash
   conda create -n rose python=3.12 -y
   conda activate rose
   pip3 install -r requirements.txt
   ```
   (Note: this requires specific versions of CUDA, PyTorch, and Torchvision, as listed in the GitHub README.)
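
   As a quick sanity check that the environment is usable (the exact CUDA/PyTorch versions required are those in the GitHub README, not pinned here):
   ```bash
   # Confirm the installed PyTorch build and that it can see a CUDA device.
   python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
   ```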

### Prepare Pretrained Models

Before running inference, you need to prepare the pretrained models.
Download the Transformer3D weights of ROSE from [this link](https://huggingface.co/Kunbyte/ROSE) and arrange them in the `weights` directory as follows:

```
weights
└── transformer
    ├── config.json
    └── diffusion_pytorch_model.safetensors
```

Additionally, download the base model (`Wan2.1-Fun-1.3B-InP`) from [this link](https://huggingface.co/alibaba-pai/Wan2.1-Fun-1.3B-InP) and place it in the `models` directory. Refer to the [GitHub README's "Prepare pretrained models" section](https://github.com/Kunbyte-AI/ROSE#prepare-pretrained-models) for the exact directory structure.
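
If you prefer not to arrange files by hand, both checkpoints can be fetched with `huggingface-cli` (the target directories below mirror the layout above; adjust them if the GitHub README specifies a different structure):

```bash
# ROSE Transformer3D weights -> weights/transformer
huggingface-cli download Kunbyte/ROSE --local-dir weights/transformer

# Wan2.1-Fun-1.3B-InP base model -> models/Wan2.1-Fun-1.3B-InP
huggingface-cli download alibaba-pai/Wan2.1-Fun-1.3B-InP --local-dir models/Wan2.1-Fun-1.3B-InP
```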

### Quick Test

Examples are provided in the `data/eval` folder within the cloned repository. Run inference using the `inference.py` script:

```shell
python inference.py
```

The script accepts several options to customize the inference process:

```
Usage:
  python inference.py [options]

Options:
  --validation_videos    Path(s) to input videos
  --validation_masks     Path(s) to mask videos
  --validation_prompts   Text prompts (default: [""])
  --output_dir           Output directory
  --video_length         Number of frames per video (must be 16n+1, e.g. 17, 33, 49)
  --sample_size          Frame size: height width (default: 480 720)
```
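
For example, a hypothetical invocation on your own clip (the file paths below are placeholders; the default empty prompt is used):

```shell
python inference.py \
  --validation_videos data/eval/example/video.mp4 \
  --validation_masks data/eval/example/mask.mp4 \
  --output_dir outputs \
  --video_length 49 \
  --sample_size 480 720
```

Here `--video_length 49` satisfies the 16n+1 constraint (16 × 3 + 1), and `--sample_size 480 720` keeps the default resolution.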

You can also interact with an online demo of ROSE on [Hugging Face Spaces](https://huggingface.co/spaces/Kunbyte/ROSE).

## Citation

If you find our paper or dataset useful for your research, please consider citing:

```bibtex
@article{miao2025rose,
  title={ROSE: Remove Objects with Side Effects in Videos},
  author={Miao, Chenxuan and Feng, Yutong and Zeng, Jianshu and Gao, Zixiang and Liu, Hantang and Yan, Yunfeng and Qi, Donglian and Chen, Xi and Wang, Bin and Zhao, Hengshuang},
  journal={arXiv preprint arXiv:2508.18633},
  year={2025}
}
```
|