Update README.md
README.md CHANGED
# MVInpainter
[NeurIPS 2024] MVInpainter: Learning Multi-View Consistent Inpainting to Bridge 2D and 3D Editing

[[arXiv]](https://arxiv.org/pdf/2408.08000) [[Project Page]](https://ewrfcas.github.io/MVInpainter/)

## Preparation

### Setup repository and environment
```
git clone https://github.com/ewrfcas/MVInpainter.git
cd MVInpainter

conda create -n mvinpainter python=3.8
conda activate mvinpainter

pip install -r requirements.txt
mim install mmcv-full
pip install mmflow

# Replace mmflow's decoder file with our modified raft_decoder.py for faster flow estimation
cp ./check_points/mmflow/raft_decoder.py /usr/local/conda/envs/mvinpainter/lib/python3.8/site-packages/mmflow/models/decoders/
```
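The `cp` destination above hard-codes one conda layout. If your environment lives elsewhere, the sketch below (our suggestion, not part of the original instructions) resolves the installed `mmflow` location instead:
```
# Find where mmflow is installed (run inside the activated mvinpainter env)
MMFLOW_DIR=$(python -c "import mmflow, os; print(os.path.dirname(mmflow.__file__))")
# Copy the provided decoder into mmflow's decoders package
cp ./check_points/mmflow/raft_decoder.py "$MMFLOW_DIR/models/decoders/"
```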
### Dataset preparation (training)
1. Download [Co3dv2](https://github.com/facebookresearch/co3d) and [MVImgNet](https://github.com/GAP-LAB-CUHK-SZ/MVImgNet) for MVInpainter-O.
   Download [Real10k](https://google.github.io/realestate10k/download.html), [DL3DV](https://github.com/DL3DV-10K/Dataset), and [Scannet++](https://kaldir.vc.in.tum.de/scannetpp) for MVInpainter-F.
2. Download the index information, masking formats, and captions from [Link](). Put them in `./data`. Note that we removed some dirty samples from the aforementioned datasets. Since Co3dv2 contains object masks but MVImgNet does not, we additionally provide complete [foreground masks]() for MVImgNet, extracted with `CarveKit`. Please put the MVImgNet masks in `./data/mvimagenet/masks`.

### Pretrained weights
1. [RAFT weights]() (put it in `./check_points/mmflow/`).
2. [SD1.5-inpainting]() (put it in `./check_points/`).
3. [AnimateDiff weights](). We revised the key names for easier `peft` usage (put it in `./check_points/`).
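After these downloads, `./check_points` should roughly look like the tree below; the bracketed names are placeholders for whatever the released archives unpack to:
```
- check_points
  - mmflow
    - raft_decoder.py    # decoder file copied into mmflow during setup
    - <raft_weights>     # RAFT weights (step 1)
  - <sd15_inpainting>    # SD1.5-inpainting weights (step 2)
  - <animatediff>        # AnimateDiff weights with revised key names (step 3)
```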
## Training

Training with fixed nframe=12 (use `o` or `f` in `{o,f}` to pick the MVInpainter-O or MVInpainter-F config):
```
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 accelerate launch --mixed_precision="fp16" --num_processes=8 --num_machines 1 --main_process_port 29502 \
    --config_file configs/deepspeed/acc_zero2.yaml train.py \
    --config_file="configs/mvinpainter_{o,f}.yaml" \
    --output_dir="check_points/mvinpainter_{o,f}_256" \
    --train_log_interval=250 \
    --val_interval=2000 \
    --val_cfg=7.5 \
    --img_size=256
```
Finetuning with dynamic frames (8~24):
```
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 accelerate launch --mixed_precision="fp16" --num_processes=8 --num_machines 1 --main_process_port 29502 \
    --config_file configs/deepspeed/acc_zero2.yaml train.py \
    --config_file="configs/mvinpainter_{o,f}.yaml" \
    --output_dir="check_points/mvinpainter_{o,f}_256" \
    --train_log_interval=250 \
    --val_interval=2000 \
    --val_cfg=7.5 \
    --img_size=256 \
    --resume_from_checkpoint="latest" \
    --dynamic_nframe \
    --low_nframe 8 \
    --high_nframe 24
```
Please use `mvinpainter_{o,f}_512.yaml` to train 512x512 models.
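For a quick sanity check before a full 8-GPU run, a single-GPU launch sketch (our example, assuming the DeepSpeed zero2 config also accepts one process; the intervals and output dir here are illustrative):
```
CUDA_VISIBLE_DEVICES=0 accelerate launch --mixed_precision="fp16" --num_processes=1 --num_machines 1 --main_process_port 29502 \
    --config_file configs/deepspeed/acc_zero2.yaml train.py \
    --config_file="configs/mvinpainter_o.yaml" \
    --output_dir="check_points/debug_run" \
    --train_log_interval=50 \
    --val_interval=500 \
    --val_cfg=7.5 \
    --img_size=256
```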
## Inference

### Model weights
1. [MVInpainter-O]() (novel view synthesis, put it in `./check_points/`).
2. [MVInpainter-F]() (removal, put it in `./check_points/`).

### Pipeline

1. Remove or synthesize the foreground of the first view through 2D inpainting. We recommend using [Fooocus-inpainting](https://github.com/lllyasviel/Fooocus) to accomplish this. Get tracking masks through [Track-Anything](https://github.com/gaomingqi/Track-Anything).
Some examples are provided in `./demo`, organized as follows (a sketch for laying out your own scene comes after the tree):
```
- <folder>
  - images     # input images with foregrounds
  - inpainted  # inpainted result of the first view
  - masks      # masks for images
```
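To try your own scene, create the same layout before running the pipeline; a minimal sketch (`my_scene` is just a placeholder name):
```
mkdir -p demo/my_scene/images demo/my_scene/inpainted demo/my_scene/masks
```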
2. (Optional) Remove foregrounds from all other views through `MVInpainter-F`:
```
CUDA_VISIBLE_DEVICES=0 python test_removal.py \
    --load_path="check_points/mvinpainter_f_256" \
    --dataset_root="./demo/removal" \
    --output_path="demo_removal" \
    --resume_from_checkpoint="best" \
    --val_cfg=5.0 \
    --img_size=256 \
    --sampling_interval=1.0 \
    --dataset_names realworld \
    --reference_path="inpainted" \
    --nframe=24 \
    --save_images # whether to save samples individually
```


3. Obtain the 3D bbox of the object generated by 2D inpainting through `python draw_bbox.py`. Put the image `000x.png` and `000x.json` from `./bbox` into the `obj_bbox` folder of the target scene.


4. Mask adaption to obtain `warp_masks`. If the supporting plane on which the foreground is placed covers only a small fraction of the whole image, please use methods like [Grounded-Segment-Anything](https://github.com/IDEA-Research/Grounded-Segment-Anything) to get `plane_masks`.
```
CUDA_VISIBLE_DEVICES=0 python mask_adaption.py --input_path="demo/nvs/kitchen" --edited_index=0
```
You can also use `--no_irregular_mask` to disable irregular masking for more precise warped masks.



Make sure the final folder looks like the tree below (a quick sanity check follows):
```
- <folder>
  - obj_bbox     # inpainted 2D images with the new foreground and bbox json
  - removal      # images without foregrounds
  - warp_masks   # masks from adaption for the removal folder
  - plane_masks  # (optional, only for mask_adaption) masks of the supporting plane where the foreground is placed
```
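Before running the next step, a quick check (our sketch; `demo/nvs/kitchen` is the example scene used above) that the required subfolders exist:
```
SCENE=demo/nvs/kitchen
# plane_masks is optional, so only the three required folders are checked
for d in obj_bbox removal warp_masks; do
    [ -d "$SCENE/$d" ] || echo "missing: $SCENE/$d"
done
```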
5. Run `MVInpainter-O` for novel view synthesis:
```
CUDA_VISIBLE_DEVICES=0 python test_nvs.py \
    --load_path="check_points/mvinpainter_o_256" \
    --dataset_root="./demo/nvs" \
    --output_path="demo_nvs" \
    --edited_index=0 \
    --resume_from_checkpoint="best" \
    --val_cfg=7.5 \
    --img_height=256 \
    --img_width=256 \
    --sampling_interval=1.0 \
    --nframe=24 \
    --prompt="a red apple with circle and round shape on the table." \
    --limit_frame=24
```

6. 3D reconstruction: see [Dust3R](https://github.com/naver/dust3r), [MVSFormer++](https://github.com/maybeLx/MVSFormerPlusPlus), and [3DGS](https://github.com/graphdeco-inria/gaussian-splatting) for more details.

## Cite
If you find our project helpful, please consider citing:

```
@article{cao2024mvinpainter,