Video-to-Video
SeedVR
jianyi.wang committed on
Commit bbac7fc · 1 Parent(s): 690b8d6
.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+*.png filter=lfs diff=lfs merge=lfs -text
README.md CHANGED
@@ -1,3 +1,64 @@
---
license: apache-2.0
---

<div align="center">
  <img src="assets/seedvr_logo.png" alt="SeedVR" width="400"/>
</div>

# SeedVR2: One-Step Video Restoration via Diffusion Adversarial Post-Training
> [Jianyi Wang](https://iceclear.github.io), [Shanchuan Lin](https://scholar.google.com/citations?user=EDWUw7gAAAAJ&hl=en), [Zhijie Lin](https://scholar.google.com/citations?user=xXMj6_EAAAAJ&hl=en), [Yuxi Ren](https://scholar.google.com.hk/citations?user=C_6JH-IAAAAJ&hl=en), [Meng Wei](https://openreview.net/profile?id=~Meng_Wei11), [Zongsheng Yue](https://zsyoaoa.github.io/), [Shangchen Zhou](https://shangchenzhou.com/), [Hao Chen](https://haochen-rye.github.io/), [Yang Zhao](https://scholar.google.com/citations?user=uPmTOHAAAAAJ&hl=en), [Ceyuan Yang](https://ceyuan.me/), [Xuefeng Xiao](https://scholar.google.com/citations?user=CVkM9TQAAAAJ&hl=en), [Chen Change Loy](https://www.mmlab-ntu.com/person/ccloy/index.html), [Lu Jiang](http://www.lujiang.info/)

<p align="center">
  <a href="https://iceclear.github.io/projects/seedvr2/">
    <img
      src="https://img.shields.io/badge/SeedVR2-Website-0A66C2?logo=safari&logoColor=white"
      alt="SeedVR Website"
    />
  </a>
  <a href="http://arxiv.org/abs/2506.05301">
    <img
      src="https://img.shields.io/badge/SeedVR2-Paper-red?logo=arxiv&logoColor=red"
      alt="SeedVR2 Paper on ArXiv"
    />
  </a>
  <a href="https://www.youtube.com/watch?v=tM8J-WhuAH0" target='_blank'>
    <img
      src="https://img.shields.io/badge/Demo%20Video-%23FF0000.svg?logo=YouTube&logoColor=white"
      alt="SeedVR2 Video Demo on YouTube"
    />
  </a>
</p>
> Recent advances in diffusion-based video restoration (VR) demonstrate significant improvements in visual quality, yet incur prohibitive computational cost during inference. While several distillation-based approaches have shown the potential of one-step image restoration, extending them to VR remains challenging and underexplored due to limited generation ability and poor temporal consistency, particularly for high-resolution video in real-world settings. In this work, we propose a one-step diffusion-based VR model, termed SeedVR2, which performs adversarial VR training against real data. To handle challenging high-resolution VR within a single step, we introduce several enhancements to both the model architecture and the training procedure. Specifically, we propose an adaptive window attention mechanism in which the window size is dynamically adjusted to fit the output resolution, avoiding the window inconsistency observed in high-resolution VR when window attention uses a predefined window size. To stabilize and improve adversarial post-training for VR, we further verify the effectiveness of a series of losses, including a proposed feature matching loss, without significantly sacrificing training efficiency. Extensive experiments show that SeedVR2 achieves comparable or even better performance than existing VR approaches in a single step.
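The core idea behind the adaptive window attention above can be illustrated with a small sketch. This is not the released implementation; the function names and the preferred window size of 8 tokens per axis are assumptions, chosen only to show how a window size can be picked to tile the output token grid evenly instead of using one predefined size at every resolution:

```python
# Illustrative sketch (not the SeedVR2 code): choose an attention window
# that divides the token grid exactly, so no window is truncated at the
# image boundary when the output resolution changes.

def adaptive_window(tokens: int, target: int = 8) -> int:
    """Return the divisor of `tokens` closest to the preferred window size."""
    divisors = [d for d in range(1, tokens + 1) if tokens % d == 0]
    return min(divisors, key=lambda d: abs(d - target))

def window_grid(h: int, w: int, target: int = 8) -> tuple[int, int]:
    """Per-axis window sizes that cover an h x w token grid with no remainder."""
    return adaptive_window(h, target), adaptive_window(w, target)
```

On grids where the preferred size already divides the token count this reduces to the fixed window (e.g. `window_grid(40, 64)` gives `(8, 8)`); otherwise a nearby divisor is used (e.g. `window_grid(45, 80)` gives `(9, 8)`), which is the kind of resolution-dependent adjustment the abstract describes.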
<p align="center"><img src="assets/teaser.png" width="100%"></p>
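The feature matching loss mentioned in the abstract is a standard GAN-stabilization term: the generator is pushed to match the discriminator's intermediate features on real data. A minimal, framework-agnostic sketch (feature maps flattened to plain float lists here; names are illustrative, not the paper's exact formulation):

```python
# Hedged sketch of a feature matching loss: mean absolute difference
# between matched discriminator feature maps for real vs. generated video.

def feature_matching_loss(real_feats, fake_feats):
    """Average per-layer L1 distance over paired feature lists."""
    layer_losses = []
    for real, fake in zip(real_feats, fake_feats):
        layer_losses.append(sum(abs(a - b) for a, b in zip(real, fake)) / len(real))
    return sum(layer_losses) / len(layer_losses)
```

In practice this would operate on tensors from several discriminator layers, with gradients stopped on the real-data features; the point is that it reuses activations the adversarial loss already computes, which is why it adds little training overhead.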
38
+
39
+
40
+ ## 📮 Notice
41
+ **Limitations:** These are the prototype models and the performance may not be perfectly align with the paper. Our methods are sometimes not robust to heavy degradations and very large motions, and shares some failure cases with existing methods, e.g., fail to fully remove the degradation or simply generate unpleasing details. Moreover, due to the strong generation ability, Our methods tend to overly generate details on inputs with very light degradations, e.g., 720p AIGC videos, leading to oversharpened results occasionally.
## ✍️ Citation

```bibtex
@article{wang2025seedvr2,
  title={SeedVR2: One-Step Video Restoration via Diffusion Adversarial Post-Training},
  author={Wang, Jianyi and Lin, Shanchuan and Lin, Zhijie and Ren, Yuxi and Wei, Meng and Yue, Zongsheng and Zhou, Shangchen and Chen, Hao and Zhao, Yang and Yang, Ceyuan and Xiao, Xuefeng and Loy, Chen Change and Jiang, Lu},
  journal={arXiv preprint arXiv:2506.05301},
  year={2025}
}

@inproceedings{wang2025seedvr,
  title={SeedVR: Seeding Infinity in Diffusion Transformer Towards Generic Video Restoration},
  author={Wang, Jianyi and Lin, Zhijie and Wei, Meng and Zhao, Yang and Yang, Ceyuan and Loy, Chen Change and Jiang, Lu},
  booktitle={CVPR},
  year={2025}
}
```
## 📜 License
SeedVR and SeedVR2 are licensed under the Apache License 2.0.
assets/seedvr_logo.png ADDED

Git LFS Details

  • SHA256: 9a08170e6ce79f87ad524d4ad5c083c8f1766f245d9dfea6b54fde5dca00f4a2
  • Pointer size: 131 Bytes
  • Size of remote file: 168 kB
assets/teaser.png ADDED

Git LFS Details

  • SHA256: 1aa53097c719d208642536f4684a174a260ab63ed4dda6515d83a2cb6b4f76ff
  • Pointer size: 132 Bytes
  • Size of remote file: 1.34 MB
ema_vae.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c7df8a67e68b7f9aca3d5d2153d2ce8ab4373687741a0f9ce87cb356ace51cac
+ size 1002691902
seedvr2_ema_7b.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e1b2ae25505607e61f2a7dc7967ba778aaf3e3626d9969ce6e24c52d9ddebfcd
+ size 32958774606
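The SHA-256 digests in the LFS pointers above can be used to verify downloaded checkpoints. A minimal sketch — the local file paths are assumptions, and the digests are copied from the pointer files in this commit:

```python
import hashlib

# SHA-256 digests taken from the Git LFS pointer files above.
EXPECTED = {
    "ema_vae.pth": "c7df8a67e68b7f9aca3d5d2153d2ce8ab4373687741a0f9ce87cb356ace51cac",
    "seedvr2_ema_7b.pth": "e1b2ae25505607e61f2a7dc7967ba778aaf3e3626d9969ce6e24c52d9ddebfcd",
}

def sha256_of(path: str, chunk: int = 1 << 20) -> str:
    """Hash the file in 1 MiB chunks; the 7B checkpoint is ~33 GB, so
    streaming avoids loading it into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

def verify(path: str, name: str) -> bool:
    """Compare a local file against the expected digest for `name`."""
    return sha256_of(path) == EXPECTED[name]
```

For example, `verify("seedvr2_ema_7b.pth", "seedvr2_ema_7b.pth")` should return `True` for an uncorrupted download.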