syedaoon committed on
Commit 97d5ae0 · verified · 1 Parent(s): 97f8e75

Update README.md

Files changed (1)
  1. README.md +1 -47
README.md CHANGED
@@ -1,48 +1,2 @@
- # ZERO-IG
-
- ### Zero-Shot Illumination-Guided Joint Denoising and Adaptive Enhancement for Low-Light Images [CVPR 2024]
-
- By Yiqi Shi, Duo Liu, Liguo Zhang, Ye Tian, Xuezhi Xia, Xiaojing Fu
-
- [[Paper]](https://openaccess.thecvf.com/content/CVPR2024/papers/Shi_ZERO-IG_Zero-Shot_Illumination-Guided_Joint_Denoising_and_Adaptive_Enhancement_for_Low-Light_CVPR_2024_paper.pdf) [[Supplement Material]](https://openaccess.thecvf.com/content/CVPR2024/supplemental/Shi_ZERO-IG_Zero-Shot_Illumination-Guided_CVPR_2024_supplemental.pdf)
-
- ## Zero-IG Framework
-
- <img src="Figs/Fig3.png" width="900px"/>
- <p style="text-align:justify">Note that the model weights provided with this code are not the ones used to generate the results reported in the paper.</p>
-
- ## Model Training Configuration
- * To train a new model, specify the dataset path in "train.py" and execute it. The trained model will be stored in the 'weights' folder, while intermediate visualization outputs will be saved in the 'results' folder.
- * We provide some pretrained model parameters, but we recommend training on a single image for better results.
-
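As a minimal sketch of the output layout described above (the 'weights' and 'results' folder names come from the README; the helper itself is illustrative, not part of the repo):

```python
from pathlib import Path

def prepare_output_dirs(root="."):
    """Create the 'weights' and 'results' folders that train.py writes to."""
    weights = Path(root) / "weights"   # trained model checkpoints
    results = Path(root) / "results"   # intermediate visualization outputs
    weights.mkdir(parents=True, exist_ok=True)
    results.mkdir(parents=True, exist_ok=True)
    return weights, results
```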
- ## Requirements
- * Python 3.7
- * PyTorch 1.13.0
- * CUDA 11.7
- * Torchvision 0.14.1
-
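The pins above can be sanity-checked with a small stdlib-only helper. Treating the listed versions as minimums is an assumption on our part, not something the README states:

```python
def version_tuple(v):
    """Parse a version string like '1.13.0' or '1.13.1+cu117' into an int tuple."""
    numeric = v.split("+")[0]  # drop local build tags such as '+cu117'
    parts = []
    for piece in numeric.split("."):
        digits = "".join(ch for ch in piece if ch.isdigit())
        parts.append(int(digits) if digits else 0)
    return tuple(parts)

def meets_minimum(installed, required):
    """True if the installed version is at least the required one."""
    return version_tuple(installed) >= version_tuple(required)

# Pins from the README; comparing e.g. torch.__version__ against them is left to the user.
REQUIRED = {"python": "3.7", "torch": "1.13.0", "torchvision": "0.14.1"}
```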
- ## Testing
- * Ensure the data is prepared and placed in the designated folder.
- * Select the model to test with; this can be a model you trained yourself.
- * Execute "test.py" to perform the testing.
-
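A quick sanity check on the designated data folder might look like this; the extension list and the flat single-folder layout are assumptions, so adjust them to the actual repo structure:

```python
from pathlib import Path

IMAGE_EXTS = {".png", ".jpg", ".jpeg", ".bmp"}

def list_test_images(data_dir):
    """Return the sorted image files in the test-data folder, or raise if it is missing."""
    folder = Path(data_dir)
    if not folder.is_dir():
        raise FileNotFoundError(f"test data folder not found: {folder}")
    return sorted(p for p in folder.iterdir() if p.suffix.lower() in IMAGE_EXTS)
```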
- ## [VILNC Dataset](https://pan.baidu.com/s/1-Uw78IxlVAVY_hqRRS9BGg?pwd=4e5c)
-
- The Varied Indoor Luminance & Nightscapes Collection (VILNC) is a curated set of 500 real-world low-light images captured with a Canon EOS 550D camera. The dataset covers two environments: 460 indoor scenes and 40 outdoor landscapes. Each indoor scene is represented by three images at distinct levels of dim luminance, together with a reference image captured under normal lighting. Each outdoor low-light photograph is likewise paired with its normal-light reference, providing a comprehensive resource for analyzing and enhancing low-light imaging techniques.
-
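Loading the low-light/reference pairs described above could be sketched as follows. Matching by identical filename across two folders is a guess about the layout, so check the actual VILNC structure after downloading:

```python
from pathlib import Path

def pair_low_normal(low_dir, normal_dir):
    """Match each low-light image to the reference image with the same filename."""
    low_dir, normal_dir = Path(low_dir), Path(normal_dir)
    pairs = []
    for low in sorted(low_dir.iterdir()):
        ref = normal_dir / low.name
        if low.is_file() and ref.is_file():
            pairs.append((low, ref))
    return pairs
```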
- <img src="Figs/Dataset.png" width="900px"/>
-
- ## Citation
- ```bibtex
- @inproceedings{shi2024zero,
-   title={ZERO-IG: Zero-Shot Illumination-Guided Joint Denoising and Adaptive Enhancement for Low-Light Images},
-   author={Shi, Yiqi and Liu, Duo and Zhang, Liguo and Tian, Ye and Xia, Xuezhi and Fu, Xiaojing},
-   booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
-   pages={3015--3024},
-   year={2024}
- }
- ```
 
+ ZERO-IG