
# Datasets for Direct Preference for Denoising Diffusion Policy Optimization (D3PO)

Description: This repository contains the datasets for the D3PO method introduced in the paper *Using Human Feedback to Fine-tune Diffusion Models without Any Reward Model*. The `d3po_dataset` directory pertains to the image-distortion experiment with the anything-v5 model. The `text2img_dataset` directory contains the images generated by the pretrained, preferred-image fine-tuned, reward-weighted fine-tuned, and D3PO fine-tuned models in the prompt-image alignment experiment.

Source Code: The code used to generate this data can be found here.

## Directory

- `d3po_dataset`
  - `epoch1`
    - `all_img`
      - `*.png`
    - `deformed_img`
      - `*.png`
    - `json`
      - `data.json` (required for training; see the loading sketch after this tree)
    - `prompt.json`
    - `sample.pkl` (required for training)
  - `epoch2`
  - ...
  - `epoch5`
- `text2img_dataset`
  - `img`
  - `data_*.json`
  - `plot.ipynb`
  - `prompt.txt`
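
The two files marked as required for training can be read with the Python standard library. Below is a minimal loading sketch, assuming `sample.pkl` is an ordinary pickle file and `data.json` / `data_*.json` are ordinary JSON files; this README does not document their schemas, so the resulting objects' field names are defined by the D3PO training code, not shown here.

```python
import glob
import json
import pickle
from pathlib import Path

# One training epoch of the image-distortion experiment (see the tree above).
epoch_dir = Path("d3po_dataset/epoch1")

# data.json and sample.pkl are the files marked "required for training".
# Assumption: plain JSON and pickle; their exact schemas are defined by the
# D3PO training code, not by this README.
with open(epoch_dir / "json" / "data.json") as f:
    data = json.load(f)
with open(epoch_dir / "sample.pkl", "rb") as f:
    sample = pickle.load(f)

# Prompt-image alignment results: one data_*.json per model variant
# (an assumption based on the file names under text2img_dataset).
for path in sorted(glob.glob("text2img_dataset/data_*.json")):
    with open(path) as f:
        results = json.load(f)
        print(path, type(results))
```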

## Citation

```bibtex
@article{yang2023using,
  title={Using Human Feedback to Fine-tune Diffusion Models without Any Reward Model},
  author={Yang, Kai and Tao, Jian and Lyu, Jiafei and Ge, Chunjiang and Chen, Jiaxin and Li, Qimai and Shen, Weihan and Zhu, Xiaolong and Li, Xiu},
  journal={arXiv preprint arXiv:2311.13231},
  year={2023}
}
```