---
license: cc-by-nc-4.0
task_categories:
  - image-classification
  - text-to-image
library_name: datasets
tags:
  - ai-generated-content
  - image-quality-assessment
  - real-vs-fake
  - multimodal
---

# DANI: Discrepancy Assessing for Natural and AI Images

**Paper:** D-Judge: How Far Are We? Assessing the Discrepancies Between AI-synthesized Images and Natural Images through Multimodal Guidance

**Code:** https://github.com/RenyangLiu/DJudge

*A Large-Scale Dataset for Visual Research on AI-Synthesized and Natural Images*

## Overview

DANI (Discrepancy Assessing for Natural and AI Images) is a large-scale, multimodal dataset for benchmarking and broad visual research on both AI-generated images (AIGIs) and natural images.
The dataset is designed to support a wide range of computer vision and multimodal research tasks, including but not limited to:

- AI-generated vs. real image discrimination
- Representation learning
- Image quality assessment
- Style transfer
- Image reconstruction
- Domain adaptation
- Multimodal understanding and beyond

DANI accompanies the paper:

> Liu, Renyang; Lyu, Ziyu; Zhou, Wei; Ng, See-Kiong.
> *D-Judge: How Far Are We? Assessing the Discrepancies Between AI-synthesized Images and Natural Images through Multimodal Guidance.*
> ACM International Conference on Multimedia (MM), 2025.

## Dataset Summary

DANI contains over 445,000 images: 5,000 natural images (from COCO, at resolutions 224, 256, 512, and 1024) and more than 440,000 AI-generated images produced by diverse state-of-the-art generative models.
Each sample is annotated with detailed metadata, enabling comprehensive evaluation and flexible use across a broad range of visual and multimodal research. The dataset spans the following models and generation protocols:

- **Models:** GALIP, DFGAN, SD_V14, SD_V15, Versatile Diffusion (VD), SD_V21, SD_XL, Dalle2, Dalle3, and COCO (real images)
- **Image Sizes:** 224, 256, 512, 768, 1024
- **Generation Types:** Text-to-Image (T2I), Image-to-Image (I2I), Text-and-Image-to-Image (TI2I)
- **Categories:** indoor, outdoor, etc.
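
As a quick sanity check on these configurations, the sketch below tallies how samples are distributed over the metadata fields (it assumes the column names `model`, `gen_type`, and `size` listed in the Data Fields section):

```python
from collections import Counter

from datasets import load_dataset

ds = load_dataset("Renyang/DANI", split="train")

# Column-wise access reads only the metadata, so no images are decoded.
print(Counter(ds["model"]))     # images per generative model (plus COCO)
print(Counter(ds["gen_type"]))  # T2I vs. I2I vs. TI2I
print(Counter(ds["size"]))      # resolution distribution
```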

## Data Fields

Each sample in the dataset contains the following fields:

| Field | Description |
|-------|-------------|
| `index` | Unique index for each image |
| `image` | The image itself (stored as an image file, not just a path) |
| `size` | Image resolution (e.g., 224, 256, 512, 768, 1024) |
| `category` | Scene category (e.g., indoor, outdoor) |
| `class_id` | COCO class or semantic category ID/name |
| `model` | Generative model used (GALIP, DFGAN, SD_V14, SD_V15, VD, etc.) |
| `gen_type` | Generation method (T2I, I2I, TI2I) |
| `reference` | Whether the image is natural (`True` for real, `False` for generated) |

**Notes:**

- COCO images have `reference=True` and may appear at multiple resolutions.
- For AI-generated images, the `model` and `gen_type` fields indicate the specific generative model and generation protocol (T2I, I2I, or TI2I) used for each sample.
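
For real-vs-fake discrimination, the `reference` flag gives a direct split. A minimal sketch, assuming `reference` is stored as a boolean as described above:

```python
from datasets import load_dataset

ds = load_dataset("Renyang/DANI", split="train")

# reference=True -> natural COCO image, reference=False -> AI-generated
real = ds.filter(lambda ex: ex["reference"])
fake = ds.filter(lambda ex: not ex["reference"])
print(f"{len(real)} natural images, {len(fake)} generated images")
```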

## Model/Generation Configurations

The dataset covers the following models and settings:

| Model | Image Size | Generation Types Supported |
|-------|------------|----------------------------|
| GALIP | 224 | T2I |
| DFGAN | 256 | T2I |
| SD_V14 | 512 | T2I, I2I, TI2I |
| SD_V15 | 512 | T2I, I2I, TI2I |
| VD | 512 | T2I, I2I, TI2I |
| SD_V21 | 768 | T2I, I2I, TI2I |
| SD_XL | 1024 | T2I, I2I, TI2I |
| Dalle2 | 512 | T2I, I2I |
| Dalle3 | 1024 | T2I |
| COCO | 224, 256, 512, 1024 | Reference/Real Images |

For each generation type (T2I, I2I, TI2I), a diverse set of models is covered.
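
To work with a single configuration, filter on `model` and `gen_type`. A sketch, assuming the field values match the labels in the table above (e.g. `"SD_XL"` and `"T2I"`):

```python
from datasets import load_dataset

ds = load_dataset("Renyang/DANI", split="train")

# Keep only SD_XL text-to-image samples (1024x1024 per the table above).
sdxl_t2i = ds.filter(
    lambda ex: ex["model"] == "SD_XL" and ex["gen_type"] == "T2I"
)
print(len(sdxl_t2i))
```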

## Usage

You can load DANI directly using the 🤗 datasets library:

```python
from datasets import load_dataset

ds = load_dataset("Renyang/DANI")
print(ds)
# DatasetDict({
#     train: Dataset({
#         features: ['index', 'image', 'size', 'category', 'class_id', 'model', 'gen_type', 'reference'],
#         num_rows: 540257
#     })
# })

# Access images and metadata
img = ds["train"][0]["image"]
meta = {k: ds["train"][0][k] for k in ds["train"].column_names if k != "image"}
```

**Note:** Images are decoded as PIL `Image` objects; call `.convert("RGB")` if you need a consistent color mode.
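
Since the full dataset is large, streaming mode can be useful for quick inspection without downloading everything up front; a minimal sketch:

```python
from datasets import load_dataset

# streaming=True iterates over samples without a full download
stream = load_dataset("Renyang/DANI", split="train", streaming=True)

for ex in stream:
    img = ex["image"].convert("RGB")  # normalize color mode
    print(ex["model"], ex["gen_type"], img.size)
    break  # inspect just the first sample
```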

## Citation

If you use this dataset or the associated benchmark, please cite:

```bibtex
@inproceedings{liu2024djudge,
  title = {D-Judge: How Far Are We? Assessing the Discrepancies Between AI-synthesized Images and Natural Images through Multimodal Guidance},
  author = {Liu, Renyang and Lyu, Ziyu and Zhou, Wei and Ng, See-Kiong},
  booktitle = {ACM International Conference on Multimedia (MM)},
  organization = {ACM},
  year = {2025},
}
```

## License

This dataset is released under the CC BY-NC 4.0 license (for non-commercial research use).

## Contact

For questions or collaborations, please visit Renyang Liu's homepage.