---
license: mit
dataset_info:
  features:
    - name: images
      dtype: image
    - name: ord_labels
      dtype:
        class_label:
          names:
            '0': airplane
            '1': automobile
            '2': bird
            '3': cat
            '4': deer
            '5': dog
            '6': frog
            '7': horse
            '8': ship
            '9': truck
    - name: cl_labels
      sequence:
        class_label:
          names:
            '0': airplane
            '1': automobile
            '2': bird
            '3': cat
            '4': deer
            '5': dog
            '6': frog
            '7': horse
            '8': ship
            '9': truck
  splits:
    - name: train
      num_bytes: 115048310
      num_examples: 50000
  download_size: 117804187
  dataset_size: 115048310
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
---

# Dataset Card for CLCIFAR10

This complementary-labeled CIFAR10 dataset contains three human-annotated complementary labels for each of the 50,000 images in the CIFAR10 training split. Annotations were collected from workers on Amazon Mechanical Turk: for each image, we randomly sampled four candidate labels for each of three annotators, so every image has three (possibly repeated) complementary labels.

For more details, please visit our GitHub repository or paper.

## Dataset Structure

### Data Instances

A sample from the training set is provided below:

```
{
    'images': <PIL.PngImagePlugin.PngImageFile image mode=RGB size=32x32 at 0x799538D3D5A0>,
    'ord_labels': 6,
    'cl_labels': [3, 9, 6]
}
```

### Data Fields

- `images`: A `PIL.Image.Image` object containing the 32×32 image.

- `ord_labels`: The ordinary (true) label of the image, an integer from 0 to 9:

  0: airplane, 1: automobile, 2: bird, 3: cat, 4: deer, 5: dog, 6: frog, 7: horse, 8: ship, 9: truck

- `cl_labels`: Three complementary labels for each image, one from each of three different workers.
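Using these fields, the sample instance shown above can be decoded into human-readable class names. A minimal sketch (the class-name list is taken from the dataset metadata above; the image field is omitted for brevity):

```python
# CIFAR10 class names, in the order given by the dataset metadata.
CLASS_NAMES = [
    "airplane", "automobile", "bird", "cat", "deer",
    "dog", "frog", "horse", "ship", "truck",
]

def decode_example(example):
    """Map the integer labels in a CLCIFAR10 example to class-name strings."""
    return {
        "ord_label": CLASS_NAMES[example["ord_labels"]],
        "cl_labels": [CLASS_NAMES[c] for c in example["cl_labels"]],
    }

# The sample instance from this card (image field omitted here).
sample = {"ord_labels": 6, "cl_labels": [3, 9, 6]}
decoded = decode_example(sample)
print(decoded)  # {'ord_label': 'frog', 'cl_labels': ['cat', 'truck', 'frog']}
```

When loading the data with the `datasets` library, the same decoding can be applied to every example with `dataset.map(decode_example)`.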

## Annotation Task Design and Deployment on Amazon MTurk

To collect human-annotated labels, we used Amazon Mechanical Turk (MTurk) to deploy our annotation task. The layout and interface design for the MTurk task can be found in the file design-layout-mturk.html.

In each task, a single image was enlarged to 200×200 pixels for clarity and presented alongside the question: "Choose any one 'incorrect' label for this image." Annotators were given four candidate labels to choose from (e.g., dog, cat, ship, bird) and were instructed to select one that does not correctly describe the image.
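The sampling protocol above can be illustrated with a small simulation. This is only a sketch assuming an idealized annotator who always answers correctly; real MTurk workers make mistakes, which is why the collected `cl_labels` can be noisy:

```python
import random

NUM_CLASSES = 10  # CIFAR10

def annotate_complementary(true_label, rng):
    """Simulate one annotation: draw 4 distinct candidate labels uniformly
    at random, then have an idealized annotator pick one candidate that is
    not the true label."""
    candidates = rng.sample(range(NUM_CLASSES), 4)
    # At most one candidate can equal the true label, so at least three
    # incorrect options always remain.
    incorrect = [c for c in candidates if c != true_label]
    return rng.choice(incorrect)

rng = random.Random(0)
true_label = 6  # frog
# Three annotators per image, as in the dataset.
cl_labels = [annotate_complementary(true_label, rng) for _ in range(3)]
print(cl_labels)  # three labels, none equal to the true label; repeats possible
```

Because each annotator samples candidates independently, the three complementary labels for an image may repeat, matching the "(possibly repeated)" note above.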

## Citing

If you find this dataset useful, please cite the following:

```bibtex
@article{wang2024climage,
  title={{CLI}mage: Human-Annotated Datasets for Complementary-Label Learning},
  author={Hsiu-Hsuan Wang and Mai Tan Ha and Nai-Xuan Ye and Wei-I Lin and Hsuan-Tien Lin},
  journal={Transactions on Machine Learning Research},
  issn={2835-8856},
  year={2025}
}
```