---
license: mit
dataset_info:
  features:
  - name: images
    dtype: image
  - name: ord_labels
    dtype:
      class_label:
        names:
          '0': airplane
          '1': automobile
          '2': bird
          '3': cat
          '4': deer
          '5': dog
          '6': frog
          '7': horse
          '8': ship
          '9': truck
  - name: cl_labels
    sequence:
      class_label:
        names:
          '0': airplane
          '1': automobile
          '2': bird
          '3': cat
          '4': deer
          '5': dog
          '6': frog
          '7': horse
          '8': ship
          '9': truck
  splits:
  - name: train
    num_bytes: 115048310.0
    num_examples: 50000
  download_size: 117804187
  dataset_size: 115048310.0
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
## Dataset Card for CLCIFAR10
This complementary-labeled CIFAR10 dataset contains 3 human-annotated complementary labels for each of the 50,000 images in the training split of CIFAR10. The labels were collected from workers on [Amazon Mechanical Turk](https://www.mturk.com/). For each of the 3 annotators we randomly sampled 4 different candidate labels, so each image ends up with 3 (possibly repeated) complementary labels.
For more details, please visit our [GitHub repository](https://github.com/ntucllab/CLImage_Dataset) or read our [paper](https://arxiv.org/abs/2305.08295).
### Dataset Structure
#### Data Instances
A sample from the training set is provided below:
```
{
'images': <PIL.PngImagePlugin.PngImageFile image mode=RGB size=32x32 at 0x799538D3D5A0>,
'ord_labels': 6,
'cl_labels': [3, 9, 6]
}
```
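Such a sample can be obtained with the Hugging Face `datasets` library. The snippet below is a minimal loading sketch; `<repo-id>` is a placeholder for this dataset's Hub repository path and should be replaced with the actual id.
```
from datasets import load_dataset

# Minimal loading sketch; "<repo-id>" is a placeholder for this dataset's
# Hugging Face Hub repository path.
dataset = load_dataset("<repo-id>", split="train")

example = dataset[0]
print(example["ord_labels"])   # ordinary label, e.g. 6
print(example["cl_labels"])    # three complementary labels, e.g. [3, 9, 6]
print(example["images"].size)  # (32, 32) PIL image
```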
#### Data Fields
- `images`: A `PIL.Image.Image` object containing the 32x32 image.
- `ord_labels`: The ordinary label (i.e., the true class) of each image, encoded from 0 to 9 as follows:
  0: airplane, 1: automobile, 2: bird, 3: cat, 4: deer, 5: dog, 6: frog, 7: horse, 8: ship, 9: truck
- `cl_labels`: Three complementary labels for each image, one from each of three different workers. A complementary label marks a class that the image does *not* belong to; integer values can be decoded back to class names as illustrated in the sketch below.
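Because `ord_labels` and `cl_labels` are stored as `ClassLabel` features, their integer values can be mapped back to class names. A minimal sketch, continuing from the loading example above (the names `dataset` and `example` are assumed from that sketch):
```
# Map integer labels back to class names via the ClassLabel features.
# `dataset` and `example` come from the loading sketch above.
ord_name = dataset.features["ord_labels"].int2str(example["ord_labels"])
cl_names = [dataset.features["cl_labels"].feature.int2str(c)
            for c in example["cl_labels"]]
print(ord_name)   # e.g. 'frog'
print(cl_names)   # e.g. ['cat', 'truck', 'frog']
```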
## Annotation Task Design and Deployment on Amazon MTurk
To collect human-annotated labels, we used Amazon Mechanical Turk (MTurk) to deploy our annotation task. The layout and interface design for the MTurk task can be found in the file `design-layout-mturk.html`.
In each task, a single image was enlarged to 200 × 200 pixels for clarity and presented alongside the question `Choose any one "incorrect" label for this image?`. Annotators were given four candidate labels to choose from (e.g., `dog, cat, ship, bird`) and were instructed to select one that does not correctly describe the image.
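The following is a purely illustrative sketch of the collection protocol as described above, not the actual collection code. It assumes an idealized annotator who always picks an incorrect candidate; real MTurk workers can be noisy, which is why a complementary label occasionally coincides with the ordinary label in the released data (as in the sample instance shown earlier). See the paper for the exact procedure.
```
import random

CLASSES = ["airplane", "automobile", "bird", "cat", "deer",
           "dog", "frog", "horse", "ship", "truck"]

def simulate_annotation(true_label, num_annotators=3, num_candidates=4, rng=random):
    """Idealized simulation of the protocol described above: each annotator
    sees num_candidates randomly sampled labels and picks one that does not
    describe the image. Real annotators can be noisy."""
    cl_labels = []
    for _ in range(num_annotators):
        candidates = rng.sample(range(len(CLASSES)), num_candidates)
        # At most one candidate can equal the true label, so at least
        # three incorrect choices always remain.
        incorrect = [c for c in candidates if c != true_label]
        cl_labels.append(rng.choice(incorrect))
    return cl_labels

print(simulate_annotation(true_label=6))  # e.g. [3, 9, 0]
```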
## Citing
If you find this dataset useful, please cite the following:
```
@article{wang2024climage,
  title={{CLI}mage: Human-Annotated Datasets for Complementary-Label Learning},
  author={Hsiu-Hsuan Wang and Mai Tan Ha and Nai-Xuan Ye and Wei-I Lin and Hsuan-Tien Lin},
  journal={Transactions on Machine Learning Research},
  issn={2835-8856},
  year={2025}
}
```