---
dataset_info:
  features:
  - name: id
    dtype: string
  - name: split
    dtype: string
  - name: image
    dtype: image
  - name: seg
    dtype: image
  - name: num_classes
    dtype: int64
  - name: inds
    sequence:
      sequence: int64
  - name: ids
    sequence: string
  - name: classes
    sequence: string
  - name: sizes
    sequence:
      sequence: float64
  - name: centers
    sequence:
      sequence:
        sequence: float64
  - name: occlusions
    sequence:
      sequence: int64
  - name: area
    sequence:
      sequence: int64
  - name: unoccluded_area
    sequence:
      sequence: int64
  - name: bboxes
    sequence:
      sequence:
        sequence:
          sequence: int64
  - name: centers_crop672
    sequence:
      sequence:
        sequence: float64
  - name: occlusions_crop672
    sequence:
      sequence: int64
  - name: area_crop672
    sequence:
      sequence: int64
  - name: bboxes_crop672
    sequence:
      sequence:
        sequence:
          sequence: int64
  splits:
  - name: val
    num_bytes: 2755568627.75
    num_examples: 2413
  - name: test
    num_bytes: 2361310977.25
    num_examples: 2115
  - name: train
    num_bytes: 5265670004.75
    num_examples: 4757
  download_size: 10255201099
  dataset_size: 10382549609.75
configs:
- config_name: default
  data_files:
  - split: val
    path: data/val-*
  - split: test
    path: data/test-*
  - split: train
    path: data/train-*
license: mit
---

# Multi-Class Class-Agnostic Counting Dataset

**[Project Page](https://MCAC.active.vision/) | [ArXiv](https://arxiv.org/abs/2309.04820) | [Download](https://www.robots.ox.ac.uk/~lav/Datasets/MCAC/MCAC.zip)**

[Michael Hobley](https://scholar.google.co.uk/citations?user=2EftbyIAAAAJ&hl=en), [Victor Adrian Prisacariu](http://www.robots.ox.ac.uk/~victor/).

[Active Vision Lab (AVL)](https://www.robots.ox.ac.uk/~lav/), University of Oxford.

MCAC is the first multi-class class-agnostic counting dataset. Each image contains between 1 and 4 classes of object and between 1 and 300 objects per class. The classes of objects present in the train, test, and val splits are mutually exclusive and, where possible, aligned with the class splits in [FSC-133](https://github.com/ActiveVisionLab/LearningToCountAnything).
Each object is labeled with an instance, class, and model number, as well as its center coordinate, bounding box coordinates, and its percentage occlusion. Models are taken from [ShapeNetSem]. The original model IDs and manually verified category labels are preserved.

MCAC-M1 is the subset of MCAC containing only single-class images. It is useful when comparing against methods that are not suited to multi-class cases.

## File Hierarchy

```
├── dataset_pytorch.py
├── make_gaussian_maps.py
├── test
├── train
│   ├── 1511489148409439
│   ├── 3527550462177290
│   │   ├── img.png
│   │   ├── info.json
│   │   └── seg.png
│   ├── 4109417696451021
│   └── ...
└── val
```

## Precompute Density Maps

To precompute ground-truth density maps for other resolutions, occlusion percentages, and Gaussian standard deviations, use the code from our [GitHub](https://github.com/ActiveVisionLab/MCAC):

```sh
cd PATH/TO/MCAC/
python make_gaussian_maps.py --occulsion_limit --crop_size 672 --img_size --gauss_constant ;
```

## PyTorch Dataset

A PyTorch dataset is provided on our [GitHub](https://github.com/ActiveVisionLab/MCAC). It randomises the bounding boxes during training but uses consistent bounding boxes for testing.

## Citation

```
@article{hobley2023abc,
  title={ABC Easy as 123: A Blind Counter for Exemplar-Free Multi-Class Class-agnostic Counting},
  author={Michael A. Hobley and Victor A. Prisacariu},
  journal={arXiv preprint arXiv:2309.04820},
  year={2023},
}
```
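For readers unfamiliar with density-map supervision: `make_gaussian_maps.py` precomputes maps in which each object contributes a 2-D Gaussian centred on its coordinate and normalised to sum to 1, so the whole map integrates to the object count. The sketch below illustrates that idea only; it is not the repository's implementation, and the function and parameter names (`gaussian_density_map`, `sigma`) are hypothetical.

```python
import math

def gaussian_density_map(centers, height, width, sigma=8.0):
    """Build a density map from object centre coordinates.

    Each object contributes a 2-D Gaussian normalised to sum to 1,
    so the finished map sums to the number of objects.
    """
    density = [[0.0] * width for _ in range(height)]
    for cx, cy in centers:
        # Unnormalised Gaussian blob for this object.
        blob = [[math.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2.0 * sigma ** 2))
                 for x in range(width)] for y in range(height)]
        total = sum(sum(row) for row in blob)
        # Normalise so this object's contribution sums to exactly 1.
        for y in range(height):
            for x in range(width):
                density[y][x] += blob[y][x] / total
    return density

# Two object centres -> the map sums to 2.0 (the count).
dm = gaussian_density_map([(20.0, 30.0), (60.0, 40.0)], height=96, width=96)
print(round(sum(sum(row) for row in dm), 4))
```

Because each blob is renormalised after truncation at the image border, predicted counts can be recovered from a model's output map by simple summation.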