    path: data/test-*
  - split: train
    path: data/train-*
license: mit
---

# Multi-Class Class-Agnostic Counting Dataset

**[Project Page](https://MCAC.active.vision/) |
[ArXiv](https://arxiv.org/abs/2309.04820) |
[Download](https://www.robots.ox.ac.uk/~lav/Datasets/MCAC/MCAC.zip)**

[Michael Hobley](https://scholar.google.co.uk/citations?user=2EftbyIAAAAJ&hl=en),
[Victor Adrian Prisacariu](http://www.robots.ox.ac.uk/~victor/).

[Active Vision Lab (AVL)](https://www.robots.ox.ac.uk/~lav/),
University of Oxford.

MCAC is the first multi-class class-agnostic counting dataset. Each image contains between 1 and 4 classes of
object and between 1 and 300 objects per class.
The classes of objects present in the Train, Test and Val splits are mutually exclusive and, where possible,
aligned with the class splits in [FSC-133](https://github.com/ActiveVisionLab/LearningToCountAnything).
Each object is labeled with an instance, class and model number, as well as its center coordinate, bounding-box
coordinates and its percentage occlusion.
Models are taken from ShapeNetSem. The original model IDs and manually
verified category labels are preserved.
MCAC-M1 is the subset of single-class images from MCAC. It is useful when comparing methods that are not suited to
multi-class cases.

## File Hierarchy

```
├── dataset_pytorch.py
├── make_gaussian_maps.py
├── test
├── train
│   ├── 1511489148409439
│   ├── 3527550462177290
│   │   ├── img.png
│   │   ├── info.json
│   │   └── seg.png
│   ├── 4109417696451021
│   └── ...
└── val
```

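Given the layout above, samples can be enumerated and loaded with a few lines of standard-library Python. This is a minimal sketch, not the loader shipped with the dataset; the exact keys inside `info.json` are not shown here, so treat its contents as an opaque dict:

```python
import json
import os


def list_samples(split_dir):
    """Return the per-image sample directories inside a split (e.g. train/)."""
    return sorted(
        os.path.join(split_dir, d)
        for d in os.listdir(split_dir)
        if os.path.isdir(os.path.join(split_dir, d))
    )


def load_sample(sample_dir):
    """Load one sample's annotations; img.png and seg.png sit alongside info.json."""
    with open(os.path.join(sample_dir, "info.json")) as f:
        info = json.load(f)
    return {
        "image_path": os.path.join(sample_dir, "img.png"),
        "seg_path": os.path.join(sample_dir, "seg.png"),
        "info": info,
    }
```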
## Precompute Density Maps
To precompute ground-truth density maps for other resolutions, occlusion percentages, and Gaussian standard deviations, use the code from our [GitHub](https://github.com/ActiveVisionLab/MCAC):

```sh
cd PATH/TO/MCAC/
python make_gaussian_maps.py --occulsion_limit <desired_max_occlusion> --crop_size 672 --img_size <desired_resolution> --gauss_constant <desired_gaussian_std>;
```
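The idea behind these maps: each object contributes a normalised 2-D Gaussian centred on its annotated centre coordinate, so summing the map recovers the object count. A dependency-free sketch of that construction (the real script additionally handles cropping, resizing, and occlusion filtering):

```python
import math


def gaussian_density_map(centers, height, width, sigma=4.0):
    """Sum one normalised 2-D Gaussian per object centre (cy, cx).

    The map's total approximates the object count, up to truncation
    at the image boundary and at the +/- 3 sigma window edge.
    """
    density = [[0.0] * width for _ in range(height)]
    norm = 1.0 / (2.0 * math.pi * sigma * sigma)  # 2-D Gaussian normaliser
    r = int(3 * sigma)  # accumulate only within +/- 3 sigma of each centre
    for cy, cx in centers:
        for y in range(max(0, int(cy) - r), min(height, int(cy) + r + 1)):
            for x in range(max(0, int(cx) - r), min(width, int(cx) + r + 1)):
                d2 = (y - cy) ** 2 + (x - cx) ** 2
                density[y][x] += norm * math.exp(-d2 / (2.0 * sigma * sigma))
    return density
```

Summing such a map (or the precomputed ones) gives the per-image count, which is how density-based counters are supervised.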
## PyTorch Dataset
There is a PyTorch dataset class on our [GitHub](https://github.com/ActiveVisionLab/MCAC).
It randomises the bounding boxes during training but uses consistent bounding boxes for testing.

## Citation
```
@article{hobley2023abc,
  title={ABC Easy as 123: A Blind Counter for Exemplar-Free Multi-Class Class-agnostic Counting},
  author={Michael A. Hobley and Victor A. Prisacariu},
  journal={arXiv preprint arXiv:2309.04820},
  year={2023},
}
```