neogpx committed on
Commit f737aaf · verified · 1 Parent(s): ca8354a

dataset uploaded by roboflow2huggingface package

README.dataset.txt ADDED
@@ -0,0 +1,6 @@
+ # Construction Vehicle Detection > 2023-06-08 10:52pm
+ https://universe.roboflow.com/capstone-lkzgq/construction-vehicle-detection-pxc7c
+
+ Provided by a Roboflow user
+ License: CC BY 4.0
+
README.md ADDED
@@ -0,0 +1,100 @@
+ ---
+ task_categories:
+ - object-detection
+ tags:
+ - roboflow
+ - roboflow2huggingface
+
+ ---
+
+ <div align="center">
+ <img width="640" alt="neogpx/constructionxc7c" src="https://huggingface.co/datasets/neogpx/constructionxc7c/resolve/main/thumbnail.jpg">
+ </div>
+
+ ### Dataset Labels
+
+ ```
+ ['bulldozer', 'dump truck', 'excavator', 'grader', 'loader', 'mixer truck', 'mobile crane', 'roller']
+ ```
+
+
+ ### Number of Images
+
+ ```json
+ {"valid": 1524, "test": 757, "train": 16002}
+ ```
+
+
+ ### How to Use
+
+ - Install [datasets](https://pypi.org/project/datasets/):
+
+ ```bash
+ pip install datasets
+ ```
+
+ - Load the dataset:
+
+ ```python
+ from datasets import load_dataset
+
+ ds = load_dataset("neogpx/constructionxc7c", name="full")
+ example = ds['train'][0]
+ ```
+
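+ Each record pairs a PIL image with an `objects` field of COCO-style boxes and class indices (the exact schema is defined in the loader script below). A minimal sketch of reading them back out; note that newer `datasets` releases may require `trust_remote_code=True` or may no longer support script-based datasets at all:
+
+ ```python
+ # Assumes `ds` from the snippet above.
+ labels = ds["train"].features["objects"].feature["category"].names
+
+ example = ds["train"][0]
+ print(example["image"].size)  # PIL image; 640x640 after the export-time resize
+ for bbox, cat in zip(example["objects"]["bbox"], example["objects"]["category"]):
+     # bbox follows the COCO convention: [x_min, y_min, width, height] in pixels
+     print(labels[cat], bbox)
+ ```
+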
+ ### Roboflow Dataset Page
+ [https://universe.roboflow.com/capstone-lkzgq/construction-vehicle-detection-pxc7c/dataset/2](https://universe.roboflow.com/capstone-lkzgq/construction-vehicle-detection-pxc7c/dataset/2?ref=roboflow2huggingface)
+
+ ### Citation
+
+ ```
+ @misc{
+     construction-vehicle-detection-pxc7c_dataset,
+     title = { Construction Vehicle Detection Dataset },
+     type = { Open Source Dataset },
+     author = { Capstone },
+     howpublished = { \url{ https://universe.roboflow.com/capstone-lkzgq/construction-vehicle-detection-pxc7c } },
+     url = { https://universe.roboflow.com/capstone-lkzgq/construction-vehicle-detection-pxc7c },
+     journal = { Roboflow Universe },
+     publisher = { Roboflow },
+     year = { 2023 },
+     month = { aug },
+     note = { visited on 2025-02-11 },
+ }
+ ```
+
+ ### License
+ CC BY 4.0
+
+ ### Dataset Summary
+ This dataset was exported via roboflow.com on July 17, 2023 at 7:32 AM GMT
+
+ Roboflow is an end-to-end computer vision platform that helps you
+ * collaborate with your team on computer vision projects
+ * collect & organize images
+ * understand and search unstructured image data
+ * annotate images and create datasets
+ * export, train, and deploy computer vision models
+ * use active learning to improve your dataset over time
+
+ For state-of-the-art computer vision training notebooks you can use with this dataset,
+ visit https://github.com/roboflow/notebooks
+
+ To find over 100k other datasets and pre-trained models, visit https://universe.roboflow.com
+
+ The dataset includes 18283 images.
+ Construction-utility-vehicles are annotated in COCO format.
+
+ The following pre-processing was applied to each image:
+ * Auto-orientation of pixel data (with EXIF-orientation stripping)
+ * Resize to 640x640 (Stretch)
+
+ The following augmentation was applied to create 3 versions of each source image (an approximate re-creation is sketched below):
+ * Randomly crop between 0 and 20 percent of the image
+ * Random rotation of between -20 and +20 degrees
+ * Salt and pepper noise was applied to 5 percent of pixels
+
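+ A rough, image-only re-creation of that augmentation recipe is sketched below. It assumes PIL and NumPy, applies the crop and rotation about the image center, and does not adjust the bounding boxes, so it is for illustration rather than a drop-in training transform:
+
+ ```python
+ import random
+
+ import numpy as np
+ from PIL import Image
+
+
+ def roboflow_style_augment(img: Image.Image) -> Image.Image:
+     """Approximate the export-time augmentations on a single image."""
+     img = img.convert("RGB")
+
+     # Randomly crop away up to 20% of the image, then stretch back to 640x640.
+     w, h = img.size
+     frac = random.uniform(0.0, 0.2)
+     dx, dy = int(w * frac / 2), int(h * frac / 2)
+     img = img.crop((dx, dy, w - dx, h - dy)).resize((640, 640))
+
+     # Random rotation between -20 and +20 degrees (empty corners filled with black).
+     img = img.rotate(random.uniform(-20.0, 20.0))
+
+     # Salt-and-pepper noise on roughly 5% of pixels.
+     arr = np.asarray(img).copy()
+     mask = np.random.rand(arr.shape[0], arr.shape[1]) < 0.05
+     arr[mask] = np.where(np.random.rand(mask.sum(), 1) < 0.5, 0, 255)
+     return Image.fromarray(arr)
+ ```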
README.roboflow.txt ADDED
@@ -0,0 +1,32 @@
+
+ Construction Vehicle Detection - v2 2023-06-08 10:52pm
+ ==============================
+
+ This dataset was exported via roboflow.com on July 17, 2023 at 7:32 AM GMT
+
+ Roboflow is an end-to-end computer vision platform that helps you
+ * collaborate with your team on computer vision projects
+ * collect & organize images
+ * understand and search unstructured image data
+ * annotate images and create datasets
+ * export, train, and deploy computer vision models
+ * use active learning to improve your dataset over time
+
+ For state-of-the-art computer vision training notebooks you can use with this dataset,
+ visit https://github.com/roboflow/notebooks
+
+ To find over 100k other datasets and pre-trained models, visit https://universe.roboflow.com
+
+ The dataset includes 18283 images.
+ Construction-utility-vehicles are annotated in COCO format.
+
+ The following pre-processing was applied to each image:
+ * Auto-orientation of pixel data (with EXIF-orientation stripping)
+ * Resize to 640x640 (Stretch)
+
+ The following augmentation was applied to create 3 versions of each source image:
+ * Randomly crop between 0 and 20 percent of the image
+ * Random rotation of between -20 and +20 degrees
+ * Salt and pepper noise was applied to 5 percent of pixels
+
constructionxc7c.py ADDED
@@ -0,0 +1,154 @@
+ import collections
+ import json
+ import os
+
+ import datasets
+
+
+ _HOMEPAGE = "https://universe.roboflow.com/capstone-lkzgq/construction-vehicle-detection-pxc7c/dataset/2"
+ _LICENSE = "CC BY 4.0"
+ _CITATION = """\
+ @misc{
+     construction-vehicle-detection-pxc7c_dataset,
+     title = { Construction Vehicle Detection Dataset },
+     type = { Open Source Dataset },
+     author = { Capstone },
+     howpublished = { \\url{ https://universe.roboflow.com/capstone-lkzgq/construction-vehicle-detection-pxc7c } },
+     url = { https://universe.roboflow.com/capstone-lkzgq/construction-vehicle-detection-pxc7c },
+     journal = { Roboflow Universe },
+     publisher = { Roboflow },
+     year = { 2023 },
+     month = { aug },
+     note = { visited on 2025-02-11 },
+ }
+ """
+ _CATEGORIES = ['bulldozer', 'dump truck', 'excavator', 'grader', 'loader', 'mixer truck', 'mobile crane', 'roller']
+ _ANNOTATION_FILENAME = "_annotations.coco.json"
+
+
+ class CONSTRUCTIONXC7CConfig(datasets.BuilderConfig):
+     """Builder Config for constructionxc7c"""
+
+     def __init__(self, data_urls, **kwargs):
+         """
+         BuilderConfig for constructionxc7c.
+
+         Args:
+             data_urls: `dict`, name to url to download the zip file from.
+             **kwargs: keyword arguments forwarded to super.
+         """
+         super(CONSTRUCTIONXC7CConfig, self).__init__(version=datasets.Version("1.0.0"), **kwargs)
+         self.data_urls = data_urls
+
+
+ class CONSTRUCTIONXC7C(datasets.GeneratorBasedBuilder):
+     """constructionxc7c object detection dataset"""
+
+     VERSION = datasets.Version("1.0.0")
+     BUILDER_CONFIGS = [
+         CONSTRUCTIONXC7CConfig(
+             name="full",
+             description="Full version of constructionxc7c dataset.",
+             data_urls={
+                 "train": "https://huggingface.co/datasets/neogpx/constructionxc7c/resolve/main/data/train.zip",
+                 "validation": "https://huggingface.co/datasets/neogpx/constructionxc7c/resolve/main/data/valid.zip",
+                 "test": "https://huggingface.co/datasets/neogpx/constructionxc7c/resolve/main/data/test.zip",
+             },
+         ),
+         CONSTRUCTIONXC7CConfig(
+             name="mini",
+             description="Mini version of constructionxc7c dataset.",
+             data_urls={
+                 "train": "https://huggingface.co/datasets/neogpx/constructionxc7c/resolve/main/data/valid-mini.zip",
+                 "validation": "https://huggingface.co/datasets/neogpx/constructionxc7c/resolve/main/data/valid-mini.zip",
+                 "test": "https://huggingface.co/datasets/neogpx/constructionxc7c/resolve/main/data/valid-mini.zip",
+             },
+         )
+     ]
+
+     def _info(self):
+         features = datasets.Features(
+             {
+                 "image_id": datasets.Value("int64"),
+                 "image": datasets.Image(),
+                 "width": datasets.Value("int32"),
+                 "height": datasets.Value("int32"),
+                 "objects": datasets.Sequence(
+                     {
+                         "id": datasets.Value("int64"),
+                         "area": datasets.Value("int64"),
+                         "bbox": datasets.Sequence(datasets.Value("float32"), length=4),
+                         "category": datasets.ClassLabel(names=_CATEGORIES),
+                     }
+                 ),
+             }
+         )
+         return datasets.DatasetInfo(
+             features=features,
+             homepage=_HOMEPAGE,
+             citation=_CITATION,
+             license=_LICENSE,
+         )
+
+     def _split_generators(self, dl_manager):
+         data_files = dl_manager.download_and_extract(self.config.data_urls)
+         return [
+             datasets.SplitGenerator(
+                 name=datasets.Split.TRAIN,
+                 gen_kwargs={
+                     "folder_dir": data_files["train"],
+                 },
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.VALIDATION,
+                 gen_kwargs={
+                     "folder_dir": data_files["validation"],
+                 },
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.TEST,
+                 gen_kwargs={
+                     "folder_dir": data_files["test"],
+                 },
+             ),
+         ]
+
+     def _generate_examples(self, folder_dir):
+         def process_annot(annot, category_id_to_category):
+             # Keep the COCO fields as-is; only the numeric category_id is mapped to its name.
+             return {
+                 "id": annot["id"],
+                 "area": annot["area"],
+                 "bbox": annot["bbox"],
+                 "category": category_id_to_category[annot["category_id"]],
+             }
+
+         image_id_to_image = {}
+         idx = 0
+
+         # Index the COCO annotation file: category id -> name, image id -> annotations, file name -> image record.
+         annotation_filepath = os.path.join(folder_dir, _ANNOTATION_FILENAME)
+         with open(annotation_filepath, "r") as f:
+             annotations = json.load(f)
+         category_id_to_category = {category["id"]: category["name"] for category in annotations["categories"]}
+         image_id_to_annotations = collections.defaultdict(list)
+         for annot in annotations["annotations"]:
+             image_id_to_annotations[annot["image_id"]].append(annot)
+         filename_to_image = {image["file_name"]: image for image in annotations["images"]}
+
+         # Walk the extracted folder and yield one example per annotated image.
+         for filename in os.listdir(folder_dir):
+             filepath = os.path.join(folder_dir, filename)
+             if filename in filename_to_image:
+                 image = filename_to_image[filename]
+                 objects = [
+                     process_annot(annot, category_id_to_category) for annot in image_id_to_annotations[image["id"]]
+                 ]
+                 with open(filepath, "rb") as f:
+                     image_bytes = f.read()
+                 yield idx, {
+                     "image_id": image["id"],
+                     "image": {"path": filepath, "bytes": image_bytes},
+                     "width": image["width"],
+                     "height": image["height"],
+                     "objects": objects,
+                 }
+                 idx += 1
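+
+
+ # Illustrative smoke test (not part of the generated loader script): the "mini" config reuses
+ # valid-mini.zip for every split, so it exercises the full download/parse path without pulling
+ # the ~1.5 GB train archive. Assumes a `datasets` release that still supports loading scripts;
+ # recent versions require trust_remote_code=True and the newest may not support scripts at all.
+ if __name__ == "__main__":
+     from datasets import load_dataset
+
+     mini = load_dataset("neogpx/constructionxc7c", name="mini", trust_remote_code=True)
+     print(mini)
+     print(mini["train"].features["objects"].feature["category"].names)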
data/test.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b02bb832dd77f0fd0b0f44bce359ad61555492931015f236da52f01cbc08d161
+ size 43626301
data/train.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1bbfcc5c0e0cf4c38d1b8732b37d847607f2894e8ffb0683511b59d85dd9672f
+ size 1581527981
data/valid-mini.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f1e2c76e426d69a6b3067df526d638939464c94b50d93f1f13c78e7218ff513c
+ size 186038
data/valid.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:bb36786c6412a69755f18f0254ae04599867a9632a0ba465339411fc5cb63baa
+ size 88041968
split_name_to_num_samples.json ADDED
@@ -0,0 +1 @@
+ {"valid": 1524, "test": 757, "train": 16002}
thumbnail.jpg ADDED

Git LFS Details

  • SHA256: 69f09fd6ea24aee1fefc1ece238ac60d79bd885de0e9287747351eefd3831656
  • Pointer size: 131 Bytes
  • Size of remote file: 162 kB