iulia-elisa committed on
Commit 34d2a85 · verified · 1 Parent(s): 33871a9

Update README.md

Files changed (1):
  1. README.md +147 -3
README.md CHANGED
@@ -1,3 +1,147 @@
- ---
- license: mit
- ---

---
size_categories:
- 1K<N<10K
source_datasets:
- original
task_categories:
- image-segmentation
task_ids:
- instance-segmentation
pretty_name: XAMI-dataset
tags:
- COCO format
- Astronomy
- XMM-Newton
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/valid-*
dataset_info:
  features:
  - name: observation id
    dtype: string
  - name: segmentation
    dtype: image
  - name: bbox
    dtype: image
  - name: label
    dtype: string
  - name: area
    dtype: string
  - name: image shape
    dtype: string
  splits:
  - name: train
    num_bytes: 66654394.0
    num_examples: 105
  - name: validation
    num_bytes: 74471782.0
    num_examples: 126
  download_size: 141102679
  dataset_size: 141126176.0
---

# XAMI-dataset

*The Git repository for this dataset can be found [here](https://github.com/ESA-Datalabs/XAMI-Dataset).*

The XAMI dataset contains 1000 annotated images of observations from diverse sky regions of the XMM-Newton Optical Monitor (XMM-OM) image catalog. An additional 50 unannotated images are included to help reduce the number of false positives and false negatives caused by complex objects (e.g., large galaxies, clusters, nebulae).
<!--
The XMM-Newton Optical Monitor (XMM-OM) image catalog holds about 9 million detections of nearly 6 million distinct sources. It is crucial for analyzing individual objects and contributes significantly to survey science. However, source flagging in the XMM-OM data needs improvement due to artefacts that can cause false detections and affect the precision of real sources, this being a common issue for other space missions. -->

### Artefacts

A particularity of the XAMI dataset, compared to everyday image datasets, is the characteristic locations where artefacts usually appear.
<img src="https://huggingface.co/datasets/iulia-elisa/XAMI-dataset/resolve/main/plots/artefact_distributions.png" alt="Examples of an image with multiple artefacts." />

Here are some examples of common artefacts:

<img src="https://huggingface.co/datasets/iulia-elisa/XAMI-dataset/resolve/main/plots/artefacts_examples.png" alt="Examples of common artefacts in the OM observations." width="400"/>

# Annotation platforms

The dataset images have been annotated using the following projects:

- [Zooniverse project](https://www.zooniverse.org/projects/ori-j/ai-for-artefacts-in-sky-images), where the resulting annotations are not externally visible.
- [Roboflow project](https://universe.roboflow.com/iuliaelisa/xmm_om_artefacts_512/), which allows for more interactive and visual annotation workflows.

# The dataset format
The XAMI dataset is split into train and validation sets and contains annotated artefacts in COCO format for instance segmentation. We use a multilabel stratified k-fold technique (**k=4**) to balance class distributions across the training and validation splits. We work with a single version of the dataset splits (out of 4), but also provide the means to train on all 4 versions.

A more detailed structure of our dataset in COCO and YOLO format can be found in [Dataset Structure](Datasets-Structure.md).
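
For orientation, a COCO instance-segmentation file links its `images`, `annotations`, and `categories` lists through numeric ids: every annotation references an image via `image_id` and an artefact class via `category_id`. Below is a minimal sketch of walking that structure; it assumes only the standard COCO keys and the `_annotations.coco.json` file shipped in the archive described further down (the path is illustrative):

```python
import json
from collections import defaultdict

# Illustrative path; adjust to wherever the archive was extracted.
with open('dataset_archive/_annotations.coco.json') as f:
    coco = json.load(f)

# Map category ids to artefact class names
categories = {c['id']: c['name'] for c in coco['categories']}

# Group annotations (segmentation polygons, bboxes, areas) by image id
anns_per_image = defaultdict(list)
for ann in coco['annotations']:
    anns_per_image[ann['image_id']].append(ann)

for img in coco['images'][:3]:
    print(img['file_name'], img['width'], img['height'])
    for ann in anns_per_image[img['id']]:
        print('  ', categories[ann['category_id']], ann['bbox'], ann['area'])
```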

# Downloading the dataset

The dataset repository can be found on [HuggingFace](https://huggingface.co/datasets/iulia-elisa/XAMI-dataset) and [Github](https://github.com/IuliaElisa/XAMI-dataset).
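
Since the dataset card above declares `train` and `validation` splits, the data can also be loaded directly with the `datasets` library. This is a minimal sketch (not the repository's own tooling); the column names follow the features listed in the metadata above:

```python
from datasets import load_dataset

# Load the default config with the train/validation splits declared in the dataset card
ds = load_dataset("iulia-elisa/XAMI-dataset")

sample = ds["train"][0]
print(sample["observation id"], sample["label"], sample["image shape"])
sample["segmentation"]  # PIL image of the segmentation mask
```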

### Downloading the dataset archive from HuggingFace:

```python
import os
import json
import zipfile

from huggingface_hub import hf_hub_download

dataset_name = 'dataset_archive'  # the name of the dataset archive on HuggingFace
images_dir = '.'                  # the output directory for the dataset images
annotations_path = os.path.join(images_dir, dataset_name, '_annotations.coco.json')

# Download the dataset archive from the HuggingFace Hub
hf_hub_download(
    repo_id="iulia-elisa/XAMI-dataset",  # the HuggingFace repo ID
    repo_type='dataset',
    filename=f'{dataset_name}.zip',
    local_dir=images_dir,
)

# Unzip the archive
with zipfile.ZipFile(os.path.join(images_dir, f'{dataset_name}.zip')) as zf:
    zf.extractall(images_dir)

# Read the JSON annotations file (COCO format)
with open(annotations_path) as f:
    data_in = json.load(f)
```

or using a CLI command:

```bash
huggingface-cli download iulia-elisa/XAMI-dataset dataset_archive.zip --repo-type dataset --local-dir '/path/to/local/dataset/dir'
```

### Cloning the repository for more visualization tools

<!-- The dataset can be generated to match our baseline (this is helpful for recreating dataset and model results). -->

Clone the repository locally:

```bash
# Github
git clone https://github.com/IuliaElisa/XAMI-dataset.git
cd XAMI-dataset
```
or
```bash
# HuggingFace
git clone https://huggingface.co/datasets/iulia-elisa/XAMI-dataset.git
cd XAMI-dataset
```

# Dataset Split with SKF (Optional)

The method below splits the dataset using the pre-generated splits stored in CSV files. This step is useful when training on multiple dataset split versions to get a more generalised view of the metrics.
```python
import pandas as pd

import utils  # helper module provided in the dataset repository

# Run the multilabel SKF split with the standard k=4,
# using images_dir and data_in from the download step above
csv_files = ['mskf_0.csv', 'mskf_1.csv', 'mskf_2.csv', 'mskf_3.csv']

for idx, csv_file in enumerate(csv_files):
    mskf = pd.read_csv(csv_file)
    utils.create_directories_and_copy_files(images_dir, data_in, mskf, idx)
```
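
For reference, the pre-generated `mskf_*.csv` files follow a multilabel stratified k-fold scheme with k=4. The sketch below shows how such splits could be produced with the `iterative-stratification` package; it is only an illustration under assumed inputs (the label columns and values here are placeholders, not the repository's actual code or class names):

```python
import pandas as pd
from iterstrat.ml_stratifiers import MultilabelStratifiedKFold

# Placeholder inputs: one row per image, one binary column per artefact class.
labels_df = pd.DataFrame({
    'image_id': [f'img_{i}' for i in range(8)],
    'class_a':  [1, 0, 1, 0, 1, 1, 0, 0],
    'class_b':  [0, 1, 1, 0, 0, 1, 1, 0],
})

X = labels_df[['image_id']].values
y = labels_df.drop(columns='image_id').values

mskf = MultilabelStratifiedKFold(n_splits=4, shuffle=True, random_state=0)

# Write one CSV per fold, marking each image as train or valid
for fold, (train_idx, valid_idx) in enumerate(mskf.split(X, y)):
    split = labels_df.copy()
    split['split'] = 'train'
    split.loc[valid_idx, 'split'] = 'valid'
    split.to_csv(f'mskf_{fold}.csv', index=False)
```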

## Licence
...