jbloom committed · Commit 54f0900 · 1 Parent(s): d5fc6a6

small updates to the README

Files changed (1)
  1. README.md +30 -10
README.md CHANGED
@@ -6,7 +6,7 @@ tags:
 - compression
 - images
 dataset_info:
-  config_name: tiny
+- config_name: full
   features:
   - name: image
     dtype:
@@ -18,13 +18,32 @@ dataset_info:
     dtype: string
   splits:
   - name: train
-    num_bytes: 307620692
+    num_bytes: 3509045373
+    num_examples: 120
+  - name: test
+    num_bytes: 970120060
+    num_examples: 32
+  download_size: 2240199274
+  dataset_size: 4479165433
+- config_name: tiny
+  features:
+  - name: image
+    dtype:
+      image:
+        mode: I;16
+  - name: telescope
+    dtype: string
+  - name: image_id
+    dtype: string
+  splits:
+  - name: train
+    num_bytes: 307620695
     num_examples: 10
   - name: test
-    num_bytes: 168984694
+    num_bytes: 168984698
     num_examples: 5
   download_size: 238361934
-  dataset_size: 476605386
+  dataset_size: 476605393
 ---
 
 # GBI-16-2D-Legacy Dataset
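
Since the card's YAML now defines two configs, a quick sanity check from Python (a minimal sketch, not part of the commit; `get_dataset_config_names` is the stock `datasets` helper, and the repo id is the one used throughout the README):

```python
from datasets import get_dataset_config_names

# Lists the configs this card defines -- should print ['full', 'tiny']
# per the dataset_info YAML above. (Recent `datasets` releases may also
# require trust_remote_code=True for script-backed repos like this one.)
print(get_dataset_config_names("AstroCompress/GBI-16-2D-Legacy"))
```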
@@ -36,10 +55,10 @@ GBI-16-2D-Legacy is a Huggingface `dataset` wrapper around a compression dataset
 You first need to install the `datasets` and `astropy` packages:
 
 ```bash
-pip install datasets astropy
+pip install datasets astropy PIL
 ```
 
-There are two datasets: `tiny` and `full`, each with `train` and `test` splits. The `tiny` dataset has 2 4D images in the `train` and 1 in the `test`. The `full` dataset contains all the images in the `data/` directory.
+There are two datasets: `tiny` and `full`, each with `train` and `test` splits. The `tiny` dataset has 5 2D images in the `train` and 1 in the `test`. The `full` dataset contains all the images in the `data/` directory.
 
 ## Use from Huggingface Directly
 
 
@@ -60,7 +79,8 @@ Then in your python script:
 
 ```python
 from datasets import load_dataset
-dataset = load_dataset("AstroCompress/GBI-16-2D-Legacy", "tiny")
+dataset = load_dataset("AstroCompress/GBI-16-2D-Legacy", "tiny", \
+                       trust_remote_code=True)
 ds = dataset.with_format("np")
 ```
 
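For context, a sketch of what the `with_format("np")` call above changes (not part of the commit; the `image`, `telescope`, and `image_id` field names come from the YAML front matter, and the uint16 dtype is assumed from the card's `I;16` image mode):

```python
from datasets import load_dataset

dataset = load_dataset("AstroCompress/GBI-16-2D-Legacy", "tiny",
                       trust_remote_code=True)
ds = dataset.with_format("np")  # indexing now yields NumPy arrays, not PIL images

sample = ds["train"][0]
img = sample["image"]               # 2D uint16 array (assumed from mode I;16)
print(img.shape, img.dtype)         # dimensions vary from image to image
print(sample["telescope"], sample["image_id"])  # metadata fields are strings
```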
 
@@ -76,15 +96,15 @@ Then `cd SBI-16-3D` and start python like:
 
 ```python
 from datasets import load_dataset
-dataset = load_dataset("./GBI-16-2D-Legacy", "tiny", data_dir="./data/")
+dataset = load_dataset("./GBI-16-2D-Legacy.py", "tiny", data_dir="./data/")
 ds = dataset.with_format("np")
 ```
 
 Now you should be able to use the `ds` variable like:
 
 ```python
-ds["test"][0]["image"].shape # -> (9, 2048, 2048)
+ds["test"][0]["image"].shape # -> (4200, 2154)
 ```
 
-Note of course that it will take a long time to download and convert the images in the local cache for the `full` dataset. Afterward, the usage should be quick as the files are memory-mapped from disk.
+Note of course that it will take a long time to download and convert the images in the local cache for the `full` dataset. Afterward, the usage should be quick as the files are memory-mapped from disk. If you run into issues with downloading the `full` dataset, try changing `num_proc` in `load_dataset` to >1 (e.g. 5). You can also set the `writer_batch_size` to ~10-20.
 
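The workarounds in the closing note map onto `load_dataset` keyword arguments roughly like this (a hedged sketch, not part of the commit; `num_proc` is a documented `load_dataset` parameter, while `writer_batch_size` is forwarded to the builder of script-based datasets):

```python
from datasets import load_dataset

# Parallelize the download/prepare step and cap how many decoded images
# are buffered before each Arrow write, as the README note suggests.
dataset = load_dataset(
    "AstroCompress/GBI-16-2D-Legacy",
    "full",
    trust_remote_code=True,
    num_proc=5,            # >1 per the note
    writer_batch_size=10,  # ~10-20 per the note
)
```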
 
 