---
license: apache-2.0
---

# <b> SatVision-TOA </b>

| Name | Pretrain | Resolution | Channels | Parameters |
|---|---|---|---|---|
| SatVision-TOA-GIANT | MODIS-TOA-100-M | 128x128 | 14 | 3B |

## Accessing the Model

Model Repository: [HuggingFace](https://huggingface.co/nasa-cisto-data-science-group/satvision-toa-giant-patch8-window8-128)

### **Clone the Model Checkpoint**

1. Load `git-lfs`:
```bash
module load git-lfs
```
```bash
git lfs install
```

2. Clone the repository:
```bash
git clone git@hf.co:nasa-cisto-data-science-group/satvision-toa-giant-patch8-window8-128
```

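Alternatively, the repository can be cloned over HTTPS without SSH keys:
```bash
git clone https://huggingface.co/nasa-cisto-data-science-group/satvision-toa-giant-patch8-window8-128
```
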
<b> Note: Using SSH authentication </b>

Ensure SSH keys are configured. Troubleshooting steps:
- Check the SSH connection:
```bash
ssh -T git@hf.co # If this reports back as anonymous, follow the next steps
```
- Add your SSH key:
```bash
eval $(ssh-agent)
ssh-add ~/.ssh/your-key # Path to your SSH key
```

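Once cloned, the checkpoint file can be inspected with a minimal sketch like the one below, assuming it loads as a plain state dictionary (the file name matches the `PRETRAINED` path used in the configs later in this document; newer PyTorch versions may require `weights_only=False`):

```python
import torch

# Load the model-states file onto the CPU and list its top-level keys
ckpt = torch.load(
    "satvision-toa-giant-patch8-window8-128/mp_rank_00_model_states.pt",
    map_location="cpu")
print(ckpt.keys())
```
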
## <b> Running SatVision-TOA Pipelines </b>

### <b> Command-Line Interface (CLI) </b>

To run tasks with **PyTorch-Caney**, use the following command:

```bash
$ python pytorch-caney/pytorch_caney/ptc_cli.py --config-path <Path to config file>
```

### <b> Common CLI Arguments </b>
| Command-line-argument | Description | Required/Optional/Flag | Default | Example |
| --------------------- |:-----------------------------------|:---------|:---------|:--------------------------------------|
| `--config-path` | Path to training config | Required | N/A |`--config-path pytorch-caney/configs/3dcloudtask_swinv2_satvision_giant_test.yaml` |
| `-h, --help` | Show this help message and exit | Optional | N/A |`--help`, `-h` |


### <b> Examples </b>

**Run 3D Cloud Task with Pretrained Model**:
```shell
$ python pytorch-caney/pytorch_caney/ptc_cli.py --config-path pytorch-caney/configs/3dcloudtask_swinv2_satvision_giant_test.yaml
```

**Run 3D Cloud Task with Baseline Model**:
```shell
$ python pytorch-caney/pytorch_caney/ptc_cli.py --config-path pytorch-caney/configs/3dcloudtask_fcn_baseline_test.yaml
```

**Run SatVision-TOA Pretraining from Scratch**:
```shell
$ python pytorch-caney/pytorch_caney/ptc_cli.py --config-path pytorch-caney/configs/mim_pretrain_swinv2_satvision_giant_128_onecycle_100ep.yaml
```

## **Using Singularity for Containerized Execution**

**Shell Access**

```bash
$ singularity shell --nv -B <DRIVE-TO-MOUNT-0> <PATH-TO-CONTAINER>
Singularity> export PYTHONPATH=$PWD:$PWD/pytorch-caney
```

**Command Execution**
```bash
$ singularity exec --nv -B <DRIVE-TO-MOUNT-0>,<DRIVE-TO-MOUNT-1> --env PYTHONPATH=$PWD:$PWD/pytorch-caney <PATH-TO-CONTAINER> COMMAND
```

### **Example**

Running the 3D Cloud Task inside the container:

```bash
$ singularity shell --nv -B <DRIVE-TO-MOUNT-0> <PATH-TO-CONTAINER>
Singularity> export PYTHONPATH=$PWD:$PWD/pytorch-caney
Singularity> python pytorch-caney/pytorch_caney/ptc_cli.py --config-path pytorch-caney/configs/3dcloudtask_swinv2_satvision_giant_test.yaml
```

---

## <b> ThreeDCloudTask Pipeline </b>

This section describes how to run the `ThreeDCloudTask` pipeline using the provided configuration files and PyTorch Lightning setup. This requires downloading the 3D Cloud dataset from HuggingFace.

## Pipeline Overview

The `ThreeDCloudTask` is a PyTorch Lightning module designed for regression tasks predicting a 3D cloud vertical structure. The pipeline is configurable through YAML files and leverages custom components for the encoder, decoder, loss functions, and metrics.

## Running the Pipeline

Follow the steps below to train or validate the `ThreeDCloudTask` pipeline.

### Prepare Configuration

Two example configuration files are provided:

- `3dcloudtask_swinv2_satvision_giant_test.yaml`: Configures a pipeline using the SwinV2-based SatVision encoder.
- `3dcloudtask_fcn_baseline_test.yaml`: Configures a baseline pipeline with a fully convolutional network (FCN).

Modify the configuration file to suit your dataset and training parameters.

### Run the Training Script

Example:
```bash
$ singularity shell --nv -B <DRIVE-TO-MOUNT-0> <PATH-TO-CONTAINER>
Singularity> export PYTHONPATH=$PWD:$PWD/pytorch-caney
Singularity> python pytorch-caney/pytorch_caney/ptc_cli.py --config-path pytorch-caney/configs/3dcloudtask_swinv2_satvision_giant_test.yaml
```

### Script Behavior

- **Pipeline Initialization**: The script initializes the pipeline using the `PIPELINES` registry, based on the `PIPELINE` value in the configuration file.
- **Model and Data Module Setup**: The script automatically detects and uses the appropriate `DATAMODULE` and `MODEL` components specified in the configuration.
- **Training Strategy**: The `get_strategy` function selects the optimal training strategy, including distributed training if applicable.
- **Checkpoints**: If a checkpoint path is provided in the configuration (`MODEL.RESUME`), training resumes from that checkpoint, as shown in the sketch below.
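
A minimal sketch of a resume setup (the checkpoint path here is illustrative; point it at a checkpoint produced by one of your earlier runs):

```yaml
MODEL:
  NAME: 3dcloud-svtoa-finetune-giant
  RESUME: ./outputs/3dcloud-svtoa-finetune-giant/<tag>/last.ckpt
```
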

### Output

The results, logs, and model checkpoints are saved in a directory specified by:
`<output-dir>/<model-name>/<tag>/`

Example:

`./outputs/3dcloud-svtoa-finetune-giant/3dcloud_task_swinv2_g_satvision_128_scaled_bt_minmax/`

### Configuration Details

#### 3D Cloud Task Example Configuration

```yaml
PIPELINE: '3dcloud'
DATAMODULE: 'abitoa3dcloud'
MODEL:
  ENCODER: 'satvision'
  DECODER: 'fcn'
  PRETRAINED: satvision-toa-giant-patch8-window8-128/mp_rank_00_model_states.pt
  TYPE: swinv2
  NAME: 3dcloud-svtoa-finetune-giant
  IN_CHANS: 14
  DROP_PATH_RATE: 0.1
  SWINV2:
    IN_CHANS: 14
    EMBED_DIM: 512
    DEPTHS: [ 2, 2, 42, 2 ]
    NUM_HEADS: [ 16, 32, 64, 128 ]
    WINDOW_SIZE: 8
    NORM_PERIOD: 6
DATA:
  BATCH_SIZE: 32
  DATA_PATHS: [/explore/nobackup/projects/ilab/data/satvision-toa/3dcloud.data/abiChipsNew/]
  TEST_DATA_PATHS: [/explore/nobackup/projects/ilab/data/satvision-toa/3dcloud.data/abiChipsNew/]
  IMG_SIZE: 128
TRAIN:
  USE_CHECKPOINT: True
  EPOCHS: 50
  WARMUP_EPOCHS: 10
  BASE_LR: 3e-4
  MIN_LR: 2e-4
  WARMUP_LR: 1e-4
  WEIGHT_DECAY: 0.05
  LR_SCHEDULER:
    NAME: 'multistep'
    GAMMA: 0.1
    MULTISTEPS: [700,]
LOSS:
  NAME: 'bce'
PRECISION: 'bf16'
PRINT_FREQ: 10
SAVE_FREQ: 50
VALIDATION_FREQ: 20
TAG: 3dcloud_task_swinv2_g_satvision_128_scaled_bt_minmax
```

#### FCN Baseline Configuration

```yaml
PIPELINE: '3dcloud'
DATAMODULE: 'abitoa3dcloud'
MODEL:
  ENCODER: 'fcn'
  DECODER: 'fcn'
  NAME: 3dcloud-fcn-baseline
  IN_CHANS: 14
  DROP_PATH_RATE: 0.1
DATA:
  BATCH_SIZE: 32
  DATA_PATHS: [/explore/nobackup/projects/ilab/data/satvision-toa/3dcloud.data/abiChipsNew/]
  TEST_DATA_PATHS: [/explore/nobackup/projects/ilab/data/satvision-toa/3dcloud.data/abiChipsNew/]
  IMG_SIZE: 128
TRAIN:
  ACCELERATOR: 'gpu'
  STRATEGY: 'auto'
  EPOCHS: 50
  WARMUP_EPOCHS: 10
  BASE_LR: 3e-4
  MIN_LR: 2e-4
  WARMUP_LR: 1e-4
  WEIGHT_DECAY: 0.05
  LR_SCHEDULER:
    NAME: 'multistep'
    GAMMA: 0.1
    MULTISTEPS: [700,]
LOSS:
  NAME: 'bce'
PRINT_FREQ: 10
SAVE_FREQ: 50
VALIDATION_FREQ: 20
TAG: 3dcloud_task_fcn_baseline_128_scaled_bt_minmax
```

### Key Components

#### Model Components

- **Encoder**: Handles feature extraction from input data.
- **Decoder**: Processes features into an intermediate representation.
- **Segmentation Head**: Produces the final output with a specific shape (91x40).

#### Loss Function

- **Binary Cross-Entropy Loss (`bce`)** is used for training.

#### Metrics

- **Jaccard Index (IoU)**: Evaluates model accuracy; see the sketch after this list.
- **Mean Loss**: Tracks average loss during training and validation.
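
A minimal sketch of the IoU computation, using `torchmetrics` purely as an illustration (the pipeline's own metric wiring lives in pytorch-caney and may differ):

```python
import torch
from torchmetrics import JaccardIndex

# Binary IoU over a batch of hypothetical (91, 40) output maps
iou = JaccardIndex(task="binary")
preds = torch.rand(4, 91, 40)                  # predicted probabilities
target = (torch.rand(4, 91, 40) > 0.5).long()  # ground-truth mask
print(iou(preds, target))
```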

#### Optimizer

- Custom optimizer configurations are handled by `build_optimizer`.

### Additional Notes

- Customize your `DATAMODULE` and `MODEL` definitions as per the dataset and task requirements.
- To run the pipeline with GPUs, ensure your system has compatible hardware and CUDA installed.

---

## <b> Masked-Image-Modeling Pre-Training Pipeline </b>

---

For an example of how MiM pre-trained models work, see the example inference notebook in `pytorch-caney/notebooks/`.

# <b> SatVision-TOA Model Input Data Generation and Pre-processing </b>

---

## Overview

- For the expected model input, see the "Expected Model Input" section.
- For the steps taken to generate the MODIS-TOA pre-training dataset, see the "MODIS-TOA Dataset Generation" section.

![MODIS TOA Bands](docs/static/modis_toa_bands.png)

## MODIS-TOA Dataset Generation

The MODIS TOA dataset is derived from MODIS MOD02 Level 1B swaths, which provide calibrated and geolocated radiances across 36 spectral bands. The data processing pipeline involves compositing, calibration, and normalization steps to convert raw data into a format suitable for deep learning model ingestion.

MODIS data comes in three spatial resolutions: 250 m, 500 m, and 1 km. Bands 1 and 2 are natively 250 m, bands 3-7 are natively 500 m, and bands 8-36 are natively 1 km. For this work, all bands need to be at the same spatial resolution, so the finer-resolution bands 1-7 have been aggregated to 1 km.

The initial step involves compositing MODIS 5-minute swaths into daily global composites at 1 km spatial resolution. This step consolidates continuous swath data into a consistent daily global grid.

The SatVision-TOA model is pre-trained on 14-band MODIS L1B Top-Of-Atmosphere (TOA) imagery. Bands were selected based on which ones were most similar to the spectral profiles of other instruments such as GOES ABI. See Table 1 for the mapping of each band to one of the 14 indices and the central wavelength of each band.

## Conversion to TOA Reflectance and Brightness Temperature

After generating daily composites, digital numbers (DNs) from the MODIS bands are converted into Top-of-Atmosphere (TOA) reflectance for visible and Near-Infrared (NIR) bands, and brightness temperature (BT) for Thermal Infrared (TIR) bands. The conversion is guided by the MODIS Level 1B product user guide and implemented through the `SatPy` Python package. These transformations give the data physical units (reflectance and temperature).

## Expected Model Input

The pre-processed data should closely match the bands listed in the table provided in the model documentation, ensuring that each input channel accurately corresponds to a specific MODIS band and spectral range. The exact bands required depend on the task; however, the general expectation is consistency with the MODIS TOA reflectance and BT band specifications.

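Concretely, a minimal sketch of the expected input shape (14 channels at 128x128, with values already scaled to (0, 1) as described below):

```python
import torch

# A batch of 8 hypothetical, already-scaled 14-band chips: (N, C, H, W)
batch = torch.rand(8, 14, 128, 128)
```
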
## Equations for MODIS DN Conversion

Radiance and reflectance scales and offsets are found in the MOD021KM metadata, specifically within each subdataset.

- Radiance: `radianceScales` and `radianceOffsets`
- Reflectance: `reflectanceScales` and `reflectanceOffsets`

### Reflectance Calibration

The formula for converting MODIS DN values to TOA reflectance is:

$$\text{Reflectance} = (DN - \text{reflectanceOffsets}) \times \text{reflectanceScales} \times 100$$

This formula scales and converts the values into percentage reflectance.

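As a quick sanity check, a minimal sketch of the formula with made-up scale and offset values (the real ones come from the `reflectanceScales`/`reflectanceOffsets` attributes of each subdataset):

```python
import numpy as np

dn = np.array([1000.0, 2500.0], dtype=np.float32)  # raw digital numbers
offset = np.float32(316.9722)                      # hypothetical offset
scale = np.float32(2.0e-5)                         # hypothetical scale

reflectance_pct = (dn - offset) * scale * 100      # ~[1.37, 4.37] percent
```
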
### Brightness Temperature Calibration

For TIR bands, the calibration to Brightness Temperature ($BT$) is more involved and relies on physical constants and the effective central wavenumber ($WN$).

The equations for converting MODIS DN to BT are:

$$\text{Radiance} = (DN - \text{radianceOffsets}) \times \text{radianceScales}$$

$$BT = \frac{c_2}{\text{WN} \times \ln\left(\frac{c_1}{\text{Radiance} \times \text{WN}^5} + 1\right)}$$

$$BT_{\text{corrected}} = \frac{BT - tci}{tcs}$$

Where:

- $c_1$ and $c_2$ are derived constants based on the Planck constant $h$, the speed of light $c$, and the Boltzmann constant $k$.
- $tcs$ is the temperature correction slope, and $tci$ is the temperature correction intercept.

### Scaling for Machine Learning Compatibility

Both TOA reflectance and BT values are scaled to a range of 0-1 to ensure compatibility with neural networks, aiding model convergence and training stability:

**TOA Reflectance Scaling**

Reflectance values are scaled by a factor of 0.01, transforming the original 0-100 range to 0-1.

$$\text{TOA Reflectance (scaled)} = \text{TOA Reflectance} \times 0.01$$

**Brightness Temperature Scaling**

Brightness temperatures are min-max scaled to a range of 0-1, based on global minimum and maximum values for each of the 8 TIR channels in the dataset.

$$\text{Scaled Value} = \frac{\text{Original Value} - \text{Min}}{\text{Max} - \text{Min}}$$

This normalization process aligns the dynamic range of both feature types, contributing to more stable model performance.

## Example Python Code

### MODIS L1B

- https://github.com/pytroll/satpy/blob/main/satpy/readers/modis_l1b.py

Below is an example of the Python code used in SatPy for calibrating radiance, reflectance, and BT for MODIS L1B products:

```python
import numpy as np


def calibrate_radiance(array, attributes, index):
    """Calibration for radiance channels."""
    offset = np.float32(attributes["radiance_offsets"][index])
    scale = np.float32(attributes["radiance_scales"][index])
    array = (array - offset) * scale
    return array


def calibrate_refl(array, attributes, index):
    """Calibration for reflective channels."""
    offset = np.float32(attributes["reflectance_offsets"][index])
    scale = np.float32(attributes["reflectance_scales"][index])
    # convert to reflectance and convert from 1 to %
    array = (array - offset) * scale * 100
    return array


def calibrate_bt(array, attributes, index, band_name):
    """Calibration for the emissive channels."""
    offset = np.float32(attributes["radiance_offsets"][index])
    scale = np.float32(attributes["radiance_scales"][index])

    array = (array - offset) * scale

    # Planck constant (Joule second)
    h__ = np.float32(6.6260755e-34)

    # Speed of light in vacuum (meters per second)
    c__ = np.float32(2.9979246e+8)

    # Boltzmann constant (Joules per Kelvin)
    k__ = np.float32(1.380658e-23)

    # Derived constants
    c_1 = 2 * h__ * c__ * c__
    c_2 = (h__ * c__) / k__

    # Effective central wavenumber (inverse centimeters)
    cwn = np.array([
        2.641775E+3, 2.505277E+3, 2.518028E+3, 2.465428E+3,
        2.235815E+3, 2.200346E+3, 1.477967E+3, 1.362737E+3,
        1.173190E+3, 1.027715E+3, 9.080884E+2, 8.315399E+2,
        7.483394E+2, 7.308963E+2, 7.188681E+2, 7.045367E+2],
        dtype=np.float32)

    # Temperature correction slope (no units)
    tcs = np.array([
        9.993411E-1, 9.998646E-1, 9.998584E-1, 9.998682E-1,
        9.998819E-1, 9.998845E-1, 9.994877E-1, 9.994918E-1,
        9.995495E-1, 9.997398E-1, 9.995608E-1, 9.997256E-1,
        9.999160E-1, 9.999167E-1, 9.999191E-1, 9.999281E-1],
        dtype=np.float32)

    # Temperature correction intercept (Kelvin)
    tci = np.array([
        4.770532E-1, 9.262664E-2, 9.757996E-2, 8.929242E-2,
        7.310901E-2, 7.060415E-2, 2.204921E-1, 2.046087E-1,
        1.599191E-1, 8.253401E-2, 1.302699E-1, 7.181833E-2,
        1.972608E-2, 1.913568E-2, 1.817817E-2, 1.583042E-2],
        dtype=np.float32)

    # Transfer wavenumber [cm^(-1)] to wavelength [m]
    cwn = 1. / (cwn * 100)

    # Some versions of the modis files do not contain all the bands.
    emmissive_channels = ["20", "21", "22", "23", "24", "25", "27", "28", "29",
                          "30", "31", "32", "33", "34", "35", "36"]
    global_index = emmissive_channels.index(band_name)

    cwn = cwn[global_index]
    tcs = tcs[global_index]
    tci = tci[global_index]
    array = c_2 / (cwn * np.log(c_1 / (1000000 * array * cwn ** 5) + 1))
    array = (array - tci) / tcs
    return array
```

### ABI L1B

- https://github.com/pytroll/satpy/blob/main/satpy/readers/abi_l1b.py

Below is an example of the Python code used in SatPy for calibrating radiance, reflectance, and BT for ABI L1B products:

```python
# Methods excerpted from the ABI L1B reader class in satpy/readers/abi_l1b.py
def _rad_calibrate(self, data):
    """Calibrate any channel to radiances.

    This no-op method is just to keep the flow consistent -
    each valid cal type results in a calibration method call
    """
    res = data
    res.attrs = data.attrs
    return res

def _raw_calibrate(self, data):
    """Calibrate any channel to raw counts.

    Useful for cases where a copy requires no calibration.
    """
    res = data
    res.attrs = data.attrs
    res.attrs["units"] = "1"
    res.attrs["long_name"] = "Raw Counts"
    res.attrs["standard_name"] = "counts"
    return res

def _vis_calibrate(self, data):
    """Calibrate visible channels to reflectance."""
    solar_irradiance = self["esun"]
    esd = self["earth_sun_distance_anomaly_in_AU"]

    factor = np.pi * esd * esd / solar_irradiance

    res = data * np.float32(factor)
    res.attrs = data.attrs
    res.attrs["units"] = "1"
    res.attrs["long_name"] = "Bidirectional Reflectance"
    res.attrs["standard_name"] = "toa_bidirectional_reflectance"
    return res

def _get_minimum_radiance(self, data):
    """Estimate minimum radiance from Rad DataArray."""
    attrs = data.attrs
    scale_factor = attrs["scale_factor"]
    add_offset = attrs["add_offset"]
    count_zero_rad = - add_offset / scale_factor
    count_pos = np.ceil(count_zero_rad)
    min_rad = count_pos * scale_factor + add_offset
    return min_rad

def _ir_calibrate(self, data):
    """Calibrate IR channels to BT."""
    fk1 = float(self["planck_fk1"])
    fk2 = float(self["planck_fk2"])
    bc1 = float(self["planck_bc1"])
    bc2 = float(self["planck_bc2"])

    if self.clip_negative_radiances:
        min_rad = self._get_minimum_radiance(data)
        data = data.clip(min=data.dtype.type(min_rad))

    res = (fk2 / np.log(fk1 / data + 1) - bc1) / bc2
    res.attrs = data.attrs
    res.attrs["units"] = "K"
    res.attrs["long_name"] = "Brightness Temperature"
    res.attrs["standard_name"] = "toa_brightness_temperature"
    return res
```

### Performing scaling as a torch transform

For MODIS-TOA data:

```python
import numpy as np
import torchvision.transforms as T


# -----------------------------------------------------------------------
# MinMaxEmissiveScaleReflectance
# -----------------------------------------------------------------------
class MinMaxEmissiveScaleReflectance(object):
    """
    Performs scaling of MODIS TOA data
    - Scales reflectance percentages to reflectance units (% -> (0,1))
    - Performs per-channel minmax scaling for emissive bands (K -> (0,1))
    """

    def __init__(self):

        self.reflectance_indices = [0, 1, 2, 3, 4, 6]
        self.emissive_indices = [5, 7, 8, 9, 10, 11, 12, 13]

        self.emissive_mins = np.array(
            [223.1222, 178.9174, 204.3739, 204.7677,
             194.8686, 202.1759, 201.3823, 203.3537],
            dtype=np.float32)

        self.emissive_maxs = np.array(
            [352.7182, 261.2920, 282.5529, 319.0373,
             295.0209, 324.0677, 321.5254, 285.9848],
            dtype=np.float32)

    def __call__(self, img):

        # Reflectance % to reflectance units
        img[:, :, self.reflectance_indices] = \
            img[:, :, self.reflectance_indices] * 0.01

        # Brightness temp scaled to (0,1) range
        img[:, :, self.emissive_indices] = \
            (img[:, :, self.emissive_indices] - self.emissive_mins) / \
            (self.emissive_maxs - self.emissive_mins)

        return img


# ------------------------------------------------------------------------
# ModisToaTransform
# ------------------------------------------------------------------------
class ModisToaTransform:
    """
    torchvision transform which transforms the input imagery
    """

    def __init__(self, config):

        self.transform_img = \
            T.Compose([
                MinMaxEmissiveScaleReflectance(),
                T.ToTensor(),
                T.Resize((config.DATA.IMG_SIZE, config.DATA.IMG_SIZE)),
            ])

    def __call__(self, img):

        img = self.transform_img(img)

        return img
```
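
A hypothetical usage sketch of the transform above (the stand-in `config` object mirrors the `DATA.IMG_SIZE` field of the YAML configs earlier in this document):

```python
import numpy as np
from types import SimpleNamespace

# Stand-in for the pipeline's config object
config = SimpleNamespace(DATA=SimpleNamespace(IMG_SIZE=128))

chip = np.random.rand(128, 128, 14).astype(np.float32)  # (H, W, C) MODIS-TOA chip
tensor = ModisToaTransform(config)(chip)                # torch.Tensor, (14, 128, 128)
```
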

For ABI data:

```python
import numpy as np
import torchvision.transforms as T


# -----------------------------------------------------------------------
# ConvertABIToReflectanceBT
# -----------------------------------------------------------------------
class ConvertABIToReflectanceBT(object):
    """
    Converts ABI digital numbers (DNs) to TOA reflectance for the
    visible/NIR bands and brightness temperature (K) for the emissive bands
    """

    def __init__(self):

        self.reflectance_indices = [0, 1, 2, 3, 4, 6]
        self.emissive_indices = [5, 7, 8, 9, 10, 11, 12, 13]

    def __call__(self, img):

        # Digital Numbers to TOA reflectance units
        # (vis_calibrate and ir_calibrate are assumed helpers implementing
        # the SatPy ABI L1B calibration logic shown above)
        img[:, :, self.reflectance_indices] = \
            vis_calibrate(img[:, :, self.reflectance_indices])

        # Digital Numbers -> Radiance -> Brightness Temp (K)
        img[:, :, self.emissive_indices] = \
            ir_calibrate(img[:, :, self.emissive_indices])

        return img


# ------------------------------------------------------------------------
# MinMaxEmissiveScaleReflectance
# ------------------------------------------------------------------------
class MinMaxEmissiveScaleReflectance(object):
    """
    Performs scaling of ABI TOA data
    - Scales reflectance percentages to reflectance units (% -> (0,1))
    - Performs per-channel minmax scaling for emissive bands (K -> (0,1))
    """

    def __init__(self):

        self.reflectance_indices = [0, 1, 2, 3, 4, 6]
        self.emissive_indices = [5, 7, 8, 9, 10, 11, 12, 13]

        self.emissive_mins = np.array(
            [117.04327, 152.00592, 157.96591, 176.15349,
             210.60493, 210.52264, 218.10147, 225.9894],
            dtype=np.float32)

        self.emissive_maxs = np.array(
            [221.07022, 224.44113, 242.3326, 307.42004,
             290.8879, 343.72617, 345.72894, 323.5239],
            dtype=np.float32)

    def __call__(self, img):

        # Reflectance % to reflectance units
        img[:, :, self.reflectance_indices] = \
            img[:, :, self.reflectance_indices] * 0.01

        # Brightness temp scaled to (0,1) range
        img[:, :, self.emissive_indices] = \
            (img[:, :, self.emissive_indices] - self.emissive_mins) / \
            (self.emissive_maxs - self.emissive_mins)

        return img


# ------------------------------------------------------------------------
# AbiToaTransform
# ------------------------------------------------------------------------
class AbiToaTransform:
    """
    torchvision transform which transforms the input ABI imagery
    """

    def __init__(self, img_size):

        self.transform_img = \
            T.Compose([
                ConvertABIToReflectanceBT(),
                MinMaxEmissiveScaleReflectance(),
                T.ToTensor(),
                T.Resize((img_size, img_size)),
            ])

    def __call__(self, img):

        img = self.transform_img(img)

        return img
```