Update README.md

README.md CHANGED

@@ -58,7 +58,8 @@ This work is funded by [Code for Earth 2023](https://codeforearth.ecmwf.int/) in
 - [Results](#results)
 - [Technical Specifications](#technical-specifications-optional)
 - [Model Architecture and Objective](#model-architecture-and-objective)
-- [
+  - [Loss function](#loss-function)
+- [Computing Infrastructure](#computing-infrastructure)
 - [Hardware](#hardware)
 - [Software](#software)
 - [Authors](#authors)
@@ -246,7 +247,7 @@ accuracy across different resolutions and representations:
 By combining these loss terms, the Swin2 transformer is trained to produce accurate predictions across different resolutions and under various data transformations,
 ensuring its versatility and robustness in diverse scenarios.
 
-##
+## Computing Infrastructure
 
 Leveraging GPUs in deep learning initiatives greatly amplifies the pace of model training and inference. This computational edge not only diminishes the total
 computational duration but also equips us to proficiently navigate complex tasks and extensive datasets.