---
license: cc-by-4.0
pipeline_tag: image-segmentation
---
# Model Card
This Hugging Face repository contains the models trained for the article **"Optimizing Methane Detection On Board Satellites: Speed, Accuracy, and Low-Power Solutions for Resource-Constrained Hardware."**
**Paper:** [Optimizing Methane Detection On Board Satellites: Speed, Accuracy, and Low-Power Solutions for Resource-Constrained Hardware](https://huggingface.co/papers/2507.01472)
## Model Overview
The models here were trained using the code available at the following GitHub repository:
* **Training Code**: [HyperspectralViTs](https://github.com/previtus/HyperspectralViTs)
The main project code, including filter benchmarking and demos, is available here:
* **Project Code**: [Methane Filters Benchmark](https://github.com/zaitra/methane-filters-benchmark)
## Data and Products
The precomputed products used for training were created by code in the main project repository:
* **Products Creation Code**: [methane-filters-benchmark](https://github.com/zaitra/methane-filters-benchmark)
Additionally, these precomputed products are hosted and accessible here:
* **Dataset Repository**: [STARCOP-fast-products](https://huggingface.co/datasets/onboard-coop/STARCOP-fast-products)
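If you want to fetch the precomputed products locally, below is a minimal sketch using the `huggingface_hub` client (the repository id comes from the link above; downloading the full snapshot is just one option, and the local path is wherever your cache resolves):

```python
from huggingface_hub import snapshot_download

# Download the full STARCOP-fast-products dataset repository;
# returns the local path of the downloaded snapshot.
local_dir = snapshot_download(
    repo_id="onboard-coop/STARCOP-fast-products",
    repo_type="dataset",
)
print(local_dir)
```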
## Sample Usage
You can try out our models and demos directly in Google Colab using the provided notebooks:
* **Models Demo**: [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/zaitra/methane-filters-benchmark/blob/main/ntbs/Models_demo.ipynb)
This notebook demonstrates model inference.
* **Products Creation and Benchmarking Demo**: [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/zaitra/methane-filters-benchmark/blob/main/ntbs/Products_demo.ipynb)
This notebook demonstrates generating products and measuring their runtime.
For local inference using the ONNX models, refer to the `benchmark/onnx_inference_time.py` script in the [Project Code repository](https://github.com/zaitra/methane-filters-benchmark).
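As a rough illustration of that workflow, here is a minimal `onnxruntime` sketch; the model filename and the input tensor shape are placeholders and should be replaced with one of the ONNX files shipped in this repository and its actual expected input (e.g., a preprocessed product tile):

```python
import numpy as np
import onnxruntime as ort

# Load an exported model (hypothetical filename; substitute a real
# ONNX file from this repository).
session = ort.InferenceSession("unet_cem.onnx", providers=["CPUExecutionProvider"])

# Inspect the expected input name and shape.
inp = session.get_inputs()[0]
print(inp.name, inp.shape)

# Dummy input; batch/channel/spatial sizes here are placeholders,
# replace with a real preprocessed tile matching inp.shape.
x = np.random.rand(1, 1, 512, 512).astype(np.float32)

# Run inference; outputs[0] holds the predicted segmentation map.
outputs = session.run(None, {inp.name: x})
print(outputs[0].shape)
```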
## Citation
If you use these models in your research, please cite our article:
```bibtex
@misc{herec2025optimizingmethanedetectionboard,
title={Optimizing Methane Detection On Board Satellites: Speed, Accuracy, and Low-Power Solutions for Resource-Constrained Hardware},
author={Jonáš Herec and Vít Růžička and Rado Pitoňák},
year={2025},
eprint={2507.01472},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2507.01472},
}
```
## Model Performance
Metrics in rows A–E are reported as fractions; the **AVG** and **STD** rows are given in percent.
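For reference, the F1 column is the harmonic mean of precision and recall:

$$
F_1 = \frac{2 \cdot \mathrm{Precision} \cdot \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}
$$

For example, row A of the U-Net - CEM table gives $2 \cdot 0.317 \cdot 0.441 / (0.317 + 0.441) \approx 0.369$, matching the reported F1.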
### U-Net - CEM
| ID | Recall | Precision | F1 | F1 strong |
|----|--------|-----------|-------|-----------|
| A | 0.441 | 0.317 | 0.369 | 0.500 |
| B | 0.701 | 0.158 | 0.258 | 0.550 |
| C | 0.531 | 0.299 | 0.382 | 0.610 |
| D | 0.536 | 0.218 | 0.310 | 0.551 |
| E | 0.564 | 0.182 | 0.275 | 0.469 |
| **AVG** | 55.47% | 23.49% | 31.90% | 53.55% |
| **STD** | 8.41% | 6.30% | 4.94% | 4.85% |
---
### U-Net - ACE
| ID | Recall | Precision | F1 | F1 strong |
|----|--------|-----------|-------|-----------|
| A | 0.468 | 0.202 | 0.282 | 0.460 |
| B | 0.480 | 0.288 | 0.360 | 0.537 |
| C | 0.413 | 0.253 | 0.314 | 0.461 |
| D | 0.550 | 0.194 | 0.287 | 0.510 |
| E | 0.500 | 0.162 | 0.245 | 0.442 |
| **AVG** | 48.22% | 21.99% | 29.77% | 48.19% |
| **STD** | 4.46% | 4.50% | 3.82% | 3.57% |
---
### U-Net - MF
| ID | Recall | Precision | F1 | F1 strong |
|----|--------|-----------|-------|-----------|
| A | 0.603 | 0.153 | 0.243 | 0.451 |
| B | 0.673 | 0.198 | 0.306 | 0.585 |
| C | 0.563 | 0.259 | 0.355 | 0.507 |
| D | 0.625 | 0.173 | 0.271 | 0.558 |
| E | 0.466 | 0.301 | 0.366 | 0.496 |
| **AVG** | 58.60% | 21.68% | 30.82% | 51.94% |
| **STD** | 6.97% | 5.52% | 4.73% | 4.73% |
---
### U-Net - MAG1C-SAS
| ID | Recall | Precision | F1 | F1 strong |
|----|--------|-----------|-------|-----------|
| A | 0.587 | 0.456 | 0.513 | 0.668 |
| B | 0.618 | 0.291 | 0.395 | 0.642 |
| C | 0.576 | 0.290 | 0.386 | 0.604 |
| D | 0.613 | 0.414 | 0.495 | 0.686 |
| E | 0.427 | 0.280 | 0.338 | 0.470 |
| **AVG** | 56.42% | 34.62% | 42.54% | 61.40% |
| **STD** | 7.04% | 7.38% | 6.73% | 7.71% |
---
### U-Net - MAG1C (tile-wise)
| ID | Recall | Precision | F1 | F1 strong |
|----|--------|-----------|-------|-----------|
| A | 0.643 | 0.218 | 0.325 | 0.599 |
| B | 0.732 | 0.288 | 0.413 | 0.692 |
| C | 0.613 | 0.362 | 0.455 | 0.659 |
| D | 0.669 | 0.242 | 0.355 | 0.633 |
| E | 0.640 | 0.366 | 0.466 | 0.684 |
| **AVG** | 65.94% | 29.52% | 40.28% | 65.34% |
| **STD** | 4.04% | 6.05% | 5.51% | 3.42% |
---
### LinkNet - CEM
| ID | Recall | Precision | F1 | F1 strong |
|----|--------|-----------|-------|-----------|
| A | 0.597 | 0.319 | 0.416 | 0.633 |
| B | 0.539 | 0.274 | 0.363 | 0.603 |
| C | 0.452 | 0.233 | 0.308 | 0.527 |
| D | 0.606 | 0.165 | 0.260 | 0.561 |
| E | 0.442 | 0.144 | 0.217 | 0.455 |
| **AVG** | 52.72% | 22.70% | 31.27% | 55.56% |
| **STD** | 6.96% | 6.54% | 7.09% | 6.20% |
---
### LinkNet - MAG1C-SAS
| ID | Recall | Precision | F1 | F1 strong |
|----|--------|-----------|-------|-----------|
| A | 0.566 | 0.324 | 0.412 | 0.612 |
| B | 0.505 | 0.515 | 0.510 | 0.613 |
| C | 0.381 | 0.422 | 0.400 | 0.507 |
| D | 0.590 | 0.383 | 0.464 | 0.660 |
| E | 0.513 | 0.378 | 0.435 | 0.627 |
| **AVG** | 51.10% | 40.44% | 44.42% | 60.38% |
| **STD** | 7.24% | 6.35% | 3.95% | 5.14% |