
Improved Wildfire Spread Prediction with Time-Series Data and the WSTS+ Benchmark

Title: Improved Wildfire Spread Prediction with Time-Series Data and the WSTS+ Benchmark
Conference: IEEE Winter Conference on Applications of Computer Vision (WACV) 2026
Paper: arXiv
Dataset: Zenodo
Model Weights: HuggingFace
Repo: GitHub
Authors: Saad Lahrichi, Jake Bova, Jesse Johnson, Jordan Malof


This repository extends the original WildfireSpreadTS benchmark with new models, improved training, and an expanded benchmark dataset, WSTS+.


Benchmark Results (AP ± Standard Deviation)

Mean Test AP for T = 1 and T = 5 Across Feature Sets

| Fusion Level | Model | Input Days | Veg | Multi | All | # Params |
|---|---|---|---|---|---|---|
| – | Res18-UNet (Gerard et al. 2023) | 1 | 0.328 ± 0.090 | 0.341 ± 0.085 | 0.341 ± 0.086 | 14.3M |
| – | Res18-UNet | 1 | 0.455 ± 0.090 | 0.468 ± 0.087 | 0.460 ± 0.084 | 14.3M |
| – | Res50-UNet | 1 | 0.457 ± 0.089 | 0.459 ± 0.090 | 0.451 ± 0.093 | 32.5M |
| – | SwinUnet | 1 | 0.432 ± 0.088 | 0.437 ± 0.082 | 0.424 ± 0.090 | 27.2M |
| – | SegFormer | 1 | 0.433 ± 0.080 | 0.436 ± 0.083 | 0.423 ± 0.087 | 27.5M |
| Data | Res18-UNet (Gerard et al. 2023) | 5 | 0.333 ± 0.079 | 0.344 ± 0.076 | 0.325 ± 0.108 | 14.4M |
| Data | Res18-UNet | 5 | 0.472 ± 0.083 | 0.469 ± 0.087 | 0.460 ± 0.084 | 14.4M |
| Data | SwinUnet | 5 | 0.447 ± 0.087 | 0.453 ± 0.083 | 0.435 ± 0.079 | 27.3M |
| Data | SegFormer | 5 | 0.439 ± 0.081 | 0.436 ± 0.085 | 0.430 ± 0.082 | 27.7M |
| Feature | UTAE (Gerard et al. 2023) | 5 | 0.372 ± 0.088 | 0.350 ± 0.113 | 0.321 ± 0.135 | 1.1M |
| Feature | UTAE | 5 | 0.452 ± 0.082 | 0.459 ± 0.088 | 0.433 ± 0.099 | 1.1M |
| Feature | UTAE (Res18) | 5 | 0.478 ± 0.085 | 0.477 ± 0.089 | 0.475 ± 0.091 | 14.6M |

Datasets

WSTS+ (Extended Benchmark)

Original WSTS Dataset

Both datasets are compatible with the same preprocessing and training code in this repository.

Model Weights

We release our best T=1 and T=5 models (Res18-UNet and Res18-UTAE) as PyTorch .pth files containing the raw state_dict. Checkpoints follow a consistent naming convention, fold_<foldID>_testAP<value>.pth, and are organized in folders by architecture (Res18UNet, Res18UTAE), temporal dimension (T=1 or T=5), and feature set (Veg, Multi, or All).

Each model directory contains 12 files: one per cross-validation fold (fold_0 … fold_11). The filename includes the Test AP, allowing for easy identification of best- and worst-performing folds. Link: HuggingFace
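Because the Test AP is encoded in each filename, selecting the best-performing fold can be automated. A minimal sketch (assuming filenames exactly follow the fold_<foldID>_testAP<value>.pth convention described above):

```python
import re
from pathlib import Path

# Matches e.g. "fold_3_testAP0.478.pth" -> fold ID 3, test AP 0.478
PATTERN = re.compile(r"fold_(\d+)_testAP([0-9.]+)\.pth$")

def best_checkpoint(weight_dir):
    """Return (path, fold_id, test_ap) for the highest-AP checkpoint in a directory."""
    best = None
    for path in Path(weight_dir).glob("*.pth"):
        m = PATTERN.match(path.name)
        if m is None:
            continue  # skip files that don't follow the naming convention
        fold_id, test_ap = int(m.group(1)), float(m.group(2))
        if best is None or test_ap > best[2]:
            best = (path, fold_id, test_ap)
    return best
```

Replacing `max` with `min` over the parsed AP values would likewise pick out the worst-performing fold.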

Loading Pretrained Models

We provide a utility script, load_trained_model.py, for quickly loading the pretrained models. Example calls:

```shell
python load_trained_model.py \
    --weights_path /path/to/unet/model/fold_X_testAP0.X.pth \
    --model unet
```

Or for UTAE:

```shell
python load_trained_model.py \
    --weights_path /path/to/utae/model/fold_Y_testAP0.Y.pth \
    --model utae
```
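Since the .pth files contain a raw state_dict rather than a pickled model, they can also be loaded without the utility script. A hedged sketch (here `build_model()` is a hypothetical constructor standing in for whatever code builds the matching architecture, e.g. the model definitions in this repository):

```python
import torch

def load_state_dict_cpu(weights_path):
    """Load a raw state_dict checkpoint onto the CPU."""
    # map_location="cpu" lets the checkpoint load on machines without a GPU
    return torch.load(weights_path, map_location="cpu")

# state_dict = load_state_dict_cpu("fold_0_testAP0.455.pth")
# model = build_model()               # hypothetical: must match the checkpoint's architecture
# model.load_state_dict(state_dict)
# model.eval()                        # switch to inference mode
```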

Model Comparison Table

| Model | Parameters (M) | FLOPs (G) | Inference Time (ms) | GPU Memory (MB) | Model Size (MB) | Training Time (hours) | Test AP |
|---|---|---|---|---|---|---|---|
| ResNet18-UNet | 14.4 | 1.8 | 2.5 ± 0.0 | 70 | 55 | 0.4 | 0.455 |
| ResNet50-UNet | 32.6 | 3.1 | 5.1 ± 0.1 | 375 | 125 | 1.1 | 0.457 |
| SwinUnet | 27.2 | 6.1 | 8.9 ± 0.0 | 526 | 106 | 1.8 | 0.432 |
| SegFormer-B2 | 27.5 | 3.7 | 12.7 ± 0.8 | 865 | 105 | 2.0 | 0.448 |
| UTAE | 1.1 | 10.6 | 9.5 ± 1.0 | 997 | 4 | 1.0 | 0.452 |

WSTS vs. WSTS+ Dataset Comparison

| Dataset | WSTS | WSTS+ | Increase (%) |
|---|---|---|---|
| Years | 4 (2018–2021) | 8 (2016–2023) | +100% |
| Fire Events | 607 | 1,005 | +65.6% |
| Total Images | 13,607 | 24,462 | +79.8% |
| Active Fire Pixels | 1,878,679 | 2,638,537 | +40.4% |
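The percentages in the Increase column follow directly from the raw counts; a quick sanity check, using only the numbers from the table above:

```python
# Raw counts taken from the WSTS vs. WSTS+ comparison table
wsts      = {"years": 4, "fire_events": 607,  "images": 13607, "fire_pixels": 1878679}
wsts_plus = {"years": 8, "fire_events": 1005, "images": 24462, "fire_pixels": 2638537}

def pct_increase(old, new):
    """Relative increase in percent, rounded to one decimal place."""
    return round(100.0 * (new - old) / old, 1)

increases = {k: pct_increase(wsts[k], wsts_plus[k]) for k in wsts}
# increases == {'years': 100.0, 'fire_events': 65.6, 'images': 79.8, 'fire_pixels': 40.4}
```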

Citation

If you use this fork or the WSTS+ benchmark, please consider citing:

```bibtex
@inproceedings{lahrichi2026improved,
    title={Improved Wildfire Spread Prediction with Time-Series Data and the WSTS+ Benchmark},
    author={Lahrichi, Saad and Bova, Jake and Johnson, Jesse and Malof, Jordan},
    booktitle={IEEE Winter Conference on Applications of Computer Vision (WACV)},
    year={2026},
    url={https://arxiv.org/abs/2502.12003}
}
```