swinv2-tiny-patch4-window8-256-DMAE-da-colab2

This model is a fine-tuned version of microsoft/swinv2-tiny-patch4-window8-256 on the imagefolder dataset. It achieves the following results on the evaluation set:

  • Loss: 0.8835
  • Accuracy: 0.7609

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 1.5e-05
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 64
  • optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
  • lr_scheduler_type: linear
  • num_epochs: 40
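The total_train_batch_size above follows from gradient accumulation: each optimizer step accumulates gradients over 4 micro-batches of 16 examples, so one update sees 16 × 4 = 64 examples. A minimal sketch in plain Python (field names mirror `transformers.TrainingArguments`, but the config is shown as an ordinary dict for illustration):

```python
# Hyperparameters from this run (names mirror transformers.TrainingArguments;
# values copied from the list above).
config = {
    "learning_rate": 1.5e-05,
    "per_device_train_batch_size": 16,
    "per_device_eval_batch_size": 16,
    "seed": 42,
    "gradient_accumulation_steps": 4,
    "lr_scheduler_type": "linear",
    "num_train_epochs": 40,
}

# Effective (total) train batch size per optimizer update:
effective_batch = (
    config["per_device_train_batch_size"] * config["gradient_accumulation_steps"]
)
print(effective_batch)  # 64, matching total_train_batch_size above
```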

Training results

| Training Loss | Epoch   | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| 1.357         | 0.9565  | 11   | 1.3906          | 0.3913   |
| 1.2964        | 2.0     | 23   | 1.2819          | 0.4348   |
| 1.1609        | 2.9565  | 34   | 1.1804          | 0.4783   |
| 1.0747        | 4.0     | 46   | 1.0911          | 0.6087   |
| 1.027         | 4.9565  | 57   | 1.0176          | 0.6304   |
| 0.8985        | 6.0     | 69   | 0.8963          | 0.6739   |
| 0.8031        | 6.9565  | 80   | 0.9867          | 0.6739   |
| 0.7744        | 8.0     | 92   | 0.8710          | 0.6522   |
| 0.7488        | 8.9565  | 103  | 0.8845          | 0.6957   |
| 0.6767        | 10.0    | 115  | 0.8693          | 0.6957   |
| 0.6082        | 10.9565 | 126  | 0.8133          | 0.6739   |
| 0.6354        | 12.0    | 138  | 0.8771          | 0.6739   |
| 0.6422        | 12.9565 | 149  | 0.8137          | 0.7174   |
| 0.584         | 14.0    | 161  | 0.8861          | 0.6522   |
| 0.5763        | 14.9565 | 172  | 0.8459          | 0.7391   |
| 0.5238        | 16.0    | 184  | 0.8590          | 0.7174   |
| 0.528         | 16.9565 | 195  | 0.8705          | 0.7174   |
| 0.5626        | 18.0    | 207  | 0.8636          | 0.7174   |
| 0.5395        | 18.9565 | 218  | 0.8794          | 0.6957   |
| 0.4696        | 20.0    | 230  | 0.8835          | 0.7609   |
| 0.488         | 20.9565 | 241  | 0.8889          | 0.7391   |
| 0.4764        | 22.0    | 253  | 0.9109          | 0.7174   |
| 0.4668        | 22.9565 | 264  | 0.8893          | 0.7391   |
| 0.4676        | 24.0    | 276  | 0.9082          | 0.6957   |
| 0.4619        | 24.9565 | 287  | 0.9353          | 0.7174   |
| 0.4727        | 26.0    | 299  | 0.9331          | 0.7174   |
| 0.4461        | 26.9565 | 310  | 0.8937          | 0.7391   |
| 0.428         | 28.0    | 322  | 0.9175          | 0.7174   |
| 0.4694        | 28.9565 | 333  | 0.9340          | 0.6957   |
| 0.3812        | 30.0    | 345  | 0.9722          | 0.6739   |
| 0.4252        | 30.9565 | 356  | 0.9433          | 0.7174   |
| 0.3883        | 32.0    | 368  | 0.9420          | 0.7391   |
| 0.4228        | 32.9565 | 379  | 0.9483          | 0.6739   |
| 0.4288        | 34.0    | 391  | 0.9529          | 0.7174   |
| 0.3982        | 34.9565 | 402  | 0.9506          | 0.7174   |
| 0.3935        | 36.0    | 414  | 0.9539          | 0.6739   |
| 0.3974        | 36.9565 | 425  | 0.9599          | 0.6957   |
| 0.3893        | 38.0    | 437  | 0.9608          | 0.6957   |
| 0.4201        | 38.2609 | 440  | 0.9608          | 0.6957   |
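The evaluation numbers reported at the top of this card (loss 0.8835, accuracy 0.7609) correspond to the epoch-20 checkpoint, which has the highest validation accuracy in the table. Selecting the best checkpoint from such a log can be sketched as follows (the rows here are a hand-copied excerpt of the table above, for illustration only):

```python
# (training_loss, epoch, step, val_loss, accuracy) — excerpt of the table above
rows = [
    (0.5395, 18.9565, 218, 0.8794, 0.6957),
    (0.4696, 20.0,    230, 0.8835, 0.7609),
    (0.4880, 20.9565, 241, 0.8889, 0.7391),
    (0.4764, 22.0,    253, 0.9109, 0.7174),
]

# Pick the row with the highest validation accuracy.
best = max(rows, key=lambda r: r[4])
print(best)  # (0.4696, 20.0, 230, 0.8835, 0.7609)
```

This mirrors what `Trainer` does when configured with `load_best_model_at_end=True` and `metric_for_best_model="accuracy"`.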

Framework versions

  • Transformers 4.46.2
  • Pytorch 2.5.1+cu121
  • Datasets 3.1.0
  • Tokenizers 0.20.3