metadata
library_name: transformers
license: apache-2.0
base_model: microsoft/swinv2-tiny-patch4-window8-256
tags:
  - generated_from_trainer
datasets:
  - imagefolder
metrics:
  - accuracy
model-index:
  - name: swinv2-tiny-patch4-window8-256-DMAE-da-colab
    results:
      - task:
          name: Image Classification
          type: image-classification
        dataset:
          name: imagefolder
          type: imagefolder
          config: default
          split: test
          args: default
        metrics:
          - name: Accuracy
            type: accuracy
            value: 0.7391304347826086

swinv2-tiny-patch4-window8-256-DMAE-da-colab

This model is a fine-tuned version of microsoft/swinv2-tiny-patch4-window8-256 on the imagefolder dataset. It achieves the following results on the evaluation set:

  • Loss: 0.9394
  • Accuracy: 0.7391
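
For quick reference, a minimal inference sketch using the transformers image-classification pipeline. The repo id below is assumed from the model name in this card, and the image path is a placeholder.

```python
from transformers import pipeline

# Assumed Hub repo id, inferred from the model name in this card; adjust if needed.
classifier = pipeline(
    "image-classification",
    model="Augusto777/swinv2-tiny-patch4-window8-256-DMAE-da-colab",
)

# "example.jpg" is a placeholder path; the pipeline's image processor handles
# resizing to the model's expected 256x256 input.
print(classifier("example.jpg"))
```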

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed
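
The data is only identified as an imagefolder dataset. As a hedged illustration of that format (not a description of the actual data), an imagefolder-style dataset is typically loaded with datasets.load_dataset, where data_dir is a placeholder directory containing one subfolder per class:

```python
from datasets import load_dataset

# "path/to/data" is a placeholder; the imagefolder builder infers labels from
# subfolder names, e.g. path/to/data/train/<label>/<image>.jpg.
dataset = load_dataset("imagefolder", data_dir="path/to/data")
print(dataset)
```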

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 4e-05
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 64
  • optimizer: adamw_torch (AdamW) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 40
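
As a sketch only, these settings map onto transformers.TrainingArguments roughly as follows; output_dir is a placeholder, and nothing beyond the values listed above is taken from the original run.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="swinv2-tiny-patch4-window8-256-DMAE-da-colab",  # placeholder
    learning_rate=4e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    gradient_accumulation_steps=4,   # effective train batch size: 16 * 4 = 64
    optim="adamw_torch",             # AdamW; betas=(0.9, 0.999) and eps=1e-08 are the defaults
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=40,
)
```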

Training results

| Training Loss | Epoch   | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| 1.3823        | 0.9565  | 11   | 1.4058          | 0.1957   |
| 1.3366        | 2.0     | 23   | 1.4482          | 0.1957   |
| 1.2352        | 2.9565  | 34   | 1.2309          | 0.4565   |
| 1.1374        | 4.0     | 46   | 1.1031          | 0.6087   |
| 1.0344        | 4.9565  | 57   | 1.0230          | 0.5870   |
| 0.8772        | 6.0     | 69   | 0.9115          | 0.6522   |
| 0.7321        | 6.9565  | 80   | 0.8858          | 0.6522   |
| 0.6319        | 8.0     | 92   | 0.8665          | 0.6522   |
| 0.6438        | 8.9565  | 103  | 0.7738          | 0.7174   |
| 0.4714        | 10.0    | 115  | 0.8492          | 0.6304   |
| 0.433         | 10.9565 | 126  | 0.8386          | 0.6957   |
| 0.4793        | 12.0    | 138  | 0.9394          | 0.7391   |
| 0.4769        | 12.9565 | 149  | 0.9471          | 0.6522   |
| 0.3872        | 14.0    | 161  | 1.1526          | 0.6087   |
| 0.3906        | 14.9565 | 172  | 1.0575          | 0.6522   |
| 0.3798        | 16.0    | 184  | 1.0593          | 0.6957   |
| 0.3377        | 16.9565 | 195  | 1.0783          | 0.6087   |
| 0.3919        | 18.0    | 207  | 1.1067          | 0.6522   |
| 0.3631        | 18.9565 | 218  | 1.1018          | 0.6739   |
| 0.2762        | 20.0    | 230  | 1.1479          | 0.6522   |
| 0.2935        | 20.9565 | 241  | 1.1055          | 0.6957   |
| 0.3029        | 22.0    | 253  | 1.1203          | 0.6739   |
| 0.2857        | 22.9565 | 264  | 1.2820          | 0.6304   |
| 0.2603        | 24.0    | 276  | 1.2550          | 0.6304   |
| 0.2162        | 24.9565 | 287  | 1.1655          | 0.6739   |
| 0.2465        | 26.0    | 299  | 1.2511          | 0.6739   |
| 0.2238        | 26.9565 | 310  | 1.3461          | 0.6304   |
| 0.2271        | 28.0    | 322  | 1.3472          | 0.6304   |
| 0.2694        | 28.9565 | 333  | 1.4501          | 0.6304   |
| 0.1903        | 30.0    | 345  | 1.4629          | 0.6304   |
| 0.2054        | 30.9565 | 356  | 1.4672          | 0.6304   |
| 0.199         | 32.0    | 368  | 1.4725          | 0.6304   |
| 0.2034        | 32.9565 | 379  | 1.4507          | 0.6522   |
| 0.2048        | 34.0    | 391  | 1.4330          | 0.6304   |
| 0.1767        | 34.9565 | 402  | 1.4638          | 0.6304   |
| 0.1799        | 36.0    | 414  | 1.4232          | 0.6304   |
| 0.1903        | 36.9565 | 425  | 1.4508          | 0.6304   |
| 0.1864        | 38.0    | 437  | 1.4460          | 0.6304   |
| 0.1818        | 38.2609 | 440  | 1.4456          | 0.6304   |

Framework versions

  • Transformers 4.46.2
  • PyTorch 2.5.1+cu121
  • Datasets 3.1.0
  • Tokenizers 0.20.3
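
A quick way to check that a local environment matches these versions (a minimal sketch, not part of the original card):

```python
import datasets, tokenizers, torch, transformers

# Compare against the versions listed above.
print("Transformers:", transformers.__version__)
print("PyTorch:", torch.__version__)
print("Datasets:", datasets.__version__)
print("Tokenizers:", tokenizers.__version__)
```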