---
license: apache-2.0
base_model: facebook/deit-base-patch16-224
tags:
  - generated_from_trainer
datasets:
  - imagefolder
metrics:
  - accuracy
model-index:
  - name: smids_3x_deit_base_rms_001_fold3
    results:
      - task:
          name: Image Classification
          type: image-classification
        dataset:
          name: imagefolder
          type: imagefolder
          config: default
          split: test
          args: default
        metrics:
          - name: Accuracy
            type: accuracy
            value: 0.76
---

smids_3x_deit_base_rms_001_fold3

This model is a fine-tuned version of facebook/deit-base-patch16-224 on the imagefolder dataset. It achieves the following results on the evaluation set:

  • Loss: 0.5835
  • Accuracy: 0.76
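
A minimal inference sketch, assuming the checkpoint is hosted on the Hugging Face Hub as hkivancoral/smids_3x_deit_base_rms_001_fold3 (the repository id and the input image path below are illustrative, not confirmed by this card):

```python
from transformers import AutoImageProcessor, AutoModelForImageClassification
from PIL import Image
import torch

# Hypothetical repository id; adjust to wherever this checkpoint actually lives.
model_id = "hkivancoral/smids_3x_deit_base_rms_001_fold3"

processor = AutoImageProcessor.from_pretrained(model_id)
model = AutoModelForImageClassification.from_pretrained(model_id)

image = Image.open("example.jpg")  # illustrative input image
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

predicted = logits.argmax(-1).item()
print(model.config.id2label[predicted])
```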

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed
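
The card does not describe the data, but the metadata lists an imagefolder-type dataset. A typical loading sketch with the datasets library (the data_dir path is illustrative; the actual folder used for training is not documented here):

```python
from datasets import load_dataset

# Illustrative path; replace with the real image folder layout (one subfolder per class).
dataset = load_dataset("imagefolder", data_dir="path/to/images")
print(dataset)
```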

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a TrainingArguments sketch follows the list):

  • learning_rate: 0.001
  • train_batch_size: 32
  • eval_batch_size: 32
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 50
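
A minimal sketch of how these settings map onto TrainingArguments for the standard Trainer API; the output directory and evaluation strategy are assumptions, and the actual training script may have differed:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="smids_3x_deit_base_rms_001_fold3",  # assumed output location
    learning_rate=1e-3,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=50,
    evaluation_strategy="epoch",  # assumption: validation metrics reported once per epoch
)
```

The Adam settings listed above (betas=(0.9, 0.999), epsilon=1e-08) match the Trainer defaults, so no explicit optimizer argument is shown.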

Training results

| Training Loss | Epoch | Step  | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.9577        | 1.0   | 225   | 0.9911          | 0.4833   |
| 0.8945        | 2.0   | 450   | 0.8723          | 0.515    |
| 0.9126        | 3.0   | 675   | 0.9098          | 0.5583   |
| 0.864         | 4.0   | 900   | 0.9590          | 0.4883   |
| 0.7818        | 5.0   | 1125  | 0.8052          | 0.61     |
| 0.8173        | 6.0   | 1350  | 0.8349          | 0.57     |
| 0.8234        | 7.0   | 1575  | 0.8296          | 0.5633   |
| 0.8025        | 8.0   | 1800  | 0.7926          | 0.6433   |
| 0.794         | 9.0   | 2025  | 0.7671          | 0.6367   |
| 0.7364        | 10.0  | 2250  | 0.7623          | 0.6917   |
| 0.7539        | 11.0  | 2475  | 0.7462          | 0.6567   |
| 0.7493        | 12.0  | 2700  | 0.7599          | 0.6483   |
| 0.8066        | 13.0  | 2925  | 0.7984          | 0.6233   |
| 0.7644        | 14.0  | 3150  | 0.7284          | 0.6767   |
| 0.6406        | 15.0  | 3375  | 0.8986          | 0.6117   |
| 0.7539        | 16.0  | 3600  | 0.7380          | 0.6417   |
| 0.7079        | 17.0  | 3825  | 0.7519          | 0.6483   |
| 0.7112        | 18.0  | 4050  | 0.7274          | 0.6683   |
| 0.709         | 19.0  | 4275  | 0.7182          | 0.6783   |
| 0.6627        | 20.0  | 4500  | 0.6933          | 0.6917   |
| 0.62          | 21.0  | 4725  | 0.7192          | 0.6783   |
| 0.6351        | 22.0  | 4950  | 0.6854          | 0.6967   |
| 0.6169        | 23.0  | 5175  | 0.6958          | 0.6917   |
| 0.6173        | 24.0  | 5400  | 0.6916          | 0.6867   |
| 0.6807        | 25.0  | 5625  | 0.6783          | 0.705    |
| 0.6099        | 26.0  | 5850  | 0.6681          | 0.705    |
| 0.5604        | 27.0  | 6075  | 0.7149          | 0.6767   |
| 0.6004        | 28.0  | 6300  | 0.7253          | 0.6667   |
| 0.6392        | 29.0  | 6525  | 0.6891          | 0.66     |
| 0.5659        | 30.0  | 6750  | 0.6273          | 0.7267   |
| 0.5546        | 31.0  | 6975  | 0.6350          | 0.7317   |
| 0.5835        | 32.0  | 7200  | 0.6529          | 0.6983   |
| 0.6237        | 33.0  | 7425  | 0.6048          | 0.7233   |
| 0.5674        | 34.0  | 7650  | 0.6396          | 0.7167   |
| 0.5405        | 35.0  | 7875  | 0.6074          | 0.7183   |
| 0.5745        | 36.0  | 8100  | 0.5947          | 0.7317   |
| 0.5811        | 37.0  | 8325  | 0.5820          | 0.7383   |
| 0.5642        | 38.0  | 8550  | 0.5685          | 0.7433   |
| 0.5332        | 39.0  | 8775  | 0.5891          | 0.745    |
| 0.5278        | 40.0  | 9000  | 0.5919          | 0.7283   |
| 0.5007        | 41.0  | 9225  | 0.5742          | 0.7567   |
| 0.5377        | 42.0  | 9450  | 0.5885          | 0.76     |
| 0.4913        | 43.0  | 9675  | 0.5649          | 0.755    |
| 0.5315        | 44.0  | 9900  | 0.5703          | 0.74     |
| 0.4857        | 45.0  | 10125 | 0.5619          | 0.765    |
| 0.4747        | 46.0  | 10350 | 0.5832          | 0.7533   |
| 0.5553        | 47.0  | 10575 | 0.5734          | 0.755    |
| 0.452         | 48.0  | 10800 | 0.5866          | 0.7617   |
| 0.4761        | 49.0  | 11025 | 0.5792          | 0.7567   |
| 0.4755        | 50.0  | 11250 | 0.5835          | 0.76     |

Framework versions

  • Transformers 4.32.1
  • Pytorch 2.1.0+cu121
  • Datasets 2.12.0
  • Tokenizers 0.13.2