---
license: apache-2.0
base_model: facebook/deit-base-patch16-224
tags:
  - generated_from_trainer
datasets:
  - imagefolder
metrics:
  - accuracy
model-index:
  - name: smids_10x_deit_base_sgd_0001_fold1
    results:
      - task:
          name: Image Classification
          type: image-classification
        dataset:
          name: imagefolder
          type: imagefolder
          config: default
          split: test
          args: default
        metrics:
          - name: Accuracy
            type: accuracy
            value: 0.8196994991652755
---

# smids_10x_deit_base_sgd_0001_fold1

This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set:

- Loss: 0.4637
- Accuracy: 0.8197
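As a quick sanity check on the architecture named above, the patch-embedding arithmetic can be worked out from the model name alone. A minimal sketch in plain Python (the 224-pixel input size and 16-pixel patch size come from `deit-base-patch16-224`; the single extra `[CLS]` token is a standard ViT-style assumption, not stated in this card):

```python
# Patch-embedding arithmetic for deit-base-patch16-224.
# Assumptions: 224x224 input, 16x16 patches, one prepended [CLS] token.
image_size = 224   # input resolution, from the model name
patch_size = 16    # patch edge length, from the model name

patches_per_side = image_size // patch_size   # 224 // 16 = 14
num_patches = patches_per_side ** 2           # 14 * 14 = 196 patch tokens
seq_len = num_patches + 1                     # plus the [CLS] token

print(patches_per_side, num_patches, seq_len)  # 14 196 197
```

So each image the model classifies is processed as a sequence of 196 patch tokens (plus the class token) by the transformer encoder.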

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
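The schedule these hyperparameters imply can be re-derived by hand: the results table shows 751 optimizer steps per epoch, so 50 epochs give 37,550 total steps, and a warmup ratio of 0.1 puts the learning-rate peak at step 3,755. A minimal sketch of how a linear schedule with warmup behaves under those numbers (plain Python, no dependencies; this is an illustration of the schedule shape, not the card author's training code):

```python
# Linear warmup + linear decay, re-derived from the hyperparameters above.
base_lr = 1e-4                          # learning_rate
steps_per_epoch = 751                   # from the results table
total_steps = steps_per_epoch * 50      # num_epochs = 50 -> 37550 steps
warmup_steps = int(0.1 * total_steps)   # lr_scheduler_warmup_ratio = 0.1 -> 3755

def lr_at(step: int) -> float:
    """Learning rate at a given optimizer step: ramp up, then decay to zero."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr * (total_steps - step) / (total_steps - warmup_steps)

print(total_steps, warmup_steps)  # 37550 3755
```

Under this schedule the learning rate is 0 at step 0, peaks at `base_lr` at step 3,755, and decays back to 0 at step 37,550 — the final step in the results table.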

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.0699        | 1.0   | 751   | 1.0865          | 0.3372   |
| 1.0294        | 2.0   | 1502  | 1.0518          | 0.4207   |
| 0.9671        | 3.0   | 2253  | 1.0151          | 0.4992   |
| 0.8907        | 4.0   | 3004  | 0.9742          | 0.5526   |
| 0.8635        | 5.0   | 3755  | 0.9306          | 0.6043   |
| 0.8115        | 6.0   | 4506  | 0.8888          | 0.6461   |
| 0.7615        | 7.0   | 5257  | 0.8493          | 0.6694   |
| 0.7205        | 8.0   | 6008  | 0.8123          | 0.6861   |
| 0.6698        | 9.0   | 6759  | 0.7780          | 0.7078   |
| 0.6144        | 10.0  | 7510  | 0.7466          | 0.7195   |
| 0.6353        | 11.0  | 8261  | 0.7180          | 0.7295   |
| 0.5516        | 12.0  | 9012  | 0.6921          | 0.7396   |
| 0.5555        | 13.0  | 9763  | 0.6685          | 0.7446   |
| 0.5272        | 14.0  | 10514 | 0.6476          | 0.7546   |
| 0.5021        | 15.0  | 11265 | 0.6291          | 0.7596   |
| 0.4905        | 16.0  | 12016 | 0.6125          | 0.7713   |
| 0.4771        | 17.0  | 12767 | 0.5975          | 0.7696   |
| 0.4486        | 18.0  | 13518 | 0.5841          | 0.7730   |
| 0.4932        | 19.0  | 14269 | 0.5723          | 0.7713   |
| 0.4372        | 20.0  | 15020 | 0.5615          | 0.7746   |
| 0.3961        | 21.0  | 15771 | 0.5521          | 0.7780   |
| 0.455         | 22.0  | 16522 | 0.5434          | 0.7796   |
| 0.4028        | 23.0  | 17273 | 0.5355          | 0.7896   |
| 0.4361        | 24.0  | 18024 | 0.5285          | 0.7913   |
| 0.4568        | 25.0  | 18775 | 0.5220          | 0.7930   |
| 0.5129        | 26.0  | 19526 | 0.5160          | 0.7947   |
| 0.4225        | 27.0  | 20277 | 0.5106          | 0.7997   |
| 0.4409        | 28.0  | 21028 | 0.5056          | 0.7997   |
| 0.4065        | 29.0  | 21779 | 0.5010          | 0.7997   |
| 0.4201        | 30.0  | 22530 | 0.4969          | 0.8013   |
| 0.3612        | 31.0  | 23281 | 0.4930          | 0.8047   |
| 0.5003        | 32.0  | 24032 | 0.4895          | 0.8063   |
| 0.4064        | 33.0  | 24783 | 0.4864          | 0.8080   |
| 0.4           | 34.0  | 25534 | 0.4834          | 0.8097   |
| 0.4159        | 35.0  | 26285 | 0.4807          | 0.8097   |
| 0.4048        | 36.0  | 27036 | 0.4783          | 0.8130   |
| 0.4636        | 37.0  | 27787 | 0.4761          | 0.8147   |
| 0.3893        | 38.0  | 28538 | 0.4740          | 0.8147   |
| 0.3469        | 39.0  | 29289 | 0.4723          | 0.8147   |
| 0.4394        | 40.0  | 30040 | 0.4706          | 0.8164   |
| 0.4132        | 41.0  | 30791 | 0.4692          | 0.8180   |
| 0.3775        | 42.0  | 31542 | 0.4680          | 0.8197   |
| 0.3957        | 43.0  | 32293 | 0.4669          | 0.8197   |
| 0.4064        | 44.0  | 33044 | 0.4660          | 0.8197   |
| 0.4124        | 45.0  | 33795 | 0.4652          | 0.8197   |
| 0.3879        | 46.0  | 34546 | 0.4646          | 0.8197   |
| 0.3807        | 47.0  | 35297 | 0.4642          | 0.8197   |
| 0.4327        | 48.0  | 36048 | 0.4639          | 0.8197   |
| 0.3727        | 49.0  | 36799 | 0.4638          | 0.8197   |
| 0.3716        | 50.0  | 37550 | 0.4637          | 0.8197   |
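The step column also lets us estimate the size of the training split, which this card otherwise leaves unspecified. A minimal back-of-the-envelope sketch (numbers taken from the table and hyperparameters above; the image count is an upper bound, since the final batch of each epoch may be partial):

```python
# Rough training-set size implied by the table above.
steps_per_epoch = 751   # step count at epoch 1.0
batch_size = 32         # train_batch_size
epochs = 50             # num_epochs

approx_train_images = steps_per_epoch * batch_size  # upper bound on split size
final_step = steps_per_epoch * epochs               # matches the last table row

print(approx_train_images, final_step)  # 24032 37550
```

So the training split holds roughly 24,000 images, consistent with the "10x" augmentation hinted at in the model name if the underlying fold is much smaller.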

### Framework versions

- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2