---
library_name: transformers
license: apache-2.0
base_model: facebook/vit-msn-base
tags:
  - generated_from_trainer
datasets:
  - imagefolder
metrics:
  - accuracy
model-index:
  - name: vit-msn-base-finetuned-lf-invalidation
    results:
      - task:
          name: Image Classification
          type: image-classification
        dataset:
          name: imagefolder
          type: imagefolder
          config: default
          split: test
          args: default
        metrics:
          - name: Accuracy
            type: accuracy
            value: 0.9234042553191489
---

vit-msn-base-finetuned-lf-invalidation

This model is a fine-tuned version of facebook/vit-msn-base on an imagefolder dataset (images organized into class-labelled folders). It achieves the following results on the evaluation set:

  • Loss: 0.2414
  • Accuracy: 0.9234
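
For reference, a minimal inference sketch using the transformers image-classification pipeline is shown below. The Hub repository id (bmedeiros/vit-msn-base-finetuned-lf-invalidation) and the input image path are assumptions, not values confirmed by this card.

```python
from PIL import Image
from transformers import pipeline

# Assumed Hub repository id for this checkpoint; substitute a local path or the
# actual repo id if it differs.
classifier = pipeline(
    "image-classification",
    model="bmedeiros/vit-msn-base-finetuned-lf-invalidation",
)

# Hypothetical input image.
image = Image.open("example.jpg")
print(classifier(image))
```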

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed
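
The card gives no details beyond the imagefolder builder named in the metadata. A hedged loading sketch, assuming images are organized into one sub-folder per class under a hypothetical data/ directory:

```python
from datasets import load_dataset

# "imagefolder" infers labels from sub-folder names; the data_dir path is a placeholder.
dataset = load_dataset("imagefolder", data_dir="data")

# The reported metric uses a "test" split; if only a "train" split is produced,
# a held-out split can be created like this (the proportion is illustrative).
splits = dataset["train"].train_test_split(test_size=0.2, seed=42)
print(splits)
```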

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-05
  • train_batch_size: 32
  • eval_batch_size: 32
  • seed: 42
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 128
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 80
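
A hedged TrainingArguments sketch mirroring the values above; output_dir and the evaluation/save strategies are assumptions (the roughly once-per-epoch validation rows below suggest per-epoch evaluation), and the Adam betas/epsilon listed above are the optimizer defaults.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="vit-msn-base-finetuned-lf-invalidation",  # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    gradient_accumulation_steps=4,  # 32 * 4 = 128 total train batch size on one device
    num_train_epochs=80,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    seed=42,
    eval_strategy="epoch",  # assumption, inferred from the validation rows below
    save_strategy="epoch",  # assumption
)
```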

Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log        | 0.96  | 6    | 0.6512          | 0.6957   |
| 0.7053        | 1.92  | 12   | 0.6311          | 0.6809   |
| 0.7053        | 2.88  | 18   | 0.5361          | 0.7277   |
| 0.5163        | 4.0   | 25   | 0.3341          | 0.8681   |
| 0.3242        | 4.96  | 31   | 0.3167          | 0.8809   |
| 0.3242        | 5.92  | 37   | 0.3960          | 0.8191   |
| 0.2779        | 6.88  | 43   | 0.3818          | 0.8255   |
| 0.2348        | 8.0   | 50   | 0.5019          | 0.7362   |
| 0.2348        | 8.96  | 56   | 0.2944          | 0.8851   |
| 0.26          | 9.92  | 62   | 0.2414          | 0.9234   |
| 0.26          | 10.88 | 68   | 0.3664          | 0.8298   |
| 0.2778        | 12.0  | 75   | 0.2505          | 0.9043   |
| 0.2271        | 12.96 | 81   | 0.6277          | 0.6298   |
| 0.2271        | 13.92 | 87   | 0.2753          | 0.8745   |
| 0.2488        | 14.88 | 93   | 0.6249          | 0.6957   |
| 0.2729        | 16.0  | 100  | 0.5195          | 0.7149   |
| 0.2729        | 16.96 | 106  | 0.7984          | 0.5745   |
| 0.3261        | 17.92 | 112  | 0.4631          | 0.7723   |
| 0.3261        | 18.88 | 118  | 1.1010          | 0.5149   |
| 0.2212        | 20.0  | 125  | 0.2337          | 0.9170   |
| 0.2802        | 20.96 | 131  | 0.4638          | 0.7574   |
| 0.2802        | 21.92 | 137  | 0.3859          | 0.8362   |
| 0.2112        | 22.88 | 143  | 0.6708          | 0.6894   |
| 0.2231        | 24.0  | 150  | 0.3387          | 0.8681   |
| 0.2231        | 24.96 | 156  | 0.7045          | 0.6553   |
| 0.2037        | 25.92 | 162  | 0.3958          | 0.8277   |
| 0.2037        | 26.88 | 168  | 0.5082          | 0.7702   |
| 0.1845        | 28.0  | 175  | 0.5991          | 0.7234   |
| 0.1898        | 28.96 | 181  | 0.5108          | 0.7617   |
| 0.1898        | 29.92 | 187  | 0.2720          | 0.9085   |
| 0.2118        | 30.88 | 193  | 0.4936          | 0.7851   |
| 0.2097        | 32.0  | 200  | 0.3748          | 0.8404   |
| 0.2097        | 32.96 | 206  | 0.5048          | 0.7766   |
| 0.1704        | 33.92 | 212  | 0.4368          | 0.7957   |
| 0.1704        | 34.88 | 218  | 0.6959          | 0.6830   |
| 0.1962        | 36.0  | 225  | 1.0097          | 0.5957   |
| 0.1686        | 36.96 | 231  | 0.4992          | 0.7915   |
| 0.1686        | 37.92 | 237  | 0.5374          | 0.7574   |
| 0.1855        | 38.88 | 243  | 0.3710          | 0.8340   |
| 0.1528        | 40.0  | 250  | 0.3631          | 0.8447   |
| 0.1528        | 40.96 | 256  | 0.5589          | 0.7681   |
| 0.1523        | 41.92 | 262  | 0.5147          | 0.7809   |
| 0.1523        | 42.88 | 268  | 0.5299          | 0.7638   |
| 0.1709        | 44.0  | 275  | 0.5937          | 0.7447   |
| 0.1527        | 44.96 | 281  | 0.5969          | 0.7383   |
| 0.1527        | 45.92 | 287  | 0.6439          | 0.7255   |
| 0.1397        | 46.88 | 293  | 0.7721          | 0.6723   |
| 0.1538        | 48.0  | 300  | 0.5768          | 0.7702   |
| 0.1538        | 48.96 | 306  | 0.5801          | 0.7596   |
| 0.1466        | 49.92 | 312  | 0.5673          | 0.7574   |
| 0.1466        | 50.88 | 318  | 0.6469          | 0.7085   |
| 0.1302        | 52.0  | 325  | 0.7276          | 0.6957   |
| 0.1565        | 52.96 | 331  | 0.8247          | 0.6723   |
| 0.1565        | 53.92 | 337  | 0.4811          | 0.7979   |
| 0.1267        | 54.88 | 343  | 0.6373          | 0.7021   |
| 0.1424        | 56.0  | 350  | 0.7252          | 0.6723   |
| 0.1424        | 56.96 | 356  | 0.5697          | 0.7489   |
| 0.1053        | 57.92 | 362  | 0.7067          | 0.6957   |
| 0.1053        | 58.88 | 368  | 0.6577          | 0.7064   |
| 0.1301        | 60.0  | 375  | 0.5326          | 0.7745   |
| 0.0906        | 60.96 | 381  | 0.5468          | 0.7851   |
| 0.0906        | 61.92 | 387  | 0.4413          | 0.8277   |
| 0.0974        | 62.88 | 393  | 0.5479          | 0.7660   |
| 0.1133        | 64.0  | 400  | 0.7109          | 0.7043   |
| 0.1133        | 64.96 | 406  | 0.5735          | 0.7617   |
| 0.1189        | 65.92 | 412  | 0.4084          | 0.8298   |
| 0.1189        | 66.88 | 418  | 0.5716          | 0.7489   |
| 0.1064        | 68.0  | 425  | 0.5537          | 0.7553   |
| 0.1084        | 68.96 | 431  | 0.4569          | 0.8021   |
| 0.1084        | 69.92 | 437  | 0.5227          | 0.7617   |
| 0.1054        | 70.88 | 443  | 0.5995          | 0.7277   |
| 0.1005        | 72.0  | 450  | 0.5560          | 0.7638   |
| 0.1005        | 72.96 | 456  | 0.4550          | 0.8064   |
| 0.1028        | 73.92 | 462  | 0.4404          | 0.8234   |
| 0.1028        | 74.88 | 468  | 0.4761          | 0.7957   |
| 0.0917        | 76.0  | 475  | 0.5278          | 0.7681   |
| 0.1009        | 76.8  | 480  | 0.5346          | 0.7617   |
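
The Accuracy column is standard classification accuracy. The exact evaluation code is not shown on the card; a typical compute_metrics function for a Trainer-based image-classification run looks like the sketch below.

```python
import numpy as np
import evaluate

# Accuracy over argmax predictions; illustrative, not the card's verified code.
accuracy = evaluate.load("accuracy")

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return accuracy.compute(predictions=predictions, references=labels)
```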

Framework versions

  • Transformers 4.44.2
  • Pytorch 2.4.1+cu121
  • Datasets 3.2.0
  • Tokenizers 0.19.1