Version_Test_ASAP_FineTuningBERT_AugV14_k5_task1_organization_k5_k5_fold2

This model is a fine-tuned version of bert-base-uncased on an unspecified dataset. It achieves the following results on the evaluation set:

  • Loss: 0.6278
  • QWK (quadratic weighted Cohen's kappa): 0.6047
  • MSE: 0.6275
  • RMSE: 0.7921
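The headline metrics can be reproduced with standard tooling. A minimal sketch, assuming integer essay-style score labels and using scikit-learn's quadratic-weighted Cohen's kappa for QWK (the example scores are hypothetical, not from this run):

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

def evaluate_scores(y_true, y_pred):
    """Compute the three metrics reported on this card: QWK, MSE, RMSE."""
    diff = np.asarray(y_true) - np.asarray(y_pred)
    mse = float(np.mean(diff ** 2))
    return {
        "qwk": cohen_kappa_score(y_true, y_pred, weights="quadratic"),
        "mse": mse,
        "rmse": mse ** 0.5,
    }

# Hypothetical integer scores for illustration
print(evaluate_scores([1, 2, 3, 3, 4], [1, 2, 2, 3, 4]))
```

Note that QWK rewards getting *close* to the true score (disagreements are weighted by squared distance), which is why it is the standard metric for ordinal essay-scoring tasks such as ASAP.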

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 2e-05
  • train_batch_size: 64
  • eval_batch_size: 64
  • seed: 42
  • optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: linear
  • num_epochs: 100 (the logged results below stop at epoch 40, suggesting training ended early)
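As a sketch, the hyperparameters above map onto the keyword names used by `transformers.TrainingArguments`; the name mapping follows the standard Hugging Face API and is an assumption, not taken from the card:

```python
# The card's hyperparameters expressed with transformers.TrainingArguments
# keyword names; pass as TrainingArguments(output_dir=..., **hparams).
hparams = dict(
    learning_rate=2e-5,
    per_device_train_batch_size=64,   # train_batch_size
    per_device_eval_batch_size=64,    # eval_batch_size
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=100,
)
print(sorted(hparams))
```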

Training results

| Training Loss | Epoch | Step | Validation Loss | QWK    | MSE    | RMSE   |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|
| No log        | 1.0   | 4    | 6.9608          | 0.0    | 6.9610 | 2.6384 |
| No log        | 2.0   | 8    | 4.1239          | 0.0    | 4.1242 | 2.0308 |
| No log        | 3.0   | 12   | 2.4639          | 0.0451 | 2.4643 | 1.5698 |
| No log        | 4.0   | 16   | 1.4713          | 0.0107 | 1.4719 | 1.2132 |
| No log        | 5.0   | 20   | 0.9159          | 0.0    | 0.9164 | 0.9573 |
| No log        | 6.0   | 24   | 0.8217          | 0.0592 | 0.8222 | 0.9067 |
| No log        | 7.0   | 28   | 0.7869          | 0.1305 | 0.7870 | 0.8871 |
| No log        | 8.0   | 32   | 0.5769          | 0.4875 | 0.5765 | 0.7593 |
| No log        | 9.0   | 36   | 1.2039          | 0.3855 | 1.2034 | 1.0970 |
| No log        | 10.0  | 40   | 0.7219          | 0.5504 | 0.7213 | 0.8493 |
| No log        | 11.0  | 44   | 0.5855          | 0.5053 | 0.5851 | 0.7649 |
| No log        | 12.0  | 48   | 0.5793          | 0.6124 | 0.5789 | 0.7608 |
| No log        | 13.0  | 52   | 0.6014          | 0.5063 | 0.6011 | 0.7753 |
| No log        | 14.0  | 56   | 0.5462          | 0.6095 | 0.5457 | 0.7387 |
| No log        | 15.0  | 60   | 0.5715          | 0.6156 | 0.5710 | 0.7556 |
| No log        | 16.0  | 64   | 0.6117          | 0.6076 | 0.6113 | 0.7818 |
| No log        | 17.0  | 68   | 0.6316          | 0.5872 | 0.6312 | 0.7945 |
| No log        | 18.0  | 72   | 0.6118          | 0.5986 | 0.6114 | 0.7819 |
| No log        | 19.0  | 76   | 0.5845          | 0.6070 | 0.5842 | 0.7643 |
| No log        | 20.0  | 80   | 0.6532          | 0.6212 | 0.6526 | 0.8078 |
| No log        | 21.0  | 84   | 0.5561          | 0.6170 | 0.5557 | 0.7455 |
| No log        | 22.0  | 88   | 0.7184          | 0.6027 | 0.7178 | 0.8472 |
| No log        | 23.0  | 92   | 0.5178          | 0.6124 | 0.5175 | 0.7194 |
| No log        | 24.0  | 96   | 0.7115          | 0.5447 | 0.7110 | 0.8432 |
| No log        | 25.0  | 100  | 0.5995          | 0.6070 | 0.5991 | 0.7740 |
| No log        | 26.0  | 104  | 0.5932          | 0.6179 | 0.5929 | 0.7700 |
| No log        | 27.0  | 108  | 0.7895          | 0.5480 | 0.7891 | 0.8883 |
| No log        | 28.0  | 112  | 0.8963          | 0.5325 | 0.8958 | 0.9465 |
| No log        | 29.0  | 116  | 0.6200          | 0.6002 | 0.6196 | 0.7872 |
| No log        | 30.0  | 120  | 0.9308          | 0.5617 | 0.9301 | 0.9644 |
| No log        | 31.0  | 124  | 0.5955          | 0.6092 | 0.5951 | 0.7714 |
| No log        | 32.0  | 128  | 0.6985          | 0.5962 | 0.6980 | 0.8355 |
| No log        | 33.0  | 132  | 0.7799          | 0.5605 | 0.7794 | 0.8828 |
| No log        | 34.0  | 136  | 0.7241          | 0.5720 | 0.7237 | 0.8507 |
| No log        | 35.0  | 140  | 0.9519          | 0.5046 | 0.9514 | 0.9754 |
| No log        | 36.0  | 144  | 0.6803          | 0.5748 | 0.6800 | 0.8246 |
| No log        | 37.0  | 148  | 0.7750          | 0.5386 | 0.7747 | 0.8802 |
| No log        | 38.0  | 152  | 0.8747          | 0.5154 | 0.8743 | 0.9350 |
| No log        | 39.0  | 156  | 0.6594          | 0.5456 | 0.6592 | 0.8119 |
| No log        | 40.0  | 160  | 0.6278          | 0.6047 | 0.6275 | 0.7921 |
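The Step column advances by 4 per epoch; with a train batch size of 64, that bounds the size of the training split, and the final step count confirms the run stopped at epoch 40 of the configured 100. A quick arithmetic check, assuming steps per epoch = ceil(n_examples / batch_size):

```python
train_batch_size = 64
steps_per_epoch = 4  # from the Step column: step 4 at epoch 1, 8 at epoch 2, ...

# ceil(n / 64) == 4 implies the training split holds between 193 and 256 examples
n_min = train_batch_size * (steps_per_epoch - 1) + 1
n_max = train_batch_size * steps_per_epoch
print(n_min, n_max)  # 193 256

# The final logged step matches epoch 40
print(40 * steps_per_epoch)  # 160
```

Such a small training split (at most a few hundred examples per fold) is consistent with the noisy epoch-to-epoch validation metrics above.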

Framework versions

  • Transformers 4.47.0
  • Pytorch 2.5.1+cu121
  • Datasets 3.2.0
  • Tokenizers 0.21.0
