bert-uncased-finetuned-csv-qa

This model is a fine-tuned version of bert-base-uncased on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 0.1688

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 2e-05
  • train_batch_size: 16
  • eval_batch_size: 8
  • seed: 42
  • optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: linear
  • num_epochs: 50
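With lr_scheduler_type set to linear and no warmup steps listed, the learning rate presumably decays linearly from 2e-05 to zero over the 100 optimization steps (50 epochs × 2 steps per epoch, per the table below). A minimal sketch of that schedule; the function name is illustrative, not taken from the training code:

```python
def linear_lr(step: int, total_steps: int = 100, base_lr: float = 2e-05) -> float:
    """Linearly decay the learning rate from base_lr to 0 (no warmup)."""
    remaining = max(0, total_steps - step)
    return base_lr * remaining / total_steps

print(linear_lr(0))    # start of training: full learning rate, 2e-05
print(linear_lr(50))   # halfway through: 1e-05
print(linear_lr(100))  # final step: 0.0
```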

Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 2 | 5.4411 |
| No log | 2.0 | 4 | 4.5683 |
| No log | 3.0 | 6 | 3.9019 |
| No log | 4.0 | 8 | 3.3646 |
| No log | 5.0 | 10 | 2.8095 |
| No log | 6.0 | 12 | 2.3153 |
| No log | 7.0 | 14 | 1.9456 |
| No log | 8.0 | 16 | 1.6904 |
| No log | 9.0 | 18 | 1.4633 |
| No log | 10.0 | 20 | 1.2169 |
| No log | 11.0 | 22 | 0.9963 |
| No log | 12.0 | 24 | 0.8378 |
| No log | 13.0 | 26 | 0.7068 |
| No log | 14.0 | 28 | 0.6008 |
| No log | 15.0 | 30 | 0.5232 |
| No log | 16.0 | 32 | 0.4659 |
| No log | 17.0 | 34 | 0.4140 |
| No log | 18.0 | 36 | 0.3740 |
| No log | 19.0 | 38 | 0.3459 |
| No log | 20.0 | 40 | 0.3142 |
| No log | 21.0 | 42 | 0.2912 |
| No log | 22.0 | 44 | 0.2766 |
| No log | 23.0 | 46 | 0.2710 |
| No log | 24.0 | 48 | 0.2711 |
| No log | 25.0 | 50 | 0.2730 |
| No log | 26.0 | 52 | 0.2645 |
| No log | 27.0 | 54 | 0.2507 |
| No log | 28.0 | 56 | 0.2432 |
| No log | 29.0 | 58 | 0.2373 |
| No log | 30.0 | 60 | 0.2405 |
| No log | 31.0 | 62 | 0.2397 |
| No log | 32.0 | 64 | 0.2352 |
| No log | 33.0 | 66 | 0.2298 |
| No log | 34.0 | 68 | 0.2255 |
| No log | 35.0 | 70 | 0.2191 |
| No log | 36.0 | 72 | 0.2129 |
| No log | 37.0 | 74 | 0.2068 |
| No log | 38.0 | 76 | 0.2039 |
| No log | 39.0 | 78 | 0.2036 |
| No log | 40.0 | 80 | 0.2004 |
| No log | 41.0 | 82 | 0.2030 |
| No log | 42.0 | 84 | 0.2003 |
| No log | 43.0 | 86 | 0.1958 |
| No log | 44.0 | 88 | 0.1915 |
| No log | 45.0 | 90 | 0.1868 |
| No log | 46.0 | 92 | 0.1812 |
| No log | 47.0 | 94 | 0.1769 |
| No log | 48.0 | 96 | 0.1729 |
| No log | 49.0 | 98 | 0.1698 |
| No log | 50.0 | 100 | 0.1688 |
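The step counter advances by 2 per epoch. Assuming no gradient accumulation and that the last (possibly partial) batch is kept, steps per epoch equals ceil(n_examples / train_batch_size), which bounds the size of the otherwise-unknown training set:

```python
import math

train_batch_size = 16
steps_per_epoch = 2  # read off the table: 2 optimization steps per epoch

# ceil(n / batch_size) == steps_per_epoch implies
# batch_size * (steps_per_epoch - 1) < n <= batch_size * steps_per_epoch
min_examples = train_batch_size * (steps_per_epoch - 1) + 1
max_examples = train_batch_size * steps_per_epoch
print(min_examples, max_examples)  # 17 32

# sanity check: every n in the range reproduces the observed step count
assert all(math.ceil(n / train_batch_size) == steps_per_epoch
           for n in range(min_examples, max_examples + 1))
```

So the training set likely holds between 17 and 32 examples, consistent with a small, quickly converging fine-tuning run.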

Framework versions

  • Transformers 4.46.3
  • Pytorch 2.5.1+cpu
  • Datasets 3.1.0
  • Tokenizers 0.20.3
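Assuming the checkpoint carries an extractive question-answering head (consistent with the "qa" suffix in the repository id shown in this card), loading it with a Transformers pipeline would look roughly like this; the question and context strings are placeholders:

```python
from transformers import pipeline

# Usage sketch; the repository id is taken from this card.
qa = pipeline("question-answering",
              model="SzymonKozl/bert-uncased-finetuned-csv-qa")

answer = qa(question="Which column holds the totals?",
            context="The CSV file has columns id, name, and total.")
print(answer["answer"], answer["score"])
```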
Model: SzymonKozl/bert-uncased-finetuned-csv-qa (109M params, F32, Safetensors)