bert-bias-classifier_teacher

This model is a fine-tuned version of google-bert/bert-base-uncased on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 0.0000
  • Accuracy: 1.0
  • AUC: 1.0
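For reference, the reported Accuracy and AUC metrics can be reproduced from raw classifier scores. The following is a minimal pure-Python sketch of how these metrics are defined, not the card's own evaluation code (which is not shown):

```python
def accuracy(y_true, y_score, threshold=0.5):
    """Fraction of examples whose thresholded score matches the label."""
    preds = [1 if s >= threshold else 0 for s in y_score]
    return sum(p == t for p, t in zip(preds, y_true)) / len(y_true)

def roc_auc(y_true, y_score):
    """ROC AUC via the Mann-Whitney formulation: the probability that a
    randomly chosen positive outscores a randomly chosen negative
    (ties count as half a win)."""
    pos = [s for s, t in zip(y_score, y_true) if t == 1]
    neg = [s for s, t in zip(y_score, y_true) if t == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# A perfectly separated score distribution gives Accuracy 1.0 and AUC 1.0,
# matching the evaluation figures reported above.
print(accuracy([0, 0, 1, 1], [0.1, 0.2, 0.8, 0.9]),
      roc_auc([0, 0, 1, 1], [0.1, 0.2, 0.8, 0.9]))  # 1.0 1.0
```

An AUC of 1.0 together with a validation loss of 0.0000 suggests the model separates the two classes completely on this evaluation set; without details of the dataset split, label leakage or an easy split cannot be ruled out.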

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.0002
  • train_batch_size: 20
  • eval_batch_size: 20
  • seed: 42
  • optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: linear
  • num_epochs: 80
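The hyperparameters above map onto a `transformers.TrainingArguments` configuration roughly as follows. This is an illustrative sketch, not the exact training script; `output_dir` and anything else not listed in the card is a placeholder:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="bert-bias-classifier_teacher",  # placeholder path
    learning_rate=2e-4,
    per_device_train_batch_size=20,
    per_device_eval_batch_size=20,
    seed=42,
    optim="adamw_torch",          # betas=(0.9, 0.999), eps=1e-8 are the defaults
    lr_scheduler_type="linear",
    num_train_epochs=80,
)
```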

Training results

| Training Loss | Epoch | Step   | Validation Loss | Accuracy | AUC |
|:-------------:|:-----:|:------:|:---------------:|:--------:|:---:|
| 0.0312        | 1.0   | 1600   | 0.0007          | 1.0      | 1.0 |
| 0.0093        | 2.0   | 3200   | 0.0023          | 0.999    | 1.0 |
| 0.0088        | 3.0   | 4800   | 0.0010          | 1.0      | 1.0 |
| 0.0061        | 4.0   | 6400   | 0.0002          | 1.0      | 1.0 |
| 0.0075        | 5.0   | 8000   | 0.0000          | 1.0      | 1.0 |
| 0.0052        | 6.0   | 9600   | 0.0001          | 1.0      | 1.0 |
| 0.0059        | 7.0   | 11200  | 0.0055          | 0.998    | 1.0 |
| 0.0048        | 8.0   | 12800  | 0.0048          | 0.998    | 1.0 |
| 0.0034        | 9.0   | 14400  | 0.0000          | 1.0      | 1.0 |
| 0.004         | 10.0  | 16000  | 0.0001          | 1.0      | 1.0 |
| 0.0043        | 11.0  | 17600  | 0.0001          | 1.0      | 1.0 |
| 0.0037        | 12.0  | 19200  | 0.0000          | 1.0      | 1.0 |
| 0.0032        | 13.0  | 20800  | 0.0000          | 1.0      | 1.0 |
| 0.0033        | 14.0  | 22400  | 0.0001          | 1.0      | 1.0 |
| 0.0028        | 15.0  | 24000  | 0.0003          | 1.0      | 1.0 |
| 0.0032        | 16.0  | 25600  | 0.0000          | 1.0      | 1.0 |
| 0.0021        | 17.0  | 27200  | 0.0000          | 1.0      | 1.0 |
| 0.002         | 18.0  | 28800  | 0.0001          | 1.0      | 1.0 |
| 0.0019        | 19.0  | 30400  | 0.0014          | 1.0      | 1.0 |
| 0.0021        | 20.0  | 32000  | 0.0000          | 1.0      | 1.0 |
| 0.0014        | 21.0  | 33600  | 0.0000          | 1.0      | 1.0 |
| 0.0018        | 22.0  | 35200  | 0.0001          | 1.0      | 1.0 |
| 0.0025        | 23.0  | 36800  | 0.0001          | 1.0      | 1.0 |
| 0.0021        | 24.0  | 38400  | 0.0000          | 1.0      | 1.0 |
| 0.0015        | 25.0  | 40000  | 0.0000          | 1.0      | 1.0 |
| 0.0018        | 26.0  | 41600  | 0.0000          | 1.0      | 1.0 |
| 0.0016        | 27.0  | 43200  | 0.0000          | 1.0      | 1.0 |
| 0.0016        | 28.0  | 44800  | 0.0000          | 1.0      | 1.0 |
| 0.002         | 29.0  | 46400  | 0.0000          | 1.0      | 1.0 |
| 0.002         | 30.0  | 48000  | 0.0000          | 1.0      | 1.0 |
| 0.0014        | 31.0  | 49600  | 0.0000          | 1.0      | 1.0 |
| 0.0021        | 32.0  | 51200  | 0.0000          | 1.0      | 1.0 |
| 0.0016        | 33.0  | 52800  | 0.0000          | 1.0      | 1.0 |
| 0.0018        | 34.0  | 54400  | 0.0000          | 1.0      | 1.0 |
| 0.0012        | 35.0  | 56000  | 0.0004          | 1.0      | 1.0 |
| 0.0014        | 36.0  | 57600  | 0.0000          | 1.0      | 1.0 |
| 0.0014        | 37.0  | 59200  | 0.0000          | 1.0      | 1.0 |
| 0.0016        | 38.0  | 60800  | 0.0002          | 1.0      | 1.0 |
| 0.0015        | 39.0  | 62400  | 0.0000          | 1.0      | 1.0 |
| 0.0008        | 40.0  | 64000  | 0.0000          | 1.0      | 1.0 |
| 0.0008        | 41.0  | 65600  | 0.0001          | 1.0      | 1.0 |
| 0.0011        | 42.0  | 67200  | 0.0000          | 1.0      | 1.0 |
| 0.0011        | 43.0  | 68800  | 0.0000          | 1.0      | 1.0 |
| 0.0015        | 44.0  | 70400  | 0.0000          | 1.0      | 1.0 |
| 0.0011        | 45.0  | 72000  | 0.0000          | 1.0      | 1.0 |
| 0.0009        | 46.0  | 73600  | 0.0000          | 1.0      | 1.0 |
| 0.0006        | 47.0  | 75200  | 0.0000          | 1.0      | 1.0 |
| 0.0011        | 48.0  | 76800  | 0.0000          | 1.0      | 1.0 |
| 0.0015        | 49.0  | 78400  | 0.0001          | 1.0      | 1.0 |
| 0.0009        | 50.0  | 80000  | 0.0000          | 1.0      | 1.0 |
| 0.0012        | 51.0  | 81600  | 0.0000          | 1.0      | 1.0 |
| 0.0005        | 52.0  | 83200  | 0.0001          | 1.0      | 1.0 |
| 0.0008        | 53.0  | 84800  | 0.0000          | 1.0      | 1.0 |
| 0.0009        | 54.0  | 86400  | 0.0010          | 1.0      | 1.0 |
| 0.0006        | 55.0  | 88000  | 0.0002          | 1.0      | 1.0 |
| 0.0008        | 56.0  | 89600  | 0.0000          | 1.0      | 1.0 |
| 0.0008        | 57.0  | 91200  | 0.0000          | 1.0      | 1.0 |
| 0.0009        | 58.0  | 92800  | 0.0000          | 1.0      | 1.0 |
| 0.0005        | 59.0  | 94400  | 0.0000          | 1.0      | 1.0 |
| 0.0008        | 60.0  | 96000  | 0.0000          | 1.0      | 1.0 |
| 0.0004        | 61.0  | 97600  | 0.0000          | 1.0      | 1.0 |
| 0.0006        | 62.0  | 99200  | 0.0000          | 1.0      | 1.0 |
| 0.0005        | 63.0  | 100800 | 0.0000          | 1.0      | 1.0 |
| 0.0007        | 64.0  | 102400 | 0.0000          | 1.0      | 1.0 |
| 0.0005        | 65.0  | 104000 | 0.0000          | 1.0      | 1.0 |
| 0.0009        | 66.0  | 105600 | 0.0000          | 1.0      | 1.0 |
| 0.0009        | 67.0  | 107200 | 0.0000          | 1.0      | 1.0 |
| 0.0006        | 68.0  | 108800 | 0.0000          | 1.0      | 1.0 |
| 0.0003        | 69.0  | 110400 | 0.0000          | 1.0      | 1.0 |
| 0.0008        | 70.0  | 112000 | 0.0000          | 1.0      | 1.0 |
| 0.0005        | 71.0  | 113600 | 0.0000          | 1.0      | 1.0 |
| 0.0003        | 72.0  | 115200 | 0.0000          | 1.0      | 1.0 |
| 0.0005        | 73.0  | 116800 | 0.0000          | 1.0      | 1.0 |
| 0.0006        | 74.0  | 118400 | 0.0000          | 1.0      | 1.0 |
| 0.0003        | 75.0  | 120000 | 0.0000          | 1.0      | 1.0 |
| 0.0008        | 76.0  | 121600 | 0.0000          | 1.0      | 1.0 |
| 0.0004        | 77.0  | 123200 | 0.0000          | 1.0      | 1.0 |
| 0.0007        | 78.0  | 124800 | 0.0000          | 1.0      | 1.0 |
| 0.0005        | 79.0  | 126400 | 0.0000          | 1.0      | 1.0 |
| 0.0005        | 80.0  | 128000 | 0.0000          | 1.0      | 1.0 |

Framework versions

  • Transformers 4.51.3
  • Pytorch 2.7.1
  • Datasets 3.6.0
  • Tokenizers 0.21.1
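Usage

Under these framework versions, the model can be loaded with the standard `transformers` text-classification pipeline. This is a hypothetical usage sketch: the card does not document the label names or the expected input format, so treat the example text and output labels as assumptions:

```python
from transformers import pipeline

# Repo id taken from the model page; label semantics are undocumented.
classifier = pipeline(
    "text-classification",
    model="lugghio/bert-bias-classifier_teacher",
)
print(classifier("Example sentence to score for bias."))
```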
