# UIT-NO-PREroberta-base-finetuned
This model is a fine-tuned version of [FacebookAI/roberta-base](https://huggingface.co/FacebookAI/roberta-base) on an unspecified dataset. It achieves the following results on the evaluation set:
- Loss: 0.6381
- F1: 0.7484
- ROC AUC: 0.8089
- Accuracy: 0.4819
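The metric combination above (micro-style F1 and ROC AUC alongside a much lower "Accuracy") is characteristic of a multi-label setup, where accuracy is the strict exact-match (subset) score: a sample only counts as correct when every label is predicted right. A minimal sketch of how such metrics can be computed with scikit-learn, on toy labels rather than this model's actual evaluation data:

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score

# Toy multi-label ground truth and predicted probabilities (3 samples, 4 labels).
y_true = np.array([[1, 0, 1, 0],
                   [0, 1, 0, 0],
                   [1, 1, 0, 1]])
y_prob = np.array([[0.9, 0.2, 0.8, 0.1],
                   [0.3, 0.7, 0.2, 0.4],
                   [0.8, 0.6, 0.4, 0.2]])

# Binarize probabilities at 0.5 to get hard label predictions.
y_pred = (y_prob >= 0.5).astype(int)

f1 = f1_score(y_true, y_pred, average="micro")            # micro-averaged F1
roc_auc = roc_auc_score(y_true, y_prob, average="micro")  # micro-averaged ROC AUC
subset_acc = accuracy_score(y_true, y_pred)               # exact-match accuracy
```

Note how the subset accuracy (2/3 here: the third sample misses one label) is lower than the micro F1, since partially correct samples still contribute true positives to F1.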
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (`adamw_torch`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 30
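The schedule above ramps the learning rate linearly over the first 100 steps and then decays it along a half cosine. A minimal re-implementation of that shape (mirroring the behavior of transformers' `get_cosine_schedule_with_warmup`, not the library's exact code):

```python
import math

def cosine_lr_with_warmup(step, total_steps, base_lr=2e-5, warmup_steps=100):
    """Learning rate at `step`: linear warmup, then cosine decay to 0."""
    if step < warmup_steps:
        # Linear ramp from 0 to base_lr over the warmup period.
        return base_lr * step / warmup_steps
    # Fraction of the post-warmup schedule completed, in [0, 1].
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    # Half-cosine decay from base_lr down to 0.
    return base_lr * 0.5 * (1.0 + math.cos(math.pi * progress))
```

With 139 steps per epoch and 30 epochs, total_steps would be 4170: the rate peaks at 2e-05 at step 100 and reaches (approximately) zero at the final step.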
### Training results
Training Loss | Epoch | Step | Validation Loss | F1 | ROC AUC | Accuracy |
---|---|---|---|---|---|---|
0.5255 | 1.0 | 139 | 0.4719 | 0.4630 | 0.6487 | 0.3177 |
0.4113 | 2.0 | 278 | 0.3648 | 0.6966 | 0.7646 | 0.4513 |
0.3089 | 3.0 | 417 | 0.3617 | 0.6980 | 0.7689 | 0.4422 |
0.1934 | 4.0 | 556 | 0.3952 | 0.7089 | 0.7656 | 0.4458 |
0.1829 | 5.0 | 695 | 0.3931 | 0.7257 | 0.7871 | 0.4549 |
0.1347 | 6.0 | 834 | 0.4276 | 0.6949 | 0.7681 | 0.4404 |
0.1027 | 7.0 | 973 | 0.4205 | 0.7311 | 0.7935 | 0.4621 |
0.0771 | 8.0 | 1112 | 0.4617 | 0.7286 | 0.7917 | 0.4567 |
0.0642 | 9.0 | 1251 | 0.4679 | 0.7355 | 0.8056 | 0.4567 |
0.0493 | 10.0 | 1390 | 0.5254 | 0.7186 | 0.7834 | 0.4549 |
0.0339 | 11.0 | 1529 | 0.5343 | 0.7250 | 0.7909 | 0.4621 |
0.0272 | 12.0 | 1668 | 0.5412 | 0.7245 | 0.7856 | 0.4747 |
0.0225 | 13.0 | 1807 | 0.5775 | 0.7319 | 0.7936 | 0.4621 |
0.0311 | 14.0 | 1946 | 0.5828 | 0.7440 | 0.8056 | 0.4747 |
0.0091 | 15.0 | 2085 | 0.5922 | 0.7351 | 0.7978 | 0.4711 |
0.0077 | 16.0 | 2224 | 0.6233 | 0.7254 | 0.7889 | 0.4711 |
0.0075 | 17.0 | 2363 | 0.6304 | 0.7277 | 0.7909 | 0.4765 |
0.0047 | 18.0 | 2502 | 0.6235 | 0.7335 | 0.7996 | 0.4765 |
0.0041 | 19.0 | 2641 | 0.6322 | 0.7405 | 0.8015 | 0.4747 |
0.0036 | 20.0 | 2780 | 0.6420 | 0.7368 | 0.7982 | 0.4711 |
0.0033 | 21.0 | 2919 | 0.6381 | 0.7484 | 0.8089 | 0.4819 |
0.003 | 22.0 | 3058 | 0.6516 | 0.7438 | 0.8056 | 0.4747 |
0.003 | 23.0 | 3197 | 0.6617 | 0.7379 | 0.7993 | 0.4675 |
0.0028 | 24.0 | 3336 | 0.6647 | 0.7422 | 0.8032 | 0.4819 |
0.0028 | 25.0 | 3475 | 0.6717 | 0.7414 | 0.8016 | 0.4747 |
0.003 | 26.0 | 3614 | 0.6629 | 0.7406 | 0.8026 | 0.4711 |
0.003 | 27.0 | 3753 | 0.6657 | 0.7440 | 0.8038 | 0.4747 |
0.0026 | 28.0 | 3892 | 0.6662 | 0.7443 | 0.8039 | 0.4693 |
0.0033 | 29.0 | 4031 | 0.6673 | 0.7437 | 0.8034 | 0.4711 |
0.0026 | 30.0 | 4170 | 0.6673 | 0.7437 | 0.8034 | 0.4711 |
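The final evaluation metrics reported at the top match the epoch-21 row (F1 0.7484, validation loss 0.6381) rather than the epoch-30 row, which suggests the best checkpoint by validation F1 was kept (e.g. via `load_best_model_at_end`); that is an inference from the numbers, not something the log states. Selecting the best epoch from results like these is a one-liner (data abbreviated from the table above):

```python
# (epoch, validation F1) pairs, abbreviated from the training results table.
results = [(19, 0.7405), (20, 0.7368), (21, 0.7484), (22, 0.7438), (30, 0.7437)]

# Pick the row with the highest validation F1.
best_epoch, best_f1 = max(results, key=lambda row: row[1])
```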
### Framework versions

- Transformers 4.48.1
- PyTorch 2.4.0
- Datasets 3.0.1
- Tokenizers 0.21.0