# roberta-large-finetuned-augmentation-LUNAR-TAPT-MICRO
This model is a fine-tuned version of [FacebookAI/roberta-large](https://huggingface.co/FacebookAI/roberta-large) on an unspecified dataset. It achieves the following results on the evaluation set (a sketch of how such metrics are typically computed appears after the list):
- Loss: 0.4895
- F1: 0.8563
- ROC AUC: 0.8926
- Accuracy: 0.6522
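The gap between F1 (0.8563) and accuracy (0.6522) suggests a multi-label setup in which accuracy is the strict exact-match (subset) metric, and the `MICRO` suffix in the model name hints at micro-averaged F1 and ROC AUC. These are assumptions, not documented facts; under them, a minimal scikit-learn sketch of the computation (the arrays are hypothetical):

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score

# Hypothetical multi-label targets and sigmoid probabilities for 4 labels.
y_true = np.array([[1, 0, 1, 0],
                   [0, 1, 0, 0],
                   [1, 1, 0, 1]])
y_prob = np.array([[0.91, 0.08, 0.77, 0.30],
                   [0.12, 0.85, 0.05, 0.44],
                   [0.88, 0.61, 0.22, 0.95]])
y_pred = (y_prob >= 0.5).astype(int)  # common 0.5 threshold on sigmoid outputs

print("F1 (micro):", f1_score(y_true, y_pred, average="micro"))
print("ROC AUC (micro):", roc_auc_score(y_true, y_prob, average="micro"))
print("Accuracy (exact match / subset):", accuracy_score(y_true, y_pred))
```

Note that `accuracy_score` on multi-label inputs counts a sample as correct only when every label matches, which would explain accuracy trailing F1 by roughly twenty points.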
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a sketch of how they might map to `TrainingArguments` follows the list):
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 20
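A minimal sketch of how this configuration might map onto `transformers.TrainingArguments`; the `output_dir` and `eval_strategy` values are assumptions, everything else mirrors the list above:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="roberta-large-finetuned-augmentation-LUNAR-TAPT-MICRO",  # assumed
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    adam_beta1=0.9,            # Adam betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_steps=100,
    num_train_epochs=20,
    eval_strategy="epoch",     # assumed: the results table reports per-epoch validation
)
```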
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1     | ROC AUC | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
| 0.3483        | 1.0   | 317  | 0.3076          | 0.7777 | 0.8325  | 0.5118   |
| 0.2331        | 2.0   | 634  | 0.2906          | 0.8011 | 0.8453  | 0.5513   |
| 0.1736        | 3.0   | 951  | 0.2906          | 0.8187 | 0.8659  | 0.5662   |
| 0.1174        | 4.0   | 1268 | 0.2952          | 0.8286 | 0.8695  | 0.5962   |
| 0.0857        | 5.0   | 1585 | 0.3265          | 0.8326 | 0.8755  | 0.6104   |
| 0.0574        | 6.0   | 1902 | 0.3470          | 0.8295 | 0.8692  | 0.6065   |
| 0.0455        | 7.0   | 2219 | 0.3953          | 0.8354 | 0.8764  | 0.6065   |
| 0.033         | 8.0   | 2536 | 0.4079          | 0.8328 | 0.8733  | 0.6151   |
| 0.0119        | 9.0   | 2853 | 0.4188          | 0.8468 | 0.8859  | 0.6285   |
| 0.0173        | 10.0  | 3170 | 0.4492          | 0.8476 | 0.8913  | 0.6246   |
| 0.0034        | 11.0  | 3487 | 0.4630          | 0.8488 | 0.8916  | 0.6230   |
| 0.0035        | 12.0  | 3804 | 0.4759          | 0.8531 | 0.8939  | 0.6341   |
| 0.0046        | 13.0  | 4121 | 0.4858          | 0.8487 | 0.8874  | 0.6293   |
| 0.0076        | 14.0  | 4438 | 0.4798          | 0.8542 | 0.8926  | 0.6427   |
| 0.0036        | 15.0  | 4755 | 0.4899          | 0.8512 | 0.8888  | 0.6356   |
| 0.0008        | 16.0  | 5072 | 0.4882          | 0.8543 | 0.8925  | 0.6443   |
| 0.0016        | 17.0  | 5389 | 0.4895          | 0.8563 | 0.8926  | 0.6522   |
| 0.0008        | 18.0  | 5706 | 0.4894          | 0.8561 | 0.8934  | 0.6498   |
| 0.0006        | 19.0  | 6023 | 0.4905          | 0.8550 | 0.8930  | 0.6475   |
| 0.0014        | 20.0  | 6340 | 0.4903          | 0.8555 | 0.8933  | 0.6483   |

The evaluation results reported at the top of this card correspond to the epoch-17 checkpoint (validation loss 0.4895, F1 0.8563), rather than the final epoch.
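For completeness, a minimal inference sketch. It assumes a multi-label head whose logits are passed through a sigmoid and thresholded at 0.5; the task, label names, and threshold are not documented in this card:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "sercetexam9/roberta-large-finetuned-augmentation-LUNAR-TAPT-MICRO"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("Example input text.", return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

# Assumption: multi-label setup, so apply a per-label sigmoid and threshold at 0.5.
probs = torch.sigmoid(logits).squeeze(0)
predicted = [model.config.id2label[i] for i, p in enumerate(probs) if p >= 0.5]
print(predicted)
```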
### Framework versions
- Transformers 4.45.1
- PyTorch 2.4.0
- Datasets 3.0.1
- Tokenizers 0.20.0