# cs221-afro-xlmr-large-som-finetuned-10-epochs
This model is a fine-tuned version of [Davlan/afro-xlmr-large](https://huggingface.co/Davlan/afro-xlmr-large) on an unspecified dataset. It achieves the following results on the evaluation set:
- Loss: 0.2615
- F1: 0.5646
- Roc Auc: 0.7421
- Accuracy: 0.5487
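
The F1 / ROC AUC / accuracy trio reported above is the combination the multi-label classification template produces, so the checkpoint most likely carries a multi-label head. A minimal inference sketch under that assumption (the sigmoid activation and the 0.5 decision threshold are assumptions, not documented by this card):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "sercetexam9/cs221-afro-xlmr-large-som-finetuned-10-epochs"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

text = "Waa maalin wanaagsan."  # hypothetical Somali input; replace with your own
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

# Assumption: multi-label head, so score each label independently with a
# sigmoid and keep labels above a 0.5 threshold.
probs = torch.sigmoid(logits)[0]
predicted = [model.config.id2label[i] for i, p in enumerate(probs) if p >= 0.5]
print(predicted)
```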
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a reconstruction as `TrainingArguments` follows the list):
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: `adamw_torch` (AdamW) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 10
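
For reference, the list above maps onto `transformers.TrainingArguments` roughly as follows. This is a reconstruction, not the author's script; `output_dir` and the per-epoch evaluation strategy are assumptions:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="cs221-afro-xlmr-large-som-finetuned-10-epochs",  # assumed
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_steps=100,
    num_train_epochs=10,
    eval_strategy="epoch",  # assumed: the results table has one row per epoch
)
```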
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1     | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
| 0.3379        | 1.0   | 85   | 0.3154          | 0.0043 | 0.5011  | 0.3820   |
| 0.2835        | 2.0   | 170  | 0.2764          | 0.2997 | 0.5909  | 0.4572   |
| 0.2425        | 3.0   | 255  | 0.2445          | 0.5322 | 0.7127  | 0.5398   |
| 0.1957        | 4.0   | 340  | 0.2311          | 0.5256 | 0.7011  | 0.5560   |
| 0.1651        | 5.0   | 425  | 0.2427          | 0.5301 | 0.7157  | 0.5516   |
| 0.1361        | 6.0   | 510  | 0.2615          | 0.5646 | 0.7421  | 0.5487   |
| 0.1102        | 7.0   | 595  | 0.2524          | 0.5478 | 0.7257  | 0.5501   |
| 0.0977        | 8.0   | 680  | 0.2598          | 0.5384 | 0.7230  | 0.5516   |
| 0.0814        | 9.0   | 765  | 0.2643          | 0.5424 | 0.7211  | 0.5560   |
| 0.0724        | 10.0  | 850  | 0.2639          | 0.5397 | 0.7184  | 0.5575   |
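
The card does not say how F1, Roc Auc, and Accuracy were computed. A plausible `compute_metrics` for a multi-label `Trainer` setup, with micro averaging and the 0.5 threshold as assumptions:

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    probs = 1.0 / (1.0 + np.exp(-logits))  # sigmoid over the logits
    preds = (probs >= 0.5).astype(int)     # assumed 0.5 decision threshold
    return {
        "f1": f1_score(labels, preds, average="micro"),        # averaging is a guess
        "roc_auc": roc_auc_score(labels, probs, average="micro"),
        "accuracy": accuracy_score(labels, preds),             # exact-match accuracy
    }
```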
### Framework versions
- Transformers 4.48.0
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0