# Hubert-fine-tuned-persian3
This model is a fine-tuned version of facebook/hubert-base-ls960 on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.6692
- Accuracy: 0.7864
- Precision: 0.7462
- Recall: 0.7132
- F1: 0.7293
- Precision Neutral: 0.8116
- Recall Neutral: 0.8358
- F1 Neutral: 0.8235
- Precision Anger: 0.7462
- Recall Anger: 0.7132
- F1 Anger: 0.7293
## Model description
More information needed
## Intended uses & limitations
More information needed
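In the absence of further documentation, the sketch below shows one plausible way to run inference. It assumes the checkpoint exposes a standard `transformers` audio-classification head, that inputs are 16 kHz mono speech (HuBERT-base was pretrained on 16 kHz audio), and that the label names follow the per-class metrics above (Neutral, Anger); the `librosa` loading step and the file name are illustrative, not documented by this card.

```python
import torch
import librosa
from transformers import AutoFeatureExtractor, AutoModelForAudioClassification

# Assumption: the checkpoint is a standard HuBERT sequence-classification model.
model_id = "vargha/Hubert-fine-tuned-persian3"
feature_extractor = AutoFeatureExtractor.from_pretrained(model_id)
model = AutoModelForAudioClassification.from_pretrained(model_id)
model.eval()

# HuBERT-base expects 16 kHz speech, so resample the input accordingly.
speech, _ = librosa.load("example.wav", sr=16_000, mono=True)

inputs = feature_extractor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

predicted_id = logits.argmax(dim=-1).item()
print(model.config.id2label[predicted_id])  # e.g. "Neutral" or "Anger" (assumed label names)
```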
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: AdamW (`adamw_torch`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- num_epochs: 10
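These values map onto a standard `transformers` `TrainingArguments` configuration roughly as in the sketch below. It is a sketch only: `output_dir` is a placeholder, per-epoch evaluation is inferred from the results table rather than stated, and the model, datasets, and metric function are omitted.

```python
from transformers import TrainingArguments

# Sketch of the hyperparameters listed above (Transformers 4.47 argument names).
training_args = TrainingArguments(
    output_dir="Hubert-fine-tuned-persian3",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    optim="adamw_torch",          # AdamW with betas=(0.9, 0.999), eps=1e-8
    lr_scheduler_type="cosine",
    num_train_epochs=10,
    eval_strategy="epoch",        # validation metrics are reported once per epoch
)

# This object would then be passed to a Trainer together with the model,
# the train/eval datasets, and a compute_metrics function (not shown here).
print(training_args.to_dict())
```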
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | Precision Neutral | Recall Neutral | F1 Neutral | Precision Anger | Recall Anger | F1 Anger |
|:-------------|:------|:-----|:----------------|:---------|:----------|:-------|:-------|:------------------|:---------------|:-----------|:----------------|:-------------|:---------|
| 0.614 | 1.0 | 337 | 0.6135 | 0.6884 | 0.5838 | 0.7941 | 0.6729 | 0.8158 | 0.6169 | 0.7025 | 0.5838 | 0.7941 | 0.6729 |
| 0.5971 | 2.0 | 674 | 0.5672 | 0.7537 | 0.7978 | 0.5221 | 0.6311 | 0.7379 | 0.9104 | 0.8151 | 0.7978 | 0.5221 | 0.6311 |
| 0.5976 | 3.0 | 1011 | 0.5508 | 0.7685 | 0.7959 | 0.5735 | 0.6667 | 0.7573 | 0.9005 | 0.8227 | 0.7959 | 0.5735 | 0.6667 |
| 0.507 | 4.0 | 1348 | 0.6549 | 0.7774 | 0.7905 | 0.6103 | 0.6888 | 0.7716 | 0.8905 | 0.8268 | 0.7905 | 0.6103 | 0.6888 |
| 0.4879 | 5.0 | 1685 | 0.6317 | 0.7834 | 0.7299 | 0.7353 | 0.7326 | 0.82 | 0.8159 | 0.8180 | 0.7299 | 0.7353 | 0.7326 |
| 0.5124 | 6.0 | 2022 | 0.6706 | 0.7685 | 0.7959 | 0.5735 | 0.6667 | 0.7573 | 0.9005 | 0.8227 | 0.7959 | 0.5735 | 0.6667 |
| 0.5017 | 7.0 | 2359 | 0.6532 | 0.7596 | 0.6846 | 0.75 | 0.7158 | 0.8191 | 0.7662 | 0.7918 | 0.6846 | 0.75 | 0.7158 |
| 0.4941 | 8.0 | 2696 | 0.6682 | 0.7923 | 0.7619 | 0.7059 | 0.7328 | 0.8104 | 0.8507 | 0.8301 | 0.7619 | 0.7059 | 0.7328 |
| 0.4145 | 9.0 | 3033 | 0.6610 | 0.7893 | 0.7519 | 0.7132 | 0.7321 | 0.8125 | 0.8408 | 0.8264 | 0.7519 | 0.7132 | 0.7321 |
| 0.4579 | 10.0 | 3370 | 0.6692 | 0.7864 | 0.7462 | 0.7132 | 0.7293 | 0.8116 | 0.8358 | 0.8235 | 0.7462 | 0.7132 | 0.7293 |
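In every row the overall Precision/Recall/F1 columns equal the Anger-class values, which suggests the aggregate metrics were computed with Anger as the positive class in a binary Neutral-vs-Anger setup. A `compute_metrics` function producing this layout could look like the sketch below; the label mapping (0 = Neutral, 1 = Anger) is an assumption, not documented by this card.

```python
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

# Assumed label mapping: 0 = Neutral, 1 = Anger.
def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)

    accuracy = accuracy_score(labels, preds)
    # Overall columns appear to treat Anger (label 1) as the positive class.
    p, r, f1, _ = precision_recall_fscore_support(
        labels, preds, average="binary", pos_label=1, zero_division=0
    )
    # Per-class columns.
    p_c, r_c, f1_c, _ = precision_recall_fscore_support(
        labels, preds, labels=[0, 1], average=None, zero_division=0
    )
    return {
        "accuracy": accuracy,
        "precision": p, "recall": r, "f1": f1,
        "precision_neutral": p_c[0], "recall_neutral": r_c[0], "f1_neutral": f1_c[0],
        "precision_anger": p_c[1], "recall_anger": r_c[1], "f1_anger": f1_c[1],
    }
```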
### Framework versions
- Transformers 4.47.1
- Pytorch 2.5.1+cu124
- Datasets 2.18.0
- Tokenizers 0.21.0
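For reproducibility, the pinned versions above can be checked against the local environment with a quick sketch like this:

```python
import datasets
import tokenizers
import torch
import transformers

# Versions pinned by this card; printed for a quick environment check.
print("Transformers:", transformers.__version__)  # expected 4.47.1
print("PyTorch:", torch.__version__)              # expected 2.5.1+cu124
print("Datasets:", datasets.__version__)          # expected 2.18.0
print("Tokenizers:", tokenizers.__version__)      # expected 0.21.0
```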