Whisper base ar - Mohamed Ahmed-Mahmoud Nasser

This model is a fine-tuned version of openai/whisper-base on a private dataset. It achieves the following results on the evaluation set:

  • Loss: 0.1082
  • Wer: 22.2325 (word error rate, reported as a percentage; see the sketch after this list)
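
As a minimal sketch of how such a Wer score is typically computed with the Hugging Face evaluate library (the transcripts below are illustrative placeholders, since the evaluation set is private):

```python
# Minimal sketch: computing WER with the Hugging Face `evaluate` library.
# The predictions/references are hypothetical placeholders, not drawn from
# the actual (private) evaluation set.
import evaluate

wer_metric = evaluate.load("wer")

predictions = ["مرحبا بكم"]         # hypothetical model transcriptions
references = ["مرحبا بكم جميعا"]    # hypothetical ground-truth transcripts

# evaluate's "wer" returns a fraction of word errors; multiplying by 100
# matches the percentage-style scores reported in this card.
wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.4f}")
```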

Model description

More information needed

Intended uses & limitations

More information needed
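
The card does not document intended uses yet. As a minimal sketch, the checkpoint can presumably be used for Arabic speech recognition through the standard transformers ASR pipeline; the repo id Mohamed2210/whisper-base-ar is taken from this model page, and the audio file path is a placeholder:

```python
# Minimal sketch: transcribing an audio file with this fine-tuned checkpoint
# via the standard transformers ASR pipeline. "audio.wav" is a placeholder.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="Mohamed2210/whisper-base-ar",
)

result = asr("audio.wav")  # accepts a file path, URL, or numpy array
print(result["text"])
```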

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

  • learning_rate: 1e-05
  • train_batch_size: 16
  • eval_batch_size: 8
  • seed: 42
  • optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 500
  • training_steps: 4000
  • mixed_precision_training: Native AMP
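
As a sketch, these hyperparameters map roughly onto the following Seq2SeqTrainingArguments configuration. Only the listed values come from the card; output_dir and the 500-step eval/save cadence (inferred from the results table below) are assumptions:

```python
# Sketch of Seq2SeqTrainingArguments matching the hyperparameters above.
# output_dir and the eval/save cadence are assumptions, not stated in the card.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-base-ar",   # assumed
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=4000,
    fp16=True,                        # "Native AMP" mixed precision
    eval_strategy="steps",            # assumed: matches the 500-step table
    eval_steps=500,
    save_steps=500,
)
```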

Training results

Training Loss | Epoch  | Step | Validation Loss | Wer
0.2541        | 0.5319 |  500 | 0.2250          | 44.5224
0.1526        | 1.0638 | 1000 | 0.1526          | 31.0242
0.1344        | 1.5957 | 1500 | 0.1321          | 27.8826
0.1217        | 2.1277 | 2000 | 0.1197          | 24.7066
0.1044        | 2.6596 | 2500 | 0.1153          | 23.7975
0.0886        | 3.1915 | 3000 | 0.1140          | 23.9471
0.1053        | 3.7234 | 3500 | 0.1090          | 22.3245
0.0843        | 4.2553 | 4000 | 0.1082          | 22.2325

Framework versions

  • Transformers 4.49.0
  • PyTorch 2.5.1+cu121
  • Datasets 3.3.2
  • Tokenizers 0.21.0