---
library_name: transformers
language:
  - pa
license: apache-2.0
base_model: openai/whisper-large-v3
tags:
  - generated_from_trainer
metrics:
  - wer
model-index:
  - name: Punjabi Whisper large-v3 - Swayangjit
    results: []
---

# Punjabi Whisper large-v3 - Swayangjit

This model is a fine-tuned version of [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) on an unspecified dataset. It achieves the following results on the evaluation set:

- Loss: 0.3908
- Wer: 71.4286

## Model description

More information needed

## Intended uses & limitations

More information needed
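While full usage documentation is still pending, a minimal inference sketch with the transformers ASR pipeline is shown below. The repository id `swayangjit/whisper-large-v3-pa` is inferred from this card's title, and the audio file name is a placeholder; both are assumptions rather than details from the original training run.

```python
from transformers import pipeline

# Assumed repository id; adjust if the checkpoint lives elsewhere.
asr = pipeline(
    "automatic-speech-recognition",
    model="swayangjit/whisper-large-v3-pa",
)

# "sample.wav" is a placeholder for any Punjabi audio file;
# "pa" is the ISO 639-1 code Whisper uses for Punjabi.
result = asr("sample.wav", generate_kwargs={"language": "pa"})
print(result["text"])
```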

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a sketch of the corresponding `Seq2SeqTrainingArguments` follows the list):

- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- training_steps: 50
- mixed_precision_training: Native AMP
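
As a reproduction aid, here is a minimal sketch of how these settings map onto `Seq2SeqTrainingArguments`; the `output_dir` value is a placeholder, not a detail from the original training script.

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-large-v3-pa",  # placeholder output path
    learning_rate=1e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",          # AdamW (torch) optimizer
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=50,
    max_steps=50,
    fp16=True,                    # native AMP mixed precision
)
```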

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Wer      |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 1.4502        | 0.0133 | 10   | 0.6460          | 91.9414  |
| 0.7124        | 0.0266 | 20   | 0.4013          | 72.8205  |
| 0.6185        | 0.0399 | 30   | 0.4096          | 79.7436  |
| 0.5898        | 0.0533 | 40   | 0.4439          | 124.3590 |
| 0.5579        | 0.0666 | 50   | 0.3908          | 71.4286  |
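
The Wer column above is a word error rate expressed as a percentage. As a sketch of how such a score is typically computed, the snippet below uses the `evaluate` library; the transcripts are illustrative only, not drawn from the actual evaluation set.

```python
import evaluate

wer_metric = evaluate.load("wer")

# Illustrative Punjabi strings; the real score comes from the held-out set.
predictions = ["ਸਤ ਸ੍ਰੀ ਅਕਾਲ ਜੀ"]
references = ["ਸਤਿ ਸ੍ਰੀ ਅਕਾਲ ਜੀ"]

# compute() returns a fraction; the card reports it as a percentage.
wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.4f}")
```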

### Framework versions

- Transformers 4.47.1
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0