whisper-finetuned-nan-tw-v3-torbo-20-epoch

This model is a fine-tuned version of openai/whisper-large-v3-turbo on the common_voice_17_0 dataset. It achieves the following results on the evaluation set:

  • Loss: 0.2611
  • WER: 100.0
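
The card does not include usage instructions, so here is a minimal transcription sketch (not from the original card) using the transformers pipeline API. It assumes the checkpoint is hosted at Rangers/whisper-finetuned-nan-tw-v3-torbo-20-epoch and that sample.wav is a local 16 kHz mono recording:

```python
# Minimal inference sketch (an assumption, not part of the original card).
# Assumes the checkpoint is available on the Hub under
# Rangers/whisper-finetuned-nan-tw-v3-torbo-20-epoch and that sample.wav
# is a local audio file to transcribe.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="Rangers/whisper-finetuned-nan-tw-v3-torbo-20-epoch",
)
result = asr("sample.wav")
print(result["text"])
```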

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

  • learning_rate: 5e-05
  • train_batch_size: 2
  • eval_batch_size: 2
  • seed: 42
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 8
  • optimizer: AdamW (torch implementation) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: linear
  • num_epochs: 20
  • mixed_precision_training: Native AMP
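
These settings map onto transformers Seq2SeqTrainingArguments roughly as follows. This is a hedged sketch: output_dir is a hypothetical name, and fp16=True is an assumption standing in for "Native AMP"; everything else is taken directly from the list above.

```python
# Hypothetical Seq2SeqTrainingArguments mirroring the hyperparameters above.
# output_dir and fp16 are assumptions; the remaining values come from the card.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-finetuned-nan-tw-v3-torbo-20-epoch",  # assumed name
    learning_rate=5e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=42,
    gradient_accumulation_steps=4,  # effective train batch size: 2 * 4 = 8
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=20,
    fp16=True,  # assumption: "Native AMP" mixed precision
)
```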

Training results

| Training Loss | Epoch   | Step | Validation Loss | WER     |
|:-------------:|:-------:|:----:|:---------------:|:-------:|
| 0.2238        | 1.0616  | 500  | 0.2174          | 100.0   |
| 0.1871        | 2.1233  | 1000 | 0.2099          | 100.0   |
| 0.1606        | 3.1849  | 1500 | 0.2050          | 100.0   |
| 0.1315        | 4.2465  | 2000 | 0.2134          | 100.0   |
| 0.1087        | 5.3082  | 2500 | 0.2247          | 100.0   |
| 0.0969        | 6.3698  | 3000 | 0.2269          | 100.0   |
| 0.0851        | 7.4315  | 3500 | 0.2381          | 100.0   |
| 0.0793        | 8.4931  | 4000 | 0.2412          | 100.0   |
| 0.0737        | 9.5547  | 4500 | 0.2406          | 100.0   |
| 0.07          | 10.6164 | 5000 | 0.2452          | 99.9569 |
| 0.0669        | 11.6780 | 5500 | 0.2435          | 100.0   |
| 0.0623        | 12.7396 | 6000 | 0.2447          | 100.0   |
| 0.0605        | 13.8013 | 6500 | 0.2490          | 100.0   |
| 0.0563        | 14.8629 | 7000 | 0.2524          | 99.9569 |
| 0.0525        | 15.9245 | 7500 | 0.2536          | 100.0   |
| 0.0485        | 16.9862 | 8000 | 0.2554          | 100.0   |
| 0.0435        | 18.0468 | 8500 | 0.2591          | 100.0   |
| 0.0386        | 19.1084 | 9000 | 0.2611          | 100.0   |
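
The WER column is the word error rate reported as a percentage. A hedged sketch of how it is typically computed with the evaluate library follows; the exact text normalization used for this run is not stated in the card, and the example strings are hypothetical placeholders, not data from this run.

```python
# Hedged sketch of a standard WER computation using the `evaluate` library.
# The prediction/reference pair below is a hypothetical placeholder.
import evaluate

wer_metric = evaluate.load("wer")
wer = wer_metric.compute(
    predictions=["guá khì ha̍k-hāu"],    # hypothetical model output
    references=["guá khì ha̍k-hāu ah"],  # hypothetical ground truth
)
print(f"WER: {100 * wer:.1f}")  # scaled to a percentage, as in the table
```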

Framework versions

  • Transformers 4.48.3
  • Pytorch 2.5.1+cu124
  • Datasets 3.3.1
  • Tokenizers 0.21.0