---
library_name: transformers
license: cc-by-nc-4.0
base_model: facebook/mms-1b-all
tags:
  - automatic-speech-recognition
  - bigcgen
  - mms
  - generated_from_trainer
metrics:
  - wer
model-index:
  - name: mms-1b-bigcgen-male-15hrs-model
    results: []
---

# mms-1b-bigcgen-male-15hrs-model

This model is a fine-tuned version of [facebook/mms-1b-all](https://huggingface.co/facebook/mms-1b-all) on the BIGCGEN - BEM dataset. It achieves the following results on the evaluation set:

- Loss: 0.4756
- Wer: 0.4703
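The card does not include usage code, but the tags above indicate a standard Transformers ASR checkpoint, so the `pipeline` API should apply. A minimal sketch, assuming the checkpoint is published under a repository ID like `csikasote/mms-1b-bigcgen-male-15hrs-model` (inferred from the model name, not confirmed by the card):

```python
from transformers import pipeline

# Assumed repo ID; replace with the actual checkpoint path if it differs.
MODEL_ID = "csikasote/mms-1b-bigcgen-male-15hrs-model"

# MMS checkpoints are Wav2Vec2-based CTC models, so the ASR pipeline
# handles feature extraction and decoding end to end.
asr = pipeline("automatic-speech-recognition", model=MODEL_ID)

# The model expects 16 kHz audio; the pipeline resamples file inputs.
result = asr("sample_utterance.wav")
print(result["text"])
```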

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 30.0
- mixed_precision_training: Native AMP
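The training script itself is not part of the card, so the following is only a sketch of how the listed values might map onto `transformers.TrainingArguments`; the output directory and everything not in the list above are assumptions.

```python
from transformers import TrainingArguments

# Sketch of the reported hyperparameters; output_dir is an assumed name.
training_args = TrainingArguments(
    output_dir="mms-1b-bigcgen-male-15hrs-model",  # assumption
    learning_rate=3e-4,                  # learning_rate: 0.0003
    per_device_train_batch_size=4,       # train_batch_size: 4
    per_device_eval_batch_size=4,        # eval_batch_size: 4
    seed=42,
    gradient_accumulation_steps=2,       # total batch size 4 * 2 = 8
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=100,
    num_train_epochs=30.0,
    fp16=True,                           # "Native AMP" mixed precision
)
```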

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Wer    |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 14.7965       | 0.1033 | 100  | 3.5227          | 1.0007 |
| 6.2688        | 0.2066 | 200  | 2.7700          | 1.0325 |
| 3.6179        | 0.3099 | 300  | 0.7532          | 0.6069 |
| 1.7779        | 0.4132 | 400  | 0.6508          | 0.5821 |
| 1.5595        | 0.5165 | 500  | 0.6249          | 0.5578 |
| 1.5884        | 0.6198 | 600  | 0.6142          | 0.5283 |
| 1.5532        | 0.7231 | 700  | 0.5929          | 0.5172 |
| 1.4021        | 0.8264 | 800  | 0.5996          | 0.5182 |
| 1.507         | 0.9298 | 900  | 0.5824          | 0.5121 |
| 1.5374        | 1.0331 | 1000 | 0.5615          | 0.5061 |
| 1.4139        | 1.1364 | 1100 | 0.5456          | 0.5066 |
| 1.4472        | 1.2397 | 1200 | 0.5177          | 0.4874 |
| 1.2958        | 1.3430 | 1300 | 0.5022          | 0.4871 |
| 1.3292        | 1.4463 | 1400 | 0.4984          | 0.4871 |
| 1.2062        | 1.5496 | 1500 | 0.4886          | 0.4799 |
| 1.1623        | 1.6529 | 1600 | 0.4811          | 0.4823 |
| 1.2759        | 1.7562 | 1700 | 0.4735          | 0.4677 |
| 1.1852        | 1.8595 | 1800 | 0.4986          | 0.4669 |
| 1.0712        | 1.9628 | 1900 | 0.5045          | 0.4845 |
| 1.2023        | 2.0661 | 2000 | 0.4755          | 0.4782 |
| 1.2275        | 2.1694 | 2100 | 0.4756          | 0.4705 |
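Wer above is word error rate on the validation split. For reference, a minimal sketch of computing WER with the Hugging Face `evaluate` library; the transcript strings below are placeholders, not outputs from this run:

```python
import evaluate

# Load the word error rate metric.
wer_metric = evaluate.load("wer")

# Placeholder transcripts; real scoring pairs model hypotheses
# with gold reference transcriptions.
predictions = ["hello world"]
references = ["hello there world"]

# WER = (substitutions + deletions + insertions) / reference word count,
# so one missing word out of three gives roughly 0.333.
print(wer_metric.compute(predictions=predictions, references=references))
```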

### Framework versions

- Transformers 4.47.1
- Pytorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0