speecht5_finetuned_Aumkesh_English_tts

This model is a fine-tuned version of microsoft/speecht5_tts on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 0.5115

Model description

More information needed

Intended uses & limitations

More information needed
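
Since the card does not yet document usage, the snippet below is a minimal inference sketch, assuming the standard SpeechT5 text-to-speech flow in transformers. It loads this checkpoint together with the microsoft/speecht5_hifigan vocoder; the zero speaker embedding is a placeholder assumption, not part of this card (substitute an x-vector matching the training voice).

```python
import torch
import soundfile as sf
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

repo = "Aumkeshchy2003/speecht5_finetuned_Aumkesh_English_tts"
processor = SpeechT5Processor.from_pretrained(repo)
model = SpeechT5ForTextToSpeech.from_pretrained(repo)
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="Hello from the fine-tuned model.", return_tensors="pt")

# SpeechT5 conditions generation on a 512-dim speaker embedding (x-vector).
# A zero vector is only a placeholder; use an embedding of the target voice.
speaker_embeddings = torch.zeros((1, 512))

# Generate a mel spectrogram and vocode it to a 16 kHz waveform.
speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
sf.write("output.wav", speech.numpy(), samplerate=16000)
```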

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

  • learning_rate: 0.0001
  • train_batch_size: 4
  • eval_batch_size: 2
  • seed: 42
  • gradient_accumulation_steps: 8
  • total_train_batch_size: 32
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 100
  • training_steps: 600
  • mixed_precision_training: Native AMP
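
As a rough illustration, these settings map onto Hugging Face Seq2SeqTrainingArguments as sketched below; the output_dir name is hypothetical, and the dataset, data collator, and Trainer wiring are omitted.

```python
from transformers import Seq2SeqTrainingArguments

# Sketch of the configuration listed above; the Adam betas/epsilon match
# the Trainer defaults (adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-8).
training_args = Seq2SeqTrainingArguments(
    output_dir="speecht5_finetuned_Aumkesh_English_tts",  # hypothetical path
    learning_rate=1e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=2,
    gradient_accumulation_steps=8,  # effective train batch size: 4 * 8 = 32
    lr_scheduler_type="linear",
    warmup_steps=100,
    max_steps=600,
    seed=42,
    fp16=True,  # native AMP mixed-precision training
)
```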

Training results

| Training Loss | Epoch   | Step | Validation Loss |
|:-------------:|:-------:|:----:|:---------------:|
| 0.5638        | 3.6866  | 100  | 0.5287          |
| 0.5039        | 7.3733  | 200  | 0.5102          |
| 0.4708        | 11.0599 | 300  | 0.5067          |
| 0.4568        | 14.7465 | 400  | 0.5067          |
| 0.4428        | 18.4332 | 500  | 0.5150          |
| 0.4407        | 22.1198 | 600  | 0.5115          |

Framework versions

  • Transformers 4.44.2
  • PyTorch 2.5.0+cu121
  • Datasets 3.0.2
  • Tokenizers 0.19.1