---
language:
- lt
license: mit
tags:
- generated_from_trainer
- text-to-speech
datasets:
- voxpopuli
base_model: microsoft/speecht5_tts
model-index:
- name: speecht5_finetuned_voxpopuli_lt
results: []
---
# speecht5_finetuned_voxpopuli_lt
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the Lithuanian subset of the VoxPopuli dataset.
It achieves the following results on the evaluation set:
- Validation loss: 0.5676
- Training loss: 0.3845
## Model description
A SpeechT5 text-to-speech model for Lithuanian, fine-tuned from [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts).
## Intended uses & limitations
The model is intended for Lithuanian text-to-speech (TTS) synthesis; a minimal inference sketch is given below. Because it was trained on few speakers, output quality may vary with the speaker embedding used.
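A minimal inference sketch using the `transformers` SpeechT5 classes. The checkpoint id `speecht5_finetuned_voxpopuli_lt` and the zero speaker embedding are placeholders: substitute your actual repository path and a real 512-dimensional x-vector (for example from `Matthijs/cmu-arctic-xvectors`).

```python
import torch
import soundfile as sf
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

# Tokenizer/feature extractor come from the base model; weights from the fine-tuned checkpoint.
processor = SpeechT5Processor.from_pretrained("microsoft/speecht5_tts")
model = SpeechT5ForTextToSpeech.from_pretrained("speecht5_finetuned_voxpopuli_lt")  # placeholder id
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="Labas vakaras", return_tensors="pt")

# SpeechT5 conditions generation on a 512-dim speaker x-vector; zeros are a stand-in only.
speaker_embeddings = torch.zeros((1, 512))

speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
sf.write("speech.wav", speech.numpy(), samplerate=16000)
```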
## Training and evaluation data
The model was fine-tuned on the Lithuanian subset of the VoxPopuli dataset (a loading sketch is given below). This subset contains few speakers and few examples, and training ends at a validation loss of about 0.57 against a training loss of 0.38. That gap suggests overfitting, so the model may not generalize well to data it has not seen. To reduce overfitting, you can try regularization techniques such as dropout, batch normalization, or a smaller model.
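For reference, a sketch of loading the Lithuanian VoxPopuli split with `datasets` (the hub id `facebook/voxpopuli` and the `lt` config are assumptions about the exact data used here):

```python
from datasets import Audio, load_dataset

# Lithuanian split of VoxPopuli; SpeechT5 expects 16 kHz audio.
dataset = load_dataset("facebook/voxpopuli", "lt", split="train")
dataset = dataset.cast_column("audio", Audio(sampling_rate=16000))
```

If you re-fine-tune with stronger regularization, dropout can be raised through config overrides at load time; the values below are illustrative, not the settings used for this checkpoint:

```python
from transformers import SpeechT5ForTextToSpeech

# Override the config's dropout rates when loading (illustrative values).
model = SpeechT5ForTextToSpeech.from_pretrained(
    "microsoft/speecht5_tts",
    hidden_dropout=0.2,
    attention_dropout=0.2,
)
```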
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
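Expressed as `Seq2SeqTrainingArguments`, these settings look roughly as follows; `output_dir` is illustrative:

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="speecht5_finetuned_voxpopuli_lt",  # illustrative
    per_device_train_batch_size=4,
    per_device_eval_batch_size=2,
    gradient_accumulation_steps=8,  # effective train batch size: 4 * 8 = 32
    learning_rate=1e-5,
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=4000,
    seed=42,
)
```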
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-------:|:----:|:---------------:|
| 0.443 | 380.95 | 1000 | 0.5600 |
| 0.4045 | 761.9 | 2000 | 0.5717 |
| 0.3877 | 1142.86 | 3000 | 0.5647 |
| 0.3845 | 1523.81 | 4000 | 0.5676 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3