---
license: apache-2.0
tags:
- whisper-event
- generated_from_trainer
datasets:
- google/fleurs
metrics:
- wer
model-index:
- name: Whisper Small Amharic FLEURS
  results:
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: google/fleurs am_et
      type: google/fleurs
      config: am_et
      split: test+validation
      args: am_et
    metrics:
    - name: Wer
      type: wer
      value: 20.9327
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Amharic FLEURS
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the google/fleurs am_et dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5390 (validation loss)
- Wer: 20.9327
## Model description
More information needed
## Intended uses & limitations
More information needed
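A minimal inference sketch with the Hugging Face Transformers ASR pipeline; the repository id `drmeeseeks/whisper-small-amet` and the audio filename are assumptions, so adjust both for your setup:

```python
from transformers import pipeline

# Load the fine-tuned checkpoint with the ASR pipeline.
# Repository id is assumed from this model's Hub path; adjust if it differs.
asr = pipeline(
    "automatic-speech-recognition",
    model="drmeeseeks/whisper-small-amet",
)

# Transcribe a local Amharic audio clip (hypothetical filename).
print(asr("amharic_sample.wav")["text"])
```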
## Training and evaluation data
This model was trained and evaluated on the `test+validation` data of the `am_et` configuration from [google/fleurs - HuggingFace Datasets](https://huggingface.co/datasets/google/fleurs).
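A minimal sketch of loading this evaluation data with Hugging Face Datasets (the `test+validation` split string concatenates the two splits):

```python
from datasets import load_dataset

# Amharic (am_et) configuration of FLEURS; "test+validation"
# concatenates the two splits used for evaluation here.
fleurs_am = load_dataset("google/fleurs", "am_et", split="test+validation")

print(fleurs_am)                      # features and number of rows
print(fleurs_am[0]["transcription"])  # reference transcript of the first clip
```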
## Training procedure
Training was done on Lambda Cloud A100 (40 GB) GPUs, provided for the Hugging Face community [Whisper Fine Tuning Event - Dec 2022](https://github.com/huggingface/community-events/tree/main/whisper-fine-tuning-event#fine-tune-whisper). Training used [HuggingFace Community Events - Whisper - run_speech_recognition_seq2seq_streaming.py](https://github.com/huggingface/community-events/blob/main/whisper-fine-tuning-event/run_speech_recognition_seq2seq_streaming.py), together with the included [whisper_python_am_et.ipynb](https://huggingface.co/drmeeseeks/whisper-small-am_et/blob/main/am_et_fine_tune_whisper_streaming_colab_RUNNING-evalerrir.ipynb) notebook to set up the Lambda Cloud GPU/Colab environment. On Colab, reduce the train batch size to the amount recommended in the [Whisper Fine Tuning Event - Dec 2022](https://github.com/huggingface/community-events/tree/main/whisper-fine-tuning-event#fine-tune-whisper) guide, as the T4 GPUs have only 16 GB of memory. The notebook sets up the environment, logs into your Hugging Face account, and generates a bash script, `run.sh`, which is then run from the terminal (`bash run.sh`) to start training, as described on the Whisper community events GitHub page.
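The login step is the standard Hub authentication call; a minimal sketch (the subsequent `run.sh` is generated by the notebook itself):

```python
from huggingface_hub import notebook_login

# Authenticate so the trained checkpoint can be pushed to the Hub.
# On a headless VM, running `huggingface-cli login` in the terminal is equivalent.
notebook_login()
```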
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP
- do_eval: False
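For reference, these hyperparameters map roughly onto the following Hugging Face Transformers `Seq2SeqTrainingArguments`; this is a hedged reconstruction rather than the exact arguments passed via `run.sh` (the output directory is an assumption, and the Adam and scheduler settings are the library defaults):

```python
from transformers import Seq2SeqTrainingArguments

# Rough reconstruction of the configuration listed above; output_dir is assumed.
training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-small-am_et",
    per_device_train_batch_size=64,
    per_device_eval_batch_size=32,
    gradient_accumulation_steps=2,
    learning_rate=1e-5,
    warmup_steps=500,
    max_steps=5000,
    seed=42,
    lr_scheduler_type="linear",
    fp16=True,            # "Native AMP" mixed-precision training
    do_eval=False,        # evaluation is run separately (see Recommendations)
    predict_with_generate=True,
)
```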
### Training results
| Training Loss | Epoch | Step |
|:-------------:|:-----:|:-----:|
| 3.0968 | 3.57 | - |
| 1.178 | 28.57 | - |
| 0.03 | 53.57 | - |
| 0.0002 | 217.86 | - |
| 0.0001 | 378.57 | ~ 2000 |
| 0.0000 | 382.14 | - |
| 0.0000 | 467.86 | 3300 |
### Recommendations
Limit training duration for smaller datasets to roughly 2000-3000 steps to avoid overfitting; 5000 steps with [HuggingFace - Whisper Small](https://huggingface.co/openai/whisper-small) take about 5 hours on A100 GPUs. Training encountered `RuntimeError: The size of tensor a (504) must match the size of tensor b (448) at non-singleton dimension 1`, which is related to [Trainer RuntimeError](https://discuss.huggingface.co/t/trainer-runtimeerror-the-size-of-tensor-a-462-must-match-the-size-of-tensor-b-448-at-non-singleton-dimension-1/26010), as some language datasets contain inputs with non-standard lengths. That link did not resolve the issue, which is also reported elsewhere [Training languagemodel – RuntimeError the expanded size of the tensor (100) must match the existing size (64) at non singleton dimension 1](https://hungsblog.de/en/technology/troubleshooting/training-languagemodel-runtimeerror-the-expanded-size-of-the-tensor-100-must-match-the-existing-size-64-at-non-singleton-dimension-1/). To circumvent the issue, `run.sh` only trains and saves the model; afterwards, run `python run_eval_whisper_streaming.py --model_id="openai/whisper-small" --dataset="google/fleurs" --config="am_et" --device=0 --language="am"` to compute the WER score. Erroring out during evaluation would prevent the trained model from being uploaded to the Hugging Face Hub.
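For illustration, the separate evaluation step amounts roughly to the sketch below (a simplification of what `run_eval_whisper_streaming.py` does; the model id is an assumption, and the real script additionally normalizes text and batches audio):

```python
import evaluate
from datasets import load_dataset
from transformers import pipeline

# Transcribe the FLEURS am_et test split and score it with WER.
asr = pipeline(
    "automatic-speech-recognition",
    model="drmeeseeks/whisper-small-amet",  # assumed; passed as --model_id to the script
    device=0,
)
wer_metric = evaluate.load("wer")

dataset = load_dataset("google/fleurs", "am_et", split="test", streaming=True)

predictions, references = [], []
for sample in dataset:
    audio = sample["audio"]
    out = asr({"raw": audio["array"], "sampling_rate": audio["sampling_rate"]})
    predictions.append(out["text"])
    references.append(sample["transcription"])

print("WER:", 100 * wer_metric.compute(predictions=predictions, references=references))
```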
### Environmental Impact
Carbon emissions were estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). In total, roughly 100 hours of compute were used.
- __Hardware Type__: AMD EPYC 7J13 64-Core Processor (30 core VM) 197GB RAM, with NVIDIA A100-SXM 40GB
- __Hours Used__: 100 hrs
- __Cloud Provider__: Lambda Cloud GPU
- __Compute Region__: Virginia/India
- __Carbon Emitted__: 14.8 kg
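As a back-of-envelope check of how such an estimate is formed (energy = power draw x hours, emissions = energy x regional carbon intensity); the power draw and grid-intensity constants below are assumptions for illustration, not figures reported by the calculator:

```python
# Back-of-envelope carbon estimate; both constants are assumptions.
gpu_power_kw = 0.4          # assumed ~400 W draw for an A100-SXM 40GB
hours = 100                 # hours used, from the figure above
carbon_intensity = 0.37     # assumed kg CO2eq per kWh for the compute region

energy_kwh = gpu_power_kw * hours                 # 40 kWh
emissions_kg = energy_kwh * carbon_intensity      # ~14.8 kg CO2eq
print(f"{energy_kwh:.0f} kWh -> {emissions_kg:.1f} kg CO2eq")
```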
### Citation
- [Whisper - GitHub](https://github.com/openai/whisper)
- [Whisper - OpenAI Blog](https://openai.com/blog/whisper/)
- [Model Card - HuggingFace Hub - GitHub](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md)
```bibtex
@misc{https://doi.org/10.48550/arxiv.2212.04356,
doi = {10.48550/ARXIV.2212.04356},
url = {https://arxiv.org/abs/2212.04356},
author = {Radford, Alec and Kim, Jong Wook and Xu, Tao and Brockman, Greg and McLeavey, Christine and Sutskever, Ilya},
keywords = {Audio and Speech Processing (eess.AS), Computation and Language (cs.CL), Machine Learning (cs.LG), Sound (cs.SD), FOS: Electrical engineering, electronic engineering, information engineering, FOS: Electrical engineering, electronic engineering, information engineering, FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Robust Speech Recognition via Large-Scale Weak Supervision},
publisher = {arXiv},
year = {2022},
copyright = {arXiv.org perpetual, non-exclusive license}
}
```
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2