drmeeseeks committed
Commit 5660e05 · 1 Parent(s): 0997be6

update model card README.md

Files changed (1):
  README.md +8 -86
README.md CHANGED
@@ -1,40 +1,20 @@
  ---
  license: apache-2.0
  tags:
- - whisper-event
  - generated_from_trainer
  datasets:
- - google/fleurs
- metrics:
- - wer
+ - fleurs
  model-index:
- - name: Whisper Small Amharic FLEURS
-   results:
-   - task:
-       name: Automatic Speech Recognition
-       type: automatic-speech-recognition
-     dataset:
-       name: google/fleurs am_et
-       type: google/fleurs
-       config: am_et
-       split: test+validation
-       args: am_et
-     metrics:
-     - name: Wer
-       type: wer
-       value:
+ - name: whisper-small-amet
+   results: []
  ---

  <!-- This model card has been generated automatically according to the information the Trainer had access to. You
  should probably proofread and complete it, then remove this comment. -->

- # Whisper Small Amharic FLEURS
-
- This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the google/fleurs am_et dataset.
- It achieves the following results on the evaluation set:
- - Loss: 0.5390 (validation loss)
- - Wer: 20.9327
+ # whisper-small-amet
+
+ This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the fleurs dataset.

  ## Model description

@@ -46,12 +26,10 @@ More information needed

  ## Training and evaluation data

- This model was trained and evaluated on the "test+validation" data of [google/fleurs - Hugging Face Datasets](https://huggingface.co/datasets/google/fleurs).
+ More information needed

  ## Training procedure

- Training was done on Lambda Cloud A100/40GB GPUs provided for the [Whisper Fine Tuning Event - Dec 2022](https://github.com/huggingface/community-events/tree/main/whisper-fine-tuning-event#fine-tune-whisper). It used [run_speech_recognition_seq2seq_streaming.py](https://github.com/huggingface/community-events/blob/main/whisper-fine-tuning-event/run_speech_recognition_seq2seq_streaming.py) together with the included [whisper_python_am_et.ipynb](https://huggingface.co/drmeeseeks/whisper-small-am_et/blob/main/am_et_fine_tune_whisper_streaming_colab_RUNNING-evalerrir.ipynb) to set up the Lambda Cloud GPU/Colab environment. For Colab, reduce the train batch size to the amount recommended in the [Whisper Fine Tuning Event - Dec 2022](https://github.com/huggingface/community-events/tree/main/whisper-fine-tuning-event#fine-tune-whisper) instructions, as the T4 GPUs have only 16GB of memory. The notebook sets up the environment, logs into your Hugging Face account, and generates a bash script, `run.sh`, which is then run from the terminal (`bash run.sh`) to launch training, as described on the Whisper community events GitHub page.
-
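The fine-tuning script streams the FLEURS data rather than downloading it in full. As a rough illustration (a sketch, not code taken from `run.sh` or the event script), loading and preparing the `am_et` "test+validation" data in streaming mode might look like the following; the processor settings and the use of `interleave_datasets` are assumptions:

```python
# Hypothetical sketch: stream FLEURS am_et and prepare it for Whisper,
# approximating the data handling of the seq2seq streaming script.
from datasets import Audio, interleave_datasets, load_dataset
from transformers import WhisperProcessor

processor = WhisperProcessor.from_pretrained(
    "openai/whisper-small", language="am", task="transcribe"
)

def load_streaming_fleurs():
    # The card reports training/evaluating on "test+validation"; combine both streams.
    test = load_dataset("google/fleurs", "am_et", split="test", streaming=True)
    val = load_dataset("google/fleurs", "am_et", split="validation", streaming=True)
    ds = interleave_datasets([test, val])
    # Whisper's feature extractor expects 16 kHz audio.
    return ds.cast_column("audio", Audio(sampling_rate=16_000))

def prepare(batch):
    audio = batch["audio"]
    batch["input_features"] = processor.feature_extractor(
        audio["array"], sampling_rate=audio["sampling_rate"]
    ).input_features[0]
    batch["labels"] = processor.tokenizer(batch["transcription"]).input_ids
    return batch

train_stream = load_streaming_fleurs().map(prepare)
```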
  ### Training hyperparameters

  The following hyperparameters were used during training:
@@ -59,75 +37,19 @@ The following hyperparameters were used during training:
  - train_batch_size: 64
  - eval_batch_size: 32
  - seed: 42
- - gradient_accumulation_steps: 2
- - total_train_batch_size: 64
  - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  - lr_scheduler_type: linear
  - lr_scheduler_warmup_steps: 500
- - training_steps: 5000
+ - training_steps: 1000
  - mixed_precision_training: Native AMP
- - do_eval=False
-
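As a rough illustration, the listed values might map onto `transformers.Seq2SeqTrainingArguments` as sketched below; the `output_dir`, the per-device interpretation of the batch sizes, and anything not listed above are assumptions, not values taken from the actual `run.sh`:

```python
# Hypothetical sketch of the training arguments implied by the card.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-small-am_et",  # assumed name, not from the card
    per_device_train_batch_size=64,      # train_batch_size: 64
    per_device_eval_batch_size=32,       # eval_batch_size: 32
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,                    # lr_scheduler_warmup_steps: 500
    max_steps=1000,                      # training_steps (5000 in the earlier run)
    fp16=True,                           # mixed_precision_training: Native AMP
    do_eval=False,                       # evaluation handled separately, per the card
)
```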

  ### Training results

- | Training Loss | Epoch  | Step   |
- |:-------------:|:------:|:------:|
- | 3.0968        | 3.57   | -      |
- | 1.178         | 28.57  | -      |
- | 0.03          | 53.57  | -      |
- | 0.0002        | 217.86 | -      |
- | 0.0001        | 378.57 | ~ 2000 |
- | 0.0000        | 382.14 | -      |
- | 0.0000        | 467.86 | 3300   |
-
-
- ### Recommendations
-
- Limit training duration for smaller datasets to roughly 2000 to 3000 steps to avoid overfitting; 5000 steps with [HuggingFace - Whisper Small](https://huggingface.co/openai/whisper-small) takes about 5 hours on A100 GPUs. Training encountered `RuntimeError: The size of tensor a (504) must match the size of tensor b (448) at non-singleton dimension 1`, which is related to [Trainer RuntimeError](https://discuss.huggingface.co/t/trainer-runtimeerror-the-size-of-tensor-a-462-must-match-the-size-of-tensor-b-448-at-non-singleton-dimension-1/26010), as some language datasets have inputs with non-standard lengths. That thread did not resolve the issue, which is also reported elsewhere: [Training languagemodel – RuntimeError the expanded size of the tensor (100) must match the existing size (64) at non singleton dimension 1](https://hungsblog.de/en/technology/troubleshooting/training-languagemodel-runtimeerror-the-expanded-size-of-the-tensor-100-must-match-the-existing-size-64-at-non-singleton-dimension-1/). To work around it, `run.sh` only trains and saves the model; then run `python run_eval_whisper_streaming.py --model_id="openai/whisper-small" --dataset="google/fleurs" --config="am_et" --device=0 --language="am"` to compute the WER score. Erroring out during evaluation would otherwise prevent the trained model from being pushed to the Hugging Face Hub.
-
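The WER computation that `run_eval_whisper_streaming.py` performs can be approximated with the `datasets`, `evaluate`, and `transformers` libraries. This is a hypothetical sketch rather than the event script itself; the checkpoint name is the repo referenced above, and decoding options such as forcing the language are omitted:

```python
# Hypothetical sketch: compute WER on streamed FLEURS am_et test data.
from datasets import Audio, load_dataset
from evaluate import load
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="drmeeseeks/whisper-small-am_et",  # repo referenced in this card
    device=0,
)
wer_metric = load("wer")

ds = load_dataset("google/fleurs", "am_et", split="test", streaming=True)
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))

predictions, references = [], []
for sample in ds:
    out = asr(sample["audio"]["array"])
    predictions.append(out["text"])
    references.append(sample["transcription"])

print("WER:", 100 * wer_metric.compute(predictions=predictions, references=references))
```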
- ### Environmental Impact
-
- Carbon emissions were estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). In total roughly 100 hours were used.
-
- - __Hardware Type__: AMD EPYC 7J13 64-Core Processor (30 core VM), 197GB RAM, with NVIDIA A100-SXM 40GB
- - __Hours Used__: 100 hrs
- - __Cloud Provider__: Lambda Cloud GPU
- - __Compute Region__: Virginia/India
- - __Carbon Emitted__: 14.8 kg
-
-
- ### Citation
-
- [Whisper - GitHub](https://github.com/openai/whisper)
- [Whisper - OpenAI Blog](https://openai.com/blog/whisper/)
- [Model Card Template - huggingface_hub - GitHub](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md)
-
- ```bibtex
- @misc{https://doi.org/10.48550/arxiv.2212.04356,
-   doi = {10.48550/ARXIV.2212.04356},
-   url = {https://arxiv.org/abs/2212.04356},
-   author = {Radford, Alec and Kim, Jong Wook and Xu, Tao and Brockman, Greg and McLeavey, Christine and Sutskever, Ilya},
-   keywords = {Audio and Speech Processing (eess.AS), Computation and Language (cs.CL), Machine Learning (cs.LG), Sound (cs.SD), FOS: Electrical engineering, electronic engineering, information engineering, FOS: Computer and information sciences},
-   title = {Robust Speech Recognition via Large-Scale Weak Supervision},
-   publisher = {arXiv},
-   year = {2022},
-   copyright = {arXiv.org perpetual, non-exclusive license}
- }
- ```


  ### Framework versions

  - Transformers 4.26.0.dev0
- - Pytorch 1.13.0+cu117
+ - Pytorch 1.13.1+cu117
  - Datasets 2.7.1.dev0
  - Tokenizers 0.13.2