fawzanaramam committed on
Commit 4ba4139 · verified · 1 Parent(s): 0c186f1

End of training

Files changed (3):
  1. README.md +13 -37
  2. generation_config.json +10 -26
  3. model.safetensors +1 -1
README.md CHANGED

```diff
@@ -2,38 +2,30 @@
 language:
 - ar
 license: apache-2.0
-base_model: openai/whisper-small
+base_model: openai/whisper-medium
 tags:
 - generated_from_trainer
 datasets:
 - fawzanaramam/the-amma-juz
-metrics:
-- wer
 model-index:
-- name: Whisper Small Finetuned on Last Chapters of Quran
-  results:
-  - task:
-      name: Automatic Speech Recognition
-      type: automatic-speech-recognition
-    dataset:
-      name: The Truth Last Chapters
-      type: fawzanaramam/the-amma-juz
-      args: 'config: ar, split: train'
-    metrics:
-    - name: Wer
-      type: wer
-      value: 5.250596658711217
+- name: Whisper Medium Finetuned on Amma Juz of Quran
+  results: []
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
 should probably proofread and complete it, then remove this comment. -->
 
-# Whisper Small Finetuned on Last Chapters of Quran
+# Whisper Medium Finetuned on Amma Juz of Quran
 
-This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the The Truth Last Chapters dataset.
+This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the The Truth Amma Juz dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.0584
-- Wer: 5.2506
+- eval_loss: 0.0032
+- eval_wer: 0.5102
+- eval_runtime: 47.9061
+- eval_samples_per_second: 2.087
+- eval_steps_per_second: 0.271
+- epoch: 0.6653
+- step: 950
 
 ## Model description
@@ -59,25 +51,9 @@ The following hyperparameters were used during training:
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_steps: 10
-- training_steps: 100
+- num_epochs: 3.0
 - mixed_precision_training: Native AMP
 
-### Training results
-
-| Training Loss | Epoch  | Step | Validation Loss | Wer      |
-|:-------------:|:------:|:----:|:---------------:|:--------:|
-| No log        | 0.0725 | 10   | 0.8505          | 117.6611 |
-| No log        | 0.1449 | 20   | 0.2837          | 43.3174  |
-| 0.7614        | 0.2174 | 30   | 0.1635          | 33.8902  |
-| 0.7614        | 0.2899 | 40   | 0.1166          | 10.7399  |
-| 0.1326        | 0.3623 | 50   | 0.0962          | 12.0525  |
-| 0.1326        | 0.4348 | 60   | 0.0759          | 8.2339   |
-| 0.1326        | 0.5072 | 70   | 0.0681          | 14.2005  |
-| 0.1009        | 0.5797 | 80   | 0.0635          | 5.7279   |
-| 0.1009        | 0.6522 | 90   | 0.0601          | 5.4893   |
-| 0.0653        | 0.7246 | 100  | 0.0584          | 5.2506   |
-
-
 ### Framework versions
 
 - Transformers 4.41.1
```
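The metric removed from and reported in the model card above is the word error rate (Wer), e.g. 5.2506 for the old run and 0.5102 for the new one. As a reference, here is a minimal sketch of how WER is computed: word-level Levenshtein (edit) distance between reference and hypothesis transcripts, scaled to percent. Production evaluations typically use the `jiwer` or `evaluate` packages instead; this version is for illustration only.

```python
# Minimal word error rate (WER) sketch -- illustrative, not the exact code
# the Trainer uses. dp[i][j] holds the edit distance between the first i
# reference words and the first j hypothesis words.

def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i  # delete all remaining reference words
    for j in range(len(hyp) + 1):
        dp[0][j] = j  # insert all remaining hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution / match
    return 100.0 * dp[len(ref)][len(hyp)] / len(ref)

print(wer("the quick brown fox", "the quik brown cat"))  # 50.0
```

Two substitutions over four reference words give 50.0; a WER of 0.5102 as reported above means roughly one word error per 200 words.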
generation_config.json CHANGED

```diff
@@ -1,44 +1,28 @@
 {
   "alignment_heads": [
     [
-      5,
-      3
+      13,
+      15
     ],
     [
-      5,
-      9
-    ],
-    [
-      8,
-      0
-    ],
-    [
-      8,
+      15,
       4
     ],
     [
-      8,
-      7
+      15,
+      15
     ],
     [
-      8,
-      8
+      16,
+      1
     ],
     [
-      9,
+      20,
       0
     ],
     [
-      9,
-      7
-    ],
-    [
-      9,
-      9
-    ],
-    [
-      10,
-      5
+      23,
+      4
     ]
   ],
   "begin_suppress_tokens": [
```
model.safetensors CHANGED

```diff
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:3d84c4b2675dc60a13d28b6d3a83d7ddbd34db7481f2e79cb7749f90efe60ca8
+oid sha256:c3a2abd5ff337b8c90cd9f6d25d3866015765854afe3e5d722f8ba9f556e7715
 size 3055544304
```
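What the diff shows for `model.safetensors` is not the weights themselves but a Git LFS pointer file: three `key value` lines (`version`, `oid`, `size`). Only the sha256 oid changes in this commit, while the byte size stays identical, which is consistent with retraining the same architecture. A minimal sketch of parsing such a pointer (`parse_lfs_pointer` is a hypothetical helper, not part of any LFS library):

```python
# Parse a Git LFS pointer file into its key/value fields.
# Pointer contents copied from the diff above.
def parse_lfs_pointer(text: str) -> dict:
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")  # split on the first space only
        fields[key] = value
    return fields

new_pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:c3a2abd5ff337b8c90cd9f6d25d3866015765854afe3e5d722f8ba9f556e7715
size 3055544304
"""

fields = parse_lfs_pointer(new_pointer)
assert fields["oid"].startswith("sha256:")
print(int(fields["size"]))  # 3055544304 bytes, ~3.1 GB of weights
```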