aslon1213 committed on
Commit a0cbcd6 · verified
1 Parent(s): e890036

End of training

Files changed (2)
  1. README.md +23 -13
  2. generation_config.json +1 -1
README.md CHANGED
@@ -25,7 +25,7 @@ model-index:
   metrics:
   - name: Wer
     type: wer
-  value: 35.94645555236442
+  value: 30.20491240338149
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -35,8 +35,8 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 16.1 dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.3910
-- Wer: 35.9465
+- Loss: 0.3052
+- Wer: 30.2049
 
 ## Model description
 
@@ -62,23 +62,33 @@ The following hyperparameters were used during training:
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_steps: 500
-- training_steps: 5000
+- training_steps: 15000
 - mixed_precision_training: Native AMP
 
 ### Training results
 
-| Training Loss | Epoch  | Step | Validation Loss | Wer     |
-|:-------------:|:------:|:----:|:---------------:|:-------:|
-| 0.4411        | 0.0176 | 1000 | 0.5526          | 47.9128 |
-| 0.327         | 0.0352 | 2000 | 0.4648          | 41.1885 |
-| 0.2883        | 0.0528 | 3000 | 0.4286          | 37.6822 |
-| 0.2777        | 0.0704 | 4000 | 0.4037          | 36.9479 |
-| 0.2543        | 0.0880 | 5000 | 0.3910          | 35.9465 |
+| Training Loss | Epoch  | Step  | Validation Loss | Wer     |
+|:-------------:|:------:|:-----:|:---------------:|:-------:|
+| 0.4689        | 0.0210 | 1000  | 0.5616          | 48.2462 |
+| 0.3234        | 0.0420 | 2000  | 0.4695          | 44.8210 |
+| 0.3078        | 0.0630 | 3000  | 0.4184          | 38.8747 |
+| 0.2845        | 0.0840 | 4000  | 0.3955          | 36.2861 |
+| 0.2771        | 0.1050 | 5000  | 0.3720          | 35.5344 |
+| 0.2459        | 0.1260 | 6000  | 0.3649          | 35.9415 |
+| 0.2482        | 0.1470 | 7000  | 0.3499          | 34.3993 |
+| 0.26          | 0.1680 | 8000  | 0.3389          | 32.9183 |
+| 0.2128        | 0.1891 | 9000  | 0.3321          | 33.2493 |
+| 0.2092        | 0.2101 | 10000 | 0.3215          | 31.4973 |
+| 0.1942        | 0.2311 | 11000 | 0.3194          | 31.0465 |
+| 0.1912        | 0.2521 | 12000 | 0.3184          | 31.2850 |
+| 0.2199        | 0.2731 | 13000 | 0.3100          | 30.6395 |
+| 0.1861        | 0.2941 | 14000 | 0.3059          | 30.8667 |
+| 0.2344        | 0.3151 | 15000 | 0.3052          | 30.2049 |
 
 
 ### Framework versions
 
-- Transformers 4.40.1
+- Transformers 4.41.2
 - Pytorch 2.2.0
-- Datasets 2.19.0
+- Datasets 2.19.2
 - Tokenizers 0.19.1
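The hyperparameters listed in the diff above map onto `Seq2SeqTrainingArguments` in Transformers. A minimal sketch of that mapping, assuming the standard Whisper fine-tuning setup (argument names are from the Transformers API; values come from the card; `output_dir` is a placeholder, not from the card):

```python
from transformers import Seq2SeqTrainingArguments

# Values from the model card's hyperparameter list; output_dir is hypothetical.
args = Seq2SeqTrainingArguments(
    output_dir="whisper-small-cv16",  # placeholder path
    max_steps=15000,                  # training_steps
    warmup_steps=500,                 # lr_scheduler_warmup_steps
    lr_scheduler_type="linear",
    adam_beta1=0.9,                   # Adam betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    fp16=True,                        # mixed_precision_training: Native AMP
)
```

This is only a reconstruction of the recorded settings, not the author's actual training script.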
generation_config.json CHANGED
@@ -261,5 +261,5 @@
     "transcribe": 50359,
     "translate": 50358
   },
-  "transformers_version": "4.40.1"
+  "transformers_version": "4.41.2"
 }
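For reference, the Wer figures in the card are word error rates in percent: the word-level edit distance between hypothesis and reference transcripts, divided by the number of reference words. A minimal self-contained sketch (the function name and implementation are illustrative, not the Trainer's actual metric code, though the definition matches):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate in percent: word-level Levenshtein distance
    divided by the number of words in the reference."""
    ref = reference.split()
    hyp = hypothesis.split()
    # One-row dynamic-programming edit distance over words.
    d = list(range(len(hyp) + 1))
    for i in range(1, len(ref) + 1):
        prev = d[0]          # d[i-1][j-1] from the previous row
        d[0] = i
        for j in range(1, len(hyp) + 1):
            cur = d[j]       # save d[i-1][j] before overwriting
            if ref[i - 1] == hyp[j - 1]:
                d[j] = prev
            else:
                # substitution, deletion, insertion
                d[j] = 1 + min(prev, d[j], d[j - 1])
            prev = cur
    return 100.0 * d[len(hyp)] / len(ref)
```

For example, one substituted word in a three-word reference gives a WER of about 33.33; the card's final 30.2049 means roughly three word errors per ten reference words on the evaluation set.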