ilokavat committed on
Commit 8e3ec4d · verified · 1 Parent(s): c338c60

End of training

Files changed (1)
  1. README.md +17 -17
README.md CHANGED
@@ -18,8 +18,8 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [openai/whisper-medium.en](https://huggingface.co/openai/whisper-medium.en) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.5759
-- Wer: 0.1630
+- Loss: 1.0557
+- Wer: 0.1819
 
 ## Model description
 
@@ -38,9 +38,9 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- learning_rate: 0.0001
-- train_batch_size: 8
-- eval_batch_size: 8
+- learning_rate: 1e-06
+- train_batch_size: 16
+- eval_batch_size: 4
 - seed: 42
 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
 - lr_scheduler_type: linear
@@ -50,18 +50,18 @@ The following hyperparameters were used during training:
 
 ### Training results
 
-| Training Loss | Epoch | Step | Validation Loss | Wer |
-|:-------------:|:------:|:----:|:---------------:|:------:|
-| 1.6889 | 0.5682 | 25 | 1.0935 | 0.2250 |
-| 0.5293 | 1.1364 | 50 | 0.7940 | 0.4637 |
-| 0.2548 | 1.7045 | 75 | 0.9126 | 0.2702 |
-| 0.1804 | 2.2727 | 100 | 0.6907 | 0.1767 |
-| 0.1173 | 2.8409 | 125 | 0.7136 | 0.1903 |
-| 0.0631 | 3.4091 | 150 | 0.7046 | 0.1966 |
-| 0.046 | 3.9773 | 175 | 0.6465 | 0.1872 |
-| 0.029 | 4.5455 | 200 | 0.6110 | 0.3502 |
-| 0.0107 | 5.1136 | 225 | 0.6141 | 0.1661 |
-| 0.006 | 5.6818 | 250 | 0.5759 | 0.1630 |
+| Training Loss | Epoch | Step | Validation Loss | Wer |
+|:-------------:|:-------:|:----:|:---------------:|:------:|
+| 2.466 | 1.1364 | 25 | 2.3416 | 0.3281 |
+| 1.7788 | 2.2727 | 50 | 1.5774 | 0.2671 |
+| 1.2151 | 3.4091 | 75 | 1.2743 | 0.2429 |
+| 0.9613 | 4.5455 | 100 | 1.1783 | 0.2135 |
+| 0.8534 | 5.6818 | 125 | 1.1291 | 0.2008 |
+| 0.7617 | 6.8182 | 150 | 1.1021 | 0.1924 |
+| 0.77 | 7.9545 | 175 | 1.0804 | 0.1872 |
+| 0.7212 | 9.0909 | 200 | 1.0665 | 0.1819 |
+| 0.7186 | 10.2273 | 225 | 1.0592 | 0.1819 |
+| 0.6799 | 11.3636 | 250 | 1.0557 | 0.1819 |
 
 
 ### Framework versions
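
For readers who want to reproduce a comparable run, the updated hyperparameters in the diff above map naturally onto `Seq2SeqTrainingArguments` from `transformers`. The sketch below is a hypothetical reconstruction, not the author's actual script: only the values visible in the card (learning_rate 1e-06, train/eval batch sizes 16/4, seed 42, adamw_torch with the listed betas and epsilon, linear scheduler) come from the diff, while the output directory, step counts, and evaluation cadence are assumptions inferred from the results table.

```python
# Hypothetical sketch of the updated training configuration, assuming the
# standard transformers Seq2SeqTrainer workflow for Whisper fine-tuning.
# Only values shown in the diff come from the model card; everything else
# (output_dir, max_steps, eval/logging cadence) is an illustrative guess.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-medium.en-finetuned",  # assumption: not stated in the card
    learning_rate=1e-6,                # + learning_rate: 1e-06
    per_device_train_batch_size=16,    # + train_batch_size: 16
    per_device_eval_batch_size=4,      # + eval_batch_size: 4
    seed=42,                           #   seed: 42
    optim="adamw_torch",               #   optimizer: adamw_torch
    adam_beta1=0.9,                    #   betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,                 #   epsilon=1e-08
    lr_scheduler_type="linear",        #   lr_scheduler_type: linear
    max_steps=250,                     # assumption: last logged step in the results table
    eval_strategy="steps",             # assumption: results are logged every 25 steps
    eval_steps=25,
    logging_steps=25,
    predict_with_generate=True,        # lets evaluation compute WER on generated text
)
```

Relative to the previous configuration, the learning rate drops from 0.0001 to 1e-06 and the train/eval batch sizes change from 8/8 to 16/4; the other listed settings are unchanged.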
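
The Wer values reported in the card are word error rates on the evaluation set (lower is better; 0.1819 corresponds to roughly 18% of reference words being wrong). A minimal sketch of how such a score is typically computed with the `evaluate` library and a transformers ASR pipeline is shown below; the repository id, audio file, and reference transcript are placeholders, not values taken from this model card.

```python
# Hypothetical sketch: computing WER for a fine-tuned Whisper checkpoint.
# The checkpoint name, audio file, and reference text are placeholders.
import evaluate
from transformers import pipeline

wer_metric = evaluate.load("wer")

# assumption: the fine-tuned weights are loadable as an ASR pipeline
asr = pipeline("automatic-speech-recognition",
               model="ilokavat/whisper-medium.en-finetuned")  # placeholder repo id

prediction = asr("example.wav")["text"]          # placeholder audio file
reference = "the expected transcript goes here"  # placeholder reference

# WER = (substitutions + insertions + deletions) / number of reference words
wer = wer_metric.compute(predictions=[prediction], references=[reference])
print(f"WER: {wer:.4f}")  # the card reports 0.1819 on its own evaluation set
```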