ilokavat committed (verified)
Commit b7504b9 · Parent: 331aba0

End of training

Files changed (1): README.md (+17 -15)
README.md CHANGED

```diff
@@ -18,8 +18,8 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [openai/whisper-medium.en](https://huggingface.co/openai/whisper-medium.en) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Loss: 1.0557
-- Wer: 0.1819
+- Loss: 0.5138
+- Wer: 0.1293
 
 ## Model description
 
@@ -38,30 +38,32 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- learning_rate: 1e-06
-- train_batch_size: 16
+- learning_rate: 5e-06
+- train_batch_size: 32
 - eval_batch_size: 4
 - seed: 42
 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_steps: 25
-- training_steps: 250
+- training_steps: 300
 - mixed_precision_training: Native AMP
 
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss | Wer |
 |:-------------:|:-------:|:----:|:---------------:|:------:|
-| 2.466 | 1.1364 | 25 | 2.3416 | 0.3281 |
-| 1.7788 | 2.2727 | 50 | 1.5774 | 0.2671 |
-| 1.2151 | 3.4091 | 75 | 1.2743 | 0.2429 |
-| 0.9613 | 4.5455 | 100 | 1.1783 | 0.2135 |
-| 0.8534 | 5.6818 | 125 | 1.1291 | 0.2008 |
-| 0.7617 | 6.8182 | 150 | 1.1021 | 0.1924 |
-| 0.77 | 7.9545 | 175 | 1.0804 | 0.1872 |
-| 0.7212 | 9.0909 | 200 | 1.0665 | 0.1819 |
-| 0.7186 | 10.2273 | 225 | 1.0592 | 0.1819 |
-| 0.6799 | 11.3636 | 250 | 1.0557 | 0.1819 |
+| 2.1712 | 2.2727 | 25 | 1.4956 | 0.2397 |
+| 0.8938 | 4.5455 | 50 | 0.9926 | 0.1682 |
+| 0.5426 | 6.8182 | 75 | 0.8672 | 0.1535 |
+| 0.4142 | 9.0909 | 100 | 0.7799 | 0.1462 |
+| 0.3002 | 11.3636 | 125 | 0.6942 | 0.1409 |
+| 0.2155 | 13.6364 | 150 | 0.6290 | 0.1346 |
+| 0.1524 | 15.9091 | 175 | 0.5800 | 0.1335 |
+| 0.1109 | 18.1818 | 200 | 0.5496 | 0.1314 |
+| 0.0826 | 20.4545 | 225 | 0.5317 | 0.1304 |
+| 0.0681 | 22.7273 | 250 | 0.5210 | 0.1304 |
+| 0.0575 | 25.0 | 275 | 0.5160 | 0.1293 |
+| 0.053 | 27.2727 | 300 | 0.5138 | 0.1293 |
 
 
 ### Framework versions
```
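For reference, the revised hyperparameters in this commit can be summarized as a configuration sketch. The key names below follow the Hugging Face `Seq2SeqTrainingArguments` naming convention and are assumptions for illustration; the commit itself only records the values, not the training script:

```python
# Hypothetical sketch of the updated training configuration from this commit.
# Key names mirror transformers.Seq2SeqTrainingArguments parameters; they are
# assumed for illustration and were not taken from the actual training script.
updated_config = {
    "learning_rate": 5e-6,               # raised from 1e-6 in the previous run
    "per_device_train_batch_size": 32,   # raised from 16
    "per_device_eval_batch_size": 4,
    "seed": 42,
    "optim": "adamw_torch",              # betas=(0.9, 0.999), epsilon=1e-08
    "lr_scheduler_type": "linear",
    "warmup_steps": 25,
    "max_steps": 300,                    # raised from 250
    "fp16": True,                        # "Native AMP" mixed precision
}

# Relative WER improvement on the evaluation set reported by this commit
# (0.1819 before -> 0.1293 after).
wer_before, wer_after = 0.1819, 0.1293
relative_improvement = (wer_before - wer_after) / wer_before
print(f"WER improved by {relative_improvement:.1%}")
```

The higher learning rate and larger batch size roughly quadruple the effective learning signal per step, which is consistent with the new run converging to a much lower validation loss within a similar step budget.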