SemihDurmaz committed (verified)
Commit 41530f0 · 1 Parent(s): ecd8c93

End of training

README.md ADDED
@@ -0,0 +1,82 @@
+ ---
+ language:
+ - tr
+ license: apache-2.0
+ base_model: openai/whisper-small
+ tags:
+ - whisper-event
+ - generated_from_trainer
+ datasets:
+ - mozilla-foundation/common_voice_11_0
+ metrics:
+ - wer
+ model-index:
+ - name: Whisper Small TR - Semih v6
+   results:
+   - task:
+       name: Automatic Speech Recognition
+       type: automatic-speech-recognition
+     dataset:
+       name: Common Voice 11.0
+       type: mozilla-foundation/common_voice_11_0
+       config: tr
+       split: test
+       args: tr
+     metrics:
+     - name: Wer
+       type: wer
+       value: 17.225933798967507
+ ---
+ 
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+ 
+ # Whisper Small TR - Semih v6
+ 
+ This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.2502
+ - Wer: 17.2259
+ 
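For quick evaluation, a checkpoint like this can be loaded through the `transformers` ASR pipeline. This is a minimal sketch: the repo id below is a hypothetical placeholder (substitute the actual Hub id of this model), and the language/task settings assume Turkish transcription as described above.

```python
# Minimal inference sketch; the repo id is a placeholder, not confirmed by this card.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="SemihDurmaz/whisper-small-tr-v6",  # hypothetical repo id
)

# Whisper expects 16 kHz audio; the pipeline resamples input files automatically.
result = asr(
    "sample_tr.wav",
    generate_kwargs={"language": "turkish", "task": "transcribe"},
)
print(result["text"])
```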
+ ## Model description
+ 
+ More information needed
+ 
+ ## Intended uses & limitations
+ 
+ More information needed
+ 
+ ## Training and evaluation data
+ 
+ More information needed
+ 
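The card leaves this section blank, but the metadata above names the evaluation data: the Turkish test split of `mozilla-foundation/common_voice_11_0`. A minimal loading sketch follows; the resampling step and the `trust_remote_code` flag are assumptions about the setup, not taken from the card.

```python
# Sketch of loading the evaluation data named in the card metadata.
# Common Voice 11.0 is a gated, script-based dataset: accepting its terms
# on the Hub and logging in with an access token may be required.
from datasets import Audio, load_dataset

common_voice = load_dataset(
    "mozilla-foundation/common_voice_11_0",
    "tr",
    split="test",
    trust_remote_code=True,
)

# Whisper's feature extractor expects 16 kHz audio.
common_voice = common_voice.cast_column("audio", Audio(sampling_rate=16_000))
```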
+ ## Training procedure
+ 
+ ### Training hyperparameters
+ 
+ The following hyperparameters were used during training (a training-arguments sketch follows this list):
+ - learning_rate: 1e-05
+ - train_batch_size: 32
+ - eval_batch_size: 8
+ - seed: 42
+ - gradient_accumulation_steps: 2
+ - total_train_batch_size: 64
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - lr_scheduler_warmup_steps: 500
+ - training_steps: 1500
+ - mixed_precision_training: Native AMP
+ 
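As a point of reference, here is a hedged sketch of how these values map onto `Seq2SeqTrainingArguments` from `transformers`. The `output_dir` is a placeholder, and using the `Seq2SeqTrainer` API at all is an inference from the `generated_from_trainer` tag rather than something the card states.

```python
# Sketch of Seq2SeqTrainingArguments mirroring the hyperparameter list above.
# The Adam betas/epsilon and the linear scheduler from the card are the
# Trainer defaults, so they need no explicit arguments here.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-small-tr-v6",  # placeholder path
    learning_rate=1e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,  # 32 x 2 = 64 total train batch size
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=1500,
    fp16=True,  # "Native AMP" mixed precision
)
```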
+ ### Training results
+ 
+ | Training Loss | Epoch | Step | Validation Loss | Wer |
+ |:-------------:|:------:|:----:|:---------------:|:-------:|
+ | 0.0543 | 3.0337 | 1000 | 0.2502 | 17.2259 |
+ 
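The Wer column above is a percentage. Below is a minimal sketch of how such a score is commonly computed with the `evaluate` library; whether this exact code produced the number reported here is an assumption, and the transcripts shown are made-up examples.

```python
# WER sketch: compare predicted transcripts against references.
import evaluate

wer_metric = evaluate.load("wer")
predictions = ["merhaba dünya"]           # model output (example only)
references = ["merhaba dünya nasılsın"]   # ground truth (example only)

# evaluate returns a fraction; the card reports it scaled to a percentage.
wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.2f}")
```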
+ 
+ ### Framework versions
+ 
+ - Transformers 4.41.2
+ - Pytorch 2.3.0+cu121
+ - Datasets 2.20.0
+ - Tokenizers 0.19.1
runs/Jun27_17-04-07_bc9ee3aba6e7/events.out.tfevents.1719507866.bc9ee3aba6e7.1436.0 CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:b165e92d9a987090c54ebb0b408a748090c625e27e3513cf09547bb672eac54d
- size 14387
+ oid sha256:63f0a0e434428d9900b11a78cb8b8a86461d307863e4411eacea62a5b9028e87
+ size 18961