thelikhit committed on
Commit cc3eb08 · verified · 1 Parent(s): a7ceb24

End of training

README.md CHANGED
@@ -6,36 +6,9 @@ tags:
 - generated_from_trainer
 datasets:
 - gtzan
-metrics:
-- accuracy
-- precision
-- recall
-- f1
 model-index:
 - name: hubert-model-v1
-  results:
-  - task:
-      name: Audio Classification
-      type: audio-classification
-    dataset:
-      name: gtzan
-      type: gtzan
-      config: default
-      split: train
-      args: default
-    metrics:
-    - name: Accuracy
-      type: accuracy
-      value: 0.775
-    - name: Precision
-      type: precision
-      value: 0.7724506443587075
-    - name: Recall
-      type: recall
-      value: 0.775
-    - name: F1
-      type: f1
-      value: 0.7598948943956552
+  results: []
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -45,11 +18,16 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [facebook/hubert-base-ls960](https://huggingface.co/facebook/hubert-base-ls960) on the gtzan dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.8438
-- Accuracy: 0.775
-- Precision: 0.7725
-- Recall: 0.775
-- F1: 0.7599
+- eval_loss: 2.1979
+- eval_accuracy: 0.245
+- eval_precision: 0.2882
+- eval_recall: 0.245
+- eval_f1: 0.1828
+- eval_runtime: 105.666
+- eval_samples_per_second: 1.893
+- eval_steps_per_second: 0.473
+- epoch: 1.0
+- step: 25
 
 ## Model description
 
@@ -72,26 +50,14 @@ The following hyperparameters were used during training:
 - train_batch_size: 4
 - eval_batch_size: 4
 - seed: 42
-- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
+- gradient_accumulation_steps: 8
+- total_train_batch_size: 32
+- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_ratio: 0.1
-- num_epochs: 8
+- num_epochs: 10
 - mixed_precision_training: Native AMP
 
-### Training results
-
-| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1     |
-|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
-| No log        | 1.0   | 180  | 2.0025          | 0.29     | 0.2351    | 0.29   | 0.1935 |
-| No log        | 2.0   | 360  | 1.6515          | 0.46     | 0.3961    | 0.46   | 0.3960 |
-| 1.8167        | 3.0   | 540  | 1.2595          | 0.6      | 0.5193    | 0.6    | 0.5449 |
-| 1.8167        | 4.0   | 720  | 1.1376          | 0.655    | 0.6648    | 0.655  | 0.6349 |
-| 1.8167        | 5.0   | 900  | 1.0759          | 0.71     | 0.7372    | 0.71   | 0.6947 |
-| 1.0396        | 6.0   | 1080 | 0.8932          | 0.78     | 0.8324    | 0.78   | 0.7705 |
-| 1.0396        | 7.0   | 1260 | 0.9236          | 0.75     | 0.7820    | 0.75   | 0.7314 |
-| 1.0396        | 8.0   | 1440 | 0.8438          | 0.775    | 0.7725    | 0.775  | 0.7599 |
-
-
 ### Framework versions
 
 - Transformers 4.47.1
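For context, the hyperparameters and eval_* fields in the README diff are the kind that `transformers.Trainer` reports. Below is a minimal sketch, not the repo's actual training script (which is not part of this commit), of how those values typically map onto `TrainingArguments` and a `compute_metrics` callback; the weighted averaging mode and the evaluation strategy are assumptions, and the learning rate is not visible in the diff.

```python
# Minimal sketch (not this repo's training code) mapping the "+" side of the
# hyperparameter hunk onto a transformers Trainer setup.
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support
from transformers import (
    AutoModelForAudioClassification,
    Trainer,
    TrainingArguments,
)

model = AutoModelForAudioClassification.from_pretrained(
    "facebook/hubert-base-ls960", num_labels=10  # GTZAN has 10 genres
)

def compute_metrics(eval_pred):
    """Produces the eval_accuracy / eval_precision / eval_recall / eval_f1 fields.
    The 'weighted' average is an assumption, not confirmed by the commit."""
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    precision, recall, f1, _ = precision_recall_fscore_support(
        labels, preds, average="weighted", zero_division=0
    )
    return {
        "accuracy": accuracy_score(labels, preds),
        "precision": precision,
        "recall": recall,
        "f1": f1,
    }

# Values mirror the updated README: per-device batch 4 with 8 accumulation steps
# gives the listed total_train_batch_size of 32.
training_args = TrainingArguments(
    output_dir="hubert-model-v1",
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=8,
    num_train_epochs=10,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    seed=42,
    fp16=True,              # "Native AMP" mixed precision
    eval_strategy="epoch",  # assumption; the actual strategy is not in the diff
)

# trainer = Trainer(model=model, args=training_args, train_dataset=...,
#                   eval_dataset=..., compute_metrics=compute_metrics)
# trainer.train()
```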
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:466cfb52e67ca51121c39a8d7448519fef69b93d46e4c49033b55f4ed14ded93
+oid sha256:eb61257795cae4d6900ffe1fd9a4fb3a43bdd70f1c93b15d608bee079833ec4a
 size 378310176
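The three-line blocks above and below are Git LFS pointer files: the repository tracks only the `oid sha256` and `size` of the large binary, so this commit swaps in a new hash for same-size weights. A minimal sketch of checking a downloaded copy against the pointer, assuming the repo id `thelikhit/hubert-model-v1` (inferred from the author and model name, not stated on this page):

```python
# Minimal sketch: verify a downloaded file against the "oid sha256:..." and "size"
# fields recorded in the Git LFS pointer.
import hashlib
import os

from huggingface_hub import hf_hub_download

path = hf_hub_download("thelikhit/hubert-model-v1", "model.safetensors")  # assumed repo id

digest = hashlib.sha256()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        digest.update(chunk)

print(digest.hexdigest())      # should match the "+" oid after this commit
print(os.path.getsize(path))   # should match 378310176
```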
runs/Jan24_16-27-12_307f96733aba/events.out.tfevents.1737736033.307f96733aba.2161.0 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f593c0a9046d3028b4bec17456cb25057694fbf511910736135c25caefa6278c
+size 7229
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:a14f602f7ce0ee0053524d247d06362aedf398f01ea7ec0e5c722bdca9bc1695
+oid sha256:b5c799319c9fee50e75ec7b90bd6d378a1bd967b218589df6a652e51e9031a88
 size 5304
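`training_args.bin` is the `TrainingArguments` object that `Trainer` serializes with `torch.save`, which is why it changes whenever the run configuration does. A minimal sketch of inspecting it locally; note that loading a pickled object requires trusting the file, and recent PyTorch versions need `weights_only=False` for non-tensor checkpoints:

```python
# Minimal sketch: inspect the serialized TrainingArguments stored in training_args.bin.
import torch

args = torch.load("training_args.bin", weights_only=False)
print(type(args).__name__)  # e.g. TrainingArguments
print(args.gradient_accumulation_steps, args.num_train_epochs, args.warmup_ratio)
```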