SamuelM0422 committed
Commit 8a46e26 · verified · 1 Parent(s): d464473

End of training

Files changed (2)
  1. README.md +34 -24
  2. model.safetensors +1 -1
README.md CHANGED
@@ -1,7 +1,7 @@
 ---
 library_name: transformers
 license: apache-2.0
-base_model: ntu-spml/distilhubert
+base_model: SamuelM0422/distilhubert-finetuned-gtzan
 tags:
 - generated_from_trainer
 datasets:
@@ -9,7 +9,7 @@ datasets:
 metrics:
 - accuracy
 model-index:
-- name: distilhubert-finetuned-gtzan
+- name: distilhubert-finetuned-gtzan2
   results:
   - task:
       name: Audio Classification
@@ -23,18 +23,18 @@ model-index:
     metrics:
     - name: Accuracy
       type: accuracy
-      value: 0.85
+      value: 0.89
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
 should probably proofread and complete it, then remove this comment. -->
 
-# distilhubert-finetuned-gtzan
+# distilhubert-finetuned-gtzan2
 
-This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset.
+This model is a fine-tuned version of [SamuelM0422/distilhubert-finetuned-gtzan](https://huggingface.co/SamuelM0422/distilhubert-finetuned-gtzan) on the GTZAN dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.5543
-- Accuracy: 0.85
+- Loss: 1.2277
+- Accuracy: 0.89
 
 ## Model description
 
@@ -54,34 +54,44 @@ More information needed
 
 The following hyperparameters were used during training:
 - learning_rate: 5e-05
-- train_batch_size: 8
-- eval_batch_size: 8
+- train_batch_size: 16
+- eval_batch_size: 16
 - seed: 42
-- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
+- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_ratio: 0.1
-- num_epochs: 10
+- num_epochs: 20
 - mixed_precision_training: Native AMP
 
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss | Accuracy |
 |:-------------:|:-----:|:----:|:---------------:|:--------:|
-| 1.9436 | 1.0 | 113 | 1.8921 | 0.49 |
-| 1.227 | 2.0 | 226 | 1.2195 | 0.66 |
-| 1.0689 | 3.0 | 339 | 0.9700 | 0.76 |
-| 0.6739 | 4.0 | 452 | 0.8438 | 0.76 |
-| 0.5337 | 5.0 | 565 | 0.6158 | 0.83 |
-| 0.4014 | 6.0 | 678 | 0.5391 | 0.83 |
-| 0.2463 | 7.0 | 791 | 0.5328 | 0.88 |
-| 0.1579 | 8.0 | 904 | 0.5207 | 0.87 |
-| 0.1845 | 9.0 | 1017 | 0.5466 | 0.86 |
-| 0.0921 | 10.0 | 1130 | 0.5543 | 0.85 |
+| 0.279 | 1.0 | 50 | 0.4636 | 0.88 |
+| 0.1597 | 2.0 | 100 | 0.3688 | 0.895 |
+| 0.0882 | 3.0 | 150 | 0.4473 | 0.88 |
+| 0.0486 | 4.0 | 200 | 0.5118 | 0.87 |
+| 0.0341 | 5.0 | 250 | 0.4274 | 0.895 |
+| 0.0058 | 6.0 | 300 | 0.5832 | 0.86 |
+| 0.0017 | 7.0 | 350 | 0.5238 | 0.9 |
+| 0.0004 | 8.0 | 400 | 0.6152 | 0.895 |
+| 0.0001 | 9.0 | 450 | 0.6718 | 0.915 |
+| 0.0 | 10.0 | 500 | 0.9763 | 0.875 |
+| 0.0 | 11.0 | 550 | 1.0753 | 0.885 |
+| 0.0 | 12.0 | 600 | 0.9361 | 0.905 |
+| 0.1016 | 13.0 | 650 | 1.1638 | 0.89 |
+| 0.0 | 14.0 | 700 | 1.1003 | 0.895 |
+| 0.0 | 15.0 | 750 | 1.0716 | 0.89 |
+| 0.0 | 16.0 | 800 | 1.1925 | 0.89 |
+| 0.0609 | 17.0 | 850 | 1.1557 | 0.89 |
+| 0.0 | 18.0 | 900 | 1.1128 | 0.89 |
+| 0.0 | 19.0 | 950 | 1.2144 | 0.89 |
+| 0.0 | 20.0 | 1000 | 1.2277 | 0.89 |
 
 
 ### Framework versions
 
-- Transformers 4.48.3
-- Pytorch 2.5.1+cu124
-- Datasets 3.3.2
+- Transformers 4.47.0
+- Pytorch 2.5.1+cu121
+- Datasets 3.3.1
 - Tokenizers 0.21.0
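
The hyperparameters added in this commit map onto `transformers.TrainingArguments` roughly as sketched below. This is a reconstruction from the card, not the actual training script: the `output_dir` and `eval_strategy` values are assumptions, and the dataset/feature-extractor setup is omitted.

```python
# Minimal sketch reproducing the card's listed hyperparameters.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="distilhubert-finetuned-gtzan2",  # assumed run name
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    optim="adamw_torch",       # betas=(0.9, 0.999), eps=1e-8 are its defaults
    lr_scheduler_type="linear",
    warmup_ratio=0.1,          # 0.1 * 1000 total steps -> 100 warmup steps
    num_train_epochs=20,
    fp16=True,                 # "Native AMP" mixed precision
    eval_strategy="epoch",     # assumed; the card logs one eval row per epoch
)
```

With 50 optimizer steps per epoch over 20 epochs, training runs 1,000 steps in total, so the 0.1 warmup ratio corresponds to the first 100 steps.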
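
The retrained card reports 0.89 eval accuracy, up from 0.85. A quick way to try the checkpoint is the audio-classification pipeline; a minimal sketch follows, assuming the repository id matches the card's model name and with a placeholder audio path standing in for a GTZAN clip.

```python
# Hedged usage sketch; the repo id and file path are assumptions.
from transformers import pipeline

classifier = pipeline(
    "audio-classification",
    model="SamuelM0422/distilhubert-finetuned-gtzan2",  # assumed repo id
)

# GTZAN is 30-second music clips across 10 genres; any local clip works here.
for pred in classifier("some_clip.wav", top_k=3):
    print(f"{pred['label']}: {pred['score']:.3f}")
```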
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:4299826f6618819fbde6c2b3df457d195bd1888b438a730af824b780c02a965e
+oid sha256:93c3ed2a710edf61b71b0f8eb12e6b0be95455399c65d1002ca484f9db83c452
 size 94771728
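
The model.safetensors change is a Git LFS pointer update: only the sha256 oid changed, while the size stayed at 94,771,728 bytes, consistent with retrained weights of identical shape. To confirm a locally downloaded model.safetensors matches this commit, one can hash it against the pointer; a small sketch, with the local path as a placeholder:

```python
# Verify a downloaded model.safetensors against the LFS pointer above.
import hashlib
from pathlib import Path

EXPECTED_OID = "93c3ed2a710edf61b71b0f8eb12e6b0be95455399c65d1002ca484f9db83c452"
EXPECTED_SIZE = 94771728  # bytes, from the pointer file

path = Path("model.safetensors")  # placeholder local path
assert path.stat().st_size == EXPECTED_SIZE, "size mismatch"

digest = hashlib.sha256()
with path.open("rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # hash in 1 MiB chunks
        digest.update(chunk)

assert digest.hexdigest() == EXPECTED_OID, "hash mismatch"
print("model.safetensors matches this commit's LFS pointer")
```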