adenhaus committed
Commit 0456169 · verified · 1 Parent(s): 269b67b

End of training

Files changed (2):
  1. README.md +13 -14
  2. model.safetensors +1 -1
README.md CHANGED
@@ -18,8 +18,8 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [MCG-NJU/videomae-small-finetuned-kinetics](https://huggingface.co/MCG-NJU/videomae-small-finetuned-kinetics) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.2644
-- Accuracy: 0.8889
+- Loss: 0.6398
+- Accuracy: 0.7027
 
 ## Model description
 
@@ -45,27 +45,26 @@ The following hyperparameters were used during training:
 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_ratio: 0.1
-- training_steps: 130
+- training_steps: 190
 
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss | Accuracy |
 |:-------------:|:------:|:----:|:---------------:|:--------:|
-| 0.6846 | 0.1077 | 14  | 0.6309 | 0.6667 |
-| 0.5857 | 1.1077 | 28  | 0.5324 | 0.7037 |
-| 0.3921 | 2.1077 | 42  | 0.5325 | 0.6296 |
-| 0.3011 | 3.1077 | 56  | 0.4113 | 0.8889 |
-| 0.2184 | 4.1077 | 70  | 0.3408 | 0.8889 |
-| 0.1523 | 5.1077 | 84  | 0.3739 | 0.8519 |
-| 0.1197 | 6.1077 | 98  | 0.3697 | 0.7778 |
-| 0.0763 | 7.1077 | 112 | 0.2562 | 0.8889 |
-| 0.0724 | 8.1077 | 126 | 0.2623 | 0.8889 |
-| 0.0685 | 9.0308 | 130 | 0.2644 | 0.8889 |
+| 1.0364 | 0.1053 | 20  | 1.0361 | 0.4324 |
+| 0.8371 | 1.1053 | 40  | 0.9787 | 0.5676 |
+| 0.6885 | 2.1053 | 60  | 0.9205 | 0.5676 |
+| 0.5484 | 3.1053 | 80  | 0.7844 | 0.6216 |
+| 0.452  | 4.1053 | 100 | 0.7905 | 0.5676 |
+| 0.4008 | 5.1053 | 120 | 0.7258 | 0.6216 |
+| 0.291  | 6.1053 | 140 | 0.7222 | 0.6486 |
+| 0.211  | 7.1053 | 160 | 0.6974 | 0.7027 |
+| 0.2003 | 8.1053 | 180 | 0.6377 | 0.7027 |
+| 0.2124 | 9.0526 | 190 | 0.6398 | 0.7027 |
 
 
 ### Framework versions
 
 - Transformers 4.47.1
 - Pytorch 2.5.1+cu124
-- Datasets 3.2.0
 - Tokenizers 0.21.0
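
The card lists the optimizer and schedule but no training code. A minimal sketch of the corresponding `TrainingArguments`, using only the values visible in this diff, might look like the following (the output directory is hypothetical, and the learning rate and batch size are not shown in the hunk, so they are omitted):

```python
from transformers import TrainingArguments

# Sketch only: reproduces the hyperparameters visible in this diff.
# output_dir is a hypothetical placeholder; learning rate and batch size
# are not shown in the hunk and are therefore left at their defaults.
training_args = TrainingArguments(
    output_dir="videomae-small-finetuned",  # hypothetical path
    optim="adamw_torch",                    # OptimizerNames.ADAMW_TORCH
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    max_steps=190,                          # training_steps after this commit
)
```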
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:9b6ef88b09aba606d991e2c9bc02c9062498ad1112923246bf920f0fc96026f6
+oid sha256:9e4b99178c9c39cb14a05dbf10d87cd170160087b5692194d50450047fb8829c
 size 87546468
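
The new `model.safetensors` entry is a Git LFS pointer to the retrained weights. For reference, a minimal sketch of loading a VideoMAE classification checkpoint like this one follows; the repo id is hypothetical, since the commit page does not show it, and the dummy clip simply stands in for real video frames:

```python
import numpy as np
import torch
from transformers import VideoMAEImageProcessor, VideoMAEForVideoClassification

repo_id = "adenhaus/videomae-small-finetuned"  # hypothetical id; not shown in this commit

processor = VideoMAEImageProcessor.from_pretrained(repo_id)
model = VideoMAEForVideoClassification.from_pretrained(repo_id)

# 16 dummy RGB frames stand in for a real clip (the frame count follows the
# base checkpoint's default and may differ in this model's config).
video = list(np.random.randint(0, 255, (16, 224, 224, 3), dtype=np.uint8))

inputs = processor(video, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```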