SodaXII committed (verified)
Commit 28595f4 · 1 Parent(s): 7959889

Model save
README.md CHANGED
@@ -4,8 +4,6 @@ license: apache-2.0
 base_model: google/vit-base-patch16-224
 tags:
 - generated_from_trainer
-metrics:
-- accuracy
 model-index:
 - name: vit-base-patch16-224_rice-leaf-disease-augmented-v2_fft
   results: []
@@ -18,8 +16,13 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Accuracy: 0.9196
-- Loss: 0.3620
+- epoch: 18.0
+- eval_accuracy: 0.9196
+- eval_loss: 0.3620
+- eval_runtime: 11.0713
+- eval_samples_per_second: 30.349
+- eval_steps_per_second: 0.542
+- step: 2250
 
 ## Model description
 
@@ -45,33 +48,9 @@ The following hyperparameters were used during training:
 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
 - lr_scheduler_type: cosine_with_restarts
 - lr_scheduler_warmup_ratio: 0.1
-- num_epochs: 15
+- num_epochs: 19
 - mixed_precision_training: Native AMP
 
-### Training results
-
-| Training Loss | Epoch | Step | Accuracy | Validation Loss |
-|:-------------:|:-----:|:----:|:--------:|:---------------:|
-| 1.9482 | 1.0 | 125 | 0.5685 | 1.5012 |
-| 0.9894 | 2.0 | 250 | 0.7976 | 0.6444 |
-| 0.3321 | 3.0 | 375 | 0.8958 | 0.3859 |
-| 0.1115 | 4.0 | 500 | 0.9107 | 0.3081 |
-| 0.0387 | 5.0 | 625 | 0.9137 | 0.2980 |
-| 0.0204 | 6.0 | 750 | 0.9137 | 0.2936 |
-| 0.0169 | 7.0 | 875 | 0.9196 | 0.2953 |
-| 0.0078 | 8.0 | 1000 | 0.9226 | 0.3067 |
-| 0.0034 | 9.0 | 1125 | 0.9286 | 0.3087 |
-| 0.0025 | 10.0 | 1250 | 0.9196 | 0.3139 |
-| 0.0023 | 11.0 | 1375 | 0.9196 | 0.3142 |
-| 0.0019 | 12.0 | 1500 | 0.9196 | 0.3288 |
-| 0.0013 | 13.0 | 1625 | 0.9196 | 0.3359 |
-| 0.001 | 14.0 | 1750 | 0.9226 | 0.3413 |
-| 0.0009 | 15.0 | 1875 | 0.9226 | 0.3425 |
-| 0.0009 | 16.0 | 2000 | 0.9226 | 0.3481 |
-| 0.0007 | 17.0 | 2125 | 0.9226 | 0.3571 |
-| 0.0006 | 18.0 | 2250 | 0.9196 | 0.3620 |
-
-
 ### Framework versions
 
 - Transformers 4.48.3
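For context, the hyperparameters listed in the updated card map onto `transformers.TrainingArguments` roughly as sketched below. This is a minimal, hypothetical reconstruction, not the author's training script: `output_dir` and any values that do not appear in this diff (learning rate, batch sizes) are placeholders.

```python
# Sketch only: TrainingArguments implied by the card's hyperparameter list after
# this commit. output_dir, learning rate, and batch sizes are placeholders.
# fp16=True assumes a CUDA device is available.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="vit-base-patch16-224_rice-leaf-disease-augmented-v2_fft",  # placeholder
    num_train_epochs=19,                       # bumped from 15 in this commit
    optim="adamw_torch",                       # AdamW with betas=(0.9, 0.999), eps=1e-08
    lr_scheduler_type="cosine_with_restarts",
    warmup_ratio=0.1,
    fp16=True,                                 # "Native AMP" mixed precision
)
```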
all_results.json ADDED
@@ -0,0 +1,8 @@
+{
+    "epoch": 18.0,
+    "total_flos": 1.1159446583771136e+19,
+    "train_loss": 0.0,
+    "train_runtime": 27.7933,
+    "train_samples_per_second": 4317.59,
+    "train_steps_per_second": 67.462
+}
logs/events.out.tfevents.1740444593.4a8a76f3e68a.1955.1 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:67a825f92c14aa126ac37064870f8fd663e1c996671a18109f5dd9c31b65ca2c
+size 88
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:3e45518fd16a0cfc3f3db29de59fd516f8807a340ff671f0c2bd29aa81adca17
+oid sha256:7ccfeb9fcbfcb69bb1dd3f91ab957cedb45851be3c8d25ba88c6576bc370a613
 size 343242432
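The updated `model.safetensors` is the fine-tuned checkpoint itself, so once pushed it can be loaded for inference with the standard image-classification classes. A minimal sketch follows, assuming the checkpoint is published under the repo id shown (inferred from the committer and the model name; it is not stated in this diff) and that an image processor config is saved alongside the weights.

```python
# Minimal inference sketch. The repo id and image path are assumptions for
# illustration; substitute the actual repository and a real rice-leaf image.
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

repo_id = "SodaXII/vit-base-patch16-224_rice-leaf-disease-augmented-v2_fft"  # assumed
processor = AutoImageProcessor.from_pretrained(repo_id)
model = AutoModelForImageClassification.from_pretrained(repo_id)

image = Image.open("rice_leaf.jpg")  # placeholder input image
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```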
train_results.json ADDED
@@ -0,0 +1,8 @@
+{
+    "epoch": 18.0,
+    "total_flos": 1.1159446583771136e+19,
+    "train_loss": 0.0,
+    "train_runtime": 27.7933,
+    "train_samples_per_second": 4317.59,
+    "train_steps_per_second": 67.462
+}
trainer_state.json CHANGED
@@ -295,6 +295,15 @@
       "eval_samples_per_second": 30.349,
       "eval_steps_per_second": 0.542,
       "step": 2250
+    },
+    {
+      "epoch": 18.0,
+      "step": 2250,
+      "total_flos": 1.1159446583771136e+19,
+      "train_loss": 0.0,
+      "train_runtime": 27.7933,
+      "train_samples_per_second": 4317.59,
+      "train_steps_per_second": 67.462
     }
   ],
   "logging_steps": 500,
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:d04abd6754084172d05ac2641c9aa43625d33f7ba671b44f748caf0a343b2f30
+oid sha256:d91f12d56a4d2f791cdd3cb7ca4fca171c9aedb1d2d21170d89f7b4bafc29c7a
 size 5496