Enyonam committed
Commit a8fc999 · 1 Parent(s): 4e2ac8c

End of training

README.md CHANGED
@@ -1,5 +1,6 @@
 ---
-base_model: huawei-noah/TinyBERT_General_4L_312D
+license: mit
+base_model: roberta-base
 tags:
 - generated_from_trainer
 metrics:
@@ -14,10 +15,10 @@ should probably proofread and complete it, then remove this comment. -->
 
 # NLP_Capstone
 
-This model is a fine-tuned version of [huawei-noah/TinyBERT_General_4L_312D](https://huggingface.co/huawei-noah/TinyBERT_General_4L_312D) on an unknown dataset.
+This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.3046
-- Accuracy: 0.8807
+- Loss: 0.2569
+- Accuracy: 0.9311
 
 ## Model description
 
@@ -37,8 +38,8 @@ More information needed
 
 The following hyperparameters were used during training:
 - learning_rate: 2e-05
-- train_batch_size: 32
-- eval_batch_size: 32
+- train_batch_size: 8
+- eval_batch_size: 8
 - seed: 42
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
@@ -46,18 +47,18 @@ The following hyperparameters were used during training:
 
 ### Training results
 
-| Training Loss | Epoch | Step | Validation Loss | Accuracy |
-|:-------------:|:-----:|:----:|:---------------:|:--------:|
-| 0.4323        | 1.0   | 623  | 0.3441          | 0.8573   |
-| 0.3047        | 2.0   | 1246 | 0.3249          | 0.8699   |
-| 0.2598        | 3.0   | 1869 | 0.3046          | 0.8807   |
-| 0.2298        | 4.0   | 2492 | 0.3078          | 0.8828   |
-| 0.2086        | 5.0   | 3115 | 0.3129          | 0.8824   |
+| Training Loss | Epoch | Step  | Validation Loss | Accuracy |
+|:-------------:|:-----:|:-----:|:---------------:|:--------:|
+| 0.2645        | 1.0   | 2491  | 0.2569          | 0.9311   |
+| 0.19          | 2.0   | 4982  | 0.3083          | 0.9301   |
+| 0.1172        | 3.0   | 7473  | 0.3950          | 0.9307   |
+| 0.0654        | 4.0   | 9964  | 0.4016          | 0.9390   |
+| 0.0208        | 5.0   | 12455 | 0.4682          | 0.9376   |
 
 
 ### Framework versions
 
-- Transformers 4.34.1
+- Transformers 4.35.0
 - Pytorch 2.1.0+cu118
 - Datasets 2.14.6
 - Tokenizers 0.14.1
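The per-epoch step counts in the two README versions are consistent with the batch-size change (32 → 8), assuming both runs trained on the same dataset and `steps = ceil(n / batch_size)`. A quick sanity check of the numbers taken from the diff above:

```python
# Steps per epoch reported in the two README versions (from the diff above)
old_batch, old_steps = 32, 623    # earlier run: 623 steps/epoch
new_batch, new_steps = 8, 2491    # this commit's run: 2491 steps/epoch

# With steps = ceil(n / batch), the training-set size n lies in
# ((steps - 1) * batch, steps * batch] for each run.
old_range = ((old_steps - 1) * old_batch + 1, old_steps * old_batch)  # (19905, 19936)
new_range = ((new_steps - 1) * new_batch + 1, new_steps * new_batch)  # (19921, 19928)

# The ranges overlap, so both tables can describe the same dataset.
overlap = (max(old_range[0], new_range[0]), min(old_range[1], new_range[1]))
print(overlap)  # → (19921, 19928)
```

So both tables are plausibly reporting the same training set of roughly 19.9k examples; the step counts are not an inconsistency introduced by the edit.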
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:a7300275e9eabf455d20e5b4438bfffc5108b5e26ea9110968716721f9cdda27
+oid sha256:18752dde6d10b26ec086389e9909621fafa5b325e7894fc21ff09b6d9d372dec
 size 498612824
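Only the weights hash changes here; the pointer file reports the same byte size before and after, so the tensor shapes are unchanged. As a rough check (ignoring the small safetensors header overhead, and assuming fp32 weights at 4 bytes per parameter), the size implies a parameter count in line with roberta-base's ~125M parameters:

```python
# LFS pointer size from the diff above; fp32 assumed (4 bytes/param),
# safetensors header overhead ignored.
size_bytes = 498_612_824
approx_params = size_bytes / 4
print(f"~{approx_params / 1e6:.1f}M parameters")  # → ~124.7M parameters
```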
runs/Nov02_22-18-36_8737697ea2eb/events.out.tfevents.1698963580.8737697ea2eb.401.0 CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:1ceddc86a0d2decf9b6cf2c5ff0bf46bba6d49a7cccdeb9e2b779d3cfe5e687b
-size 8474
+oid sha256:aa90fa315c5ffe1556265c8b5d3123916dbb3fe61aa33d5e399dc6535db9d414
+size 9936
runs/Nov02_22-18-36_8737697ea2eb/events.out.tfevents.1698974520.8737697ea2eb.401.1 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b74c8e583e0cb85071f32bec0f5954c28bc8a51d0fbd71b6b0a7fb5fcfdc3d02
+size 411