stonedsmv committed
Commit bfa3b29 · verified · 1 Parent(s): dead3f9

End of training

README.md CHANGED
@@ -1,77 +1,67 @@
- ---
- license: mit
- base_model: gpt2
- tags:
- - generated_from_trainer
- metrics:
- - accuracy
- model-index:
- - name: GPT-2-Base
-   results: []
- datasets:
- - Davlan/sib200
- language:
- - lt
- - en
- pipeline_tag: text-classification
- ---
-
- <!-- This model card has been generated automatically according to the information the Trainer had access to. You
- should probably proofread and complete it, then remove this comment. -->
-
- # GPT-2-Base
-
- This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
- It achieves the following results on the evaluation set:
- - Loss: 1.1649
- - Accuracy: 0.6220
-
- ## Model description
-
- More information needed
-
- ## Intended uses & limitations
-
- More information needed
-
- ## Training and evaluation data
-
- More information needed
-
- ## Training procedure
-
- ### Training hyperparameters
-
- The following hyperparameters were used during training:
- - learning_rate: 5e-05
- - train_batch_size: 16
- - eval_batch_size: 16
- - seed: 42
- - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- - lr_scheduler_type: linear
- - num_epochs: 12
-
- ### Training results
-
- | Training Loss | Epoch | Step | Validation Loss | Accuracy |
- |:-------------:|:-----:|:----:|:---------------:|:--------:|
- | No log | 1.0 | 13 | 1.4248 | 0.4512 |
- | No log | 2.0 | 26 | 1.1649 | 0.6220 |
- | No log | 3.0 | 39 | 1.8808 | 0.2317 |
- | No log | 4.0 | 52 | 1.5877 | 0.4146 |
- | No log | 5.0 | 65 | 1.3365 | 0.5610 |
- | No log | 6.0 | 78 | 1.3562 | 0.3780 |
- | No log | 7.0 | 91 | 1.1908 | 0.4634 |
- | No log | 8.0 | 104 | 1.3376 | 0.5 |
- | No log | 9.0 | 117 | 1.5376 | 0.3659 |
- | No log | 10.0 | 130 | 1.4232 | 0.3780 |
- | No log | 11.0 | 143 | 2.3671 | 0.2195 |
- | No log | 12.0 | 156 | 1.6645 | 0.3293 |
-
-
- ### Framework versions
-
- - Transformers 4.44.0
- - Pytorch 2.4.0+cu124
- - Datasets 2.20.0
- - Tokenizers 0.19.1
 
+ ---
+ license: mit
+ base_model: gpt2
+ tags:
+ - generated_from_trainer
+ metrics:
+ - accuracy
+ model-index:
+ - name: GPT-2-Base
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # GPT-2-Base
+
+ This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 1.4397
+ - Accuracy: 0.5
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 5e-05
+ - train_batch_size: 16
+ - eval_batch_size: 16
+ - seed: 42
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - num_epochs: 8
+
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | Accuracy |
+ |:-------------:|:-----:|:----:|:---------------:|:--------:|
+ | No log | 1.0 | 44 | 1.8580 | 0.2598 |
+ | No log | 2.0 | 88 | 1.7766 | 0.2745 |
+ | No log | 3.0 | 132 | 1.6079 | 0.4069 |
+ | No log | 4.0 | 176 | 1.4671 | 0.5098 |
+ | No log | 5.0 | 220 | 1.5152 | 0.4804 |
+ | No log | 6.0 | 264 | 1.4397 | 0.5 |
+ | No log | 7.0 | 308 | 1.4590 | 0.5098 |
+ | No log | 8.0 | 352 | 1.5410 | 0.5049 |
+
+
+ ### Framework versions
+
+ - Transformers 4.44.0
+ - Pytorch 2.4.0+cu124
+ - Datasets 2.21.0
+ - Tokenizers 0.19.1
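The hyperparameters in the updated card correspond to a standard `transformers` Trainer run. Below is a minimal sketch of that setup, assuming 7 labels (the previous card pointed at Davlan/sib200, a 7-topic dataset) and an output directory name; neither assumption is stated in this commit.

```python
# Minimal sketch of the Trainer configuration implied by the card above.
# Assumptions: num_labels=7 and the output_dir name are not confirmed by this commit.
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 ships without a pad token

model = AutoModelForSequenceClassification.from_pretrained("gpt2", num_labels=7)
model.config.pad_token_id = tokenizer.pad_token_id

args = TrainingArguments(
    output_dir="GPT-2-Base",           # assumed
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=8,
    eval_strategy="epoch",             # matches the per-epoch results table
)

# trainer = Trainer(model=model, args=args,
#                   train_dataset=train_ds, eval_dataset=eval_ds,
#                   tokenizer=tokenizer)
# trainer.train()
```

As a side note, 44 steps per epoch at batch size 16 implies roughly 700 training examples, and the "No log" entries in the training-loss column are most likely because the run's 352 total steps never reached the Trainer's default logging interval of 500 steps.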
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:eacb5f105dbd9c30532313faf6be0998fa3535a9a381295f39aece0bd2f7554d
  size 497795792

  version https://git-lfs.github.com/spec/v1
+ oid sha256:4ea126ae60cca6cca9d61148d21a321e21d9812a16bf3caf0ff82c4610bd5d17
  size 497795792
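The `model.safetensors` change only swaps the Git LFS pointer; per the LFS v1 spec, the `oid` is the SHA-256 of the file contents, so a downloaded checkpoint can be checked against it. A minimal sketch, with the local path assumed:

```python
# Sketch: verify a downloaded weights file against the sha256 oid recorded in
# its Git LFS pointer. The local file path is an assumption.
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

expected = "4ea126ae60cca6cca9d61148d21a321e21d9812a16bf3caf0ff82c4610bd5d17"
print(sha256_of("model.safetensors") == expected)  # True for the new revision
```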
runs/Aug21_18-02-45_DESKTOP-7VL4NRO/events.out.tfevents.1724252567.DESKTOP-7VL4NRO.17828.0 CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:b3ea7eef695f1f66a712904ffdb76456eb9ff8fd2344f842a056c503285a7b38
- size 5823

  version https://git-lfs.github.com/spec/v1
+ oid sha256:3268f0bbd7369a206e696f304dac8786a0a6b7fd22098a69944b806e2145a280
+ size 8432
runs/Aug21_18-02-45_DESKTOP-7VL4NRO/events.out.tfevents.1724255796.DESKTOP-7VL4NRO.17828.1 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0770cb261efc1e1361397a87e30f7f58680190802aa28e616ed5ec547611614b
+ size 411
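The added `events.out.tfevents` files hold the scalars behind the results table. A minimal sketch of pulling them out with TensorBoard's event reader; the tag names are the ones the Trainer usually writes and are an assumption, not confirmed by this commit.

```python
# Sketch: read the scalars logged in the run directory added above.
# Tag names such as "eval/loss" and "eval/accuracy" are assumptions.
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

acc = EventAccumulator("runs/Aug21_18-02-45_DESKTOP-7VL4NRO")
acc.Reload()

for tag in acc.Tags()["scalars"]:
    for event in acc.Scalars(tag):
        print(f"{tag}\tstep={event.step}\tvalue={event.value:.4f}")
```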