Farabzadeh committed on
Commit fa4af98 · verified · 1 Parent(s): 091356c

End of training

README.md CHANGED
@@ -6,8 +6,6 @@ tags:
 model-index:
 - name: qa-bert-base-multilingual-uncased
   results: []
-datasets:
-- SajjadAyoubi/persian_qa
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -15,9 +13,9 @@ should probably proofread and complete it, then remove this comment. -->
 
 # qa-bert-base-multilingual-uncased
 
-This model is a fine-tuned version of [google-bert/bert-base-multilingual-uncased](https://huggingface.co/google-bert/bert-base-multilingual-uncased) on https://huggingface.co/datasets/SajjadAyoubi/persian_qa dataset.
+This model is a fine-tuned version of [google-bert/bert-base-multilingual-uncased](https://huggingface.co/google-bert/bert-base-multilingual-uncased) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Loss: 1.7871
+- Loss: 1.7136
 
 ## Model description
 
@@ -36,21 +34,23 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- learning_rate: 3e-05
-- train_batch_size: 8
-- eval_batch_size: 8
+- learning_rate: 1e-05
+- train_batch_size: 16
+- eval_batch_size: 16
 - seed: 42
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
-- num_epochs: 3
+- num_epochs: 5
 
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss |
 |:-------------:|:-----:|:----:|:---------------:|
-| 2.2437 | 1.0 | 1139 | 1.7038 |
-| 1.4946 | 2.0 | 2278 | 1.6015 |
-| 0.9703 | 3.0 | 3417 | 1.7871 |
+| 3.0763 | 1.0 | 570 | 1.8933 |
+| 2.0611 | 2.0 | 1140 | 1.6730 |
+| 1.7286 | 3.0 | 1710 | 1.6859 |
+| 1.5198 | 4.0 | 2280 | 1.6814 |
+| 1.3609 | 5.0 | 2850 | 1.7136 |
 
 
 ### Framework versions
@@ -58,4 +58,4 @@ The following hyperparameters were used during training:
 - Transformers 4.42.4
 - Pytorch 2.3.1+cu121
 - Datasets 2.21.0
-- Tokenizers 0.19.1
+- Tokenizers 0.19.1
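The hyperparameters listed in the updated card map directly onto the `transformers` `TrainingArguments` API. Below is a minimal sketch of how such a run could be configured; the output directory, evaluation strategy, and the omitted dataset preparation are assumptions for illustration, not details taken from this commit.

```python
from transformers import (
    AutoModelForQuestionAnswering,
    AutoTokenizer,
    TrainingArguments,
)

# Base checkpoint named in the model card.
base_checkpoint = "google-bert/bert-base-multilingual-uncased"
tokenizer = AutoTokenizer.from_pretrained(base_checkpoint)
model = AutoModelForQuestionAnswering.from_pretrained(base_checkpoint)

# Hyperparameters as reported in the card; output_dir and eval_strategy
# are illustrative assumptions.
training_args = TrainingArguments(
    output_dir="qa-bert-base-multilingual-uncased",
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=5,
    lr_scheduler_type="linear",
    seed=42,
    eval_strategy="epoch",  # the card reports one validation loss per epoch
)

# A Trainer would then be built from these arguments plus tokenized QA
# train/eval datasets (not shown here) and launched with trainer.train().
```

The Adam settings listed in the card (betas=(0.9, 0.999), epsilon=1e-08) are the Trainer defaults, so they need no explicit arguments.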
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:7cce55dc6dad6897153a214d72a5bca2dae99722061837f19eba75f349ad28d7
+oid sha256:aa1998768ce12902f424504469ac7964c6b4bbb5446a5004a9cb5f64c4d3e8b1
 size 667092808
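The new `model.safetensors` blob carries the retrained weights described by the card above. A minimal inference sketch with the `transformers` question-answering pipeline follows; the repo id is inferred from the committer name and model name (not stated in the diff), and the inputs are purely illustrative.

```python
from transformers import pipeline

# Repo id is an assumption based on the committer and the model name in the card.
qa = pipeline(
    "question-answering",
    model="Farabzadeh/qa-bert-base-multilingual-uncased",
)

# Illustrative inputs; any question/context pair in a language covered by
# multilingual BERT should behave similarly.
result = qa(
    question="Where is the Eiffel Tower located?",
    context="The Eiffel Tower is a wrought-iron lattice tower in Paris, France.",
)
print(result)  # e.g. {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```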
runs/Aug16_16-10-16_50bebba3fbc0/events.out.tfevents.1723824662.50bebba3fbc0.1317.0 CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:4fc6d93687da38922ca5aad8a567bf2acbf1ed8be53fda64b2b2c32820d462f5
-size 7469
+oid sha256:ba332d724563f1c93adaa92c24e04be09c3ff02828c78d2b4cce447eb852f40e
+size 7823
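Both binary files in this commit are stored as Git LFS pointers, so the diffs above only show pointer metadata: the `version` line, the object's SHA-256 `oid`, and its byte `size`. A small sketch for checking a downloaded blob against its pointer, using the values from the new `model.safetensors` pointer and a placeholder local path:

```python
import hashlib
from pathlib import Path


def verify_lfs_object(path: str, expected_oid: str, expected_size: int) -> bool:
    """Return True if the local file matches the sha256 oid and byte size of an LFS pointer."""
    data = Path(path).read_bytes()
    return (
        hashlib.sha256(data).hexdigest() == expected_oid
        and len(data) == expected_size
    )


# oid and size come from the new model.safetensors pointer above;
# the local path is a placeholder.
print(verify_lfs_object(
    "model.safetensors",
    expected_oid="aa1998768ce12902f424504469ac7964c6b4bbb5446a5004a9cb5f64c4d3e8b1",
    expected_size=667092808,
))
```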