jin-cheon committed (verified)
Commit 213f476 · 1 Parent(s): 9b4b859

Training complete

README.md CHANGED
@@ -1,4 +1,5 @@
  ---
+ library_name: transformers
  license: apache-2.0
  base_model: Helsinki-NLP/opus-mt-en-fr
  tags:
@@ -23,7 +24,7 @@ model-index:
  metrics:
  - name: Bleu
    type: bleu
- value: 52.91210143343284
+ value: 52.95760336320957
  ---

  <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -33,8 +34,9 @@ should probably proofread and complete it, then remove this comment. -->

  This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset.
  It achieves the following results on the evaluation set:
- - Loss: 0.8554
- - Bleu: 52.9121
+ - Loss: 0.8556
+ - Model Preparation Time: 0.0015
+ - Bleu: 52.9576

  ## Model description

@@ -57,7 +59,7 @@ The following hyperparameters were used during training:
  - train_batch_size: 32
  - eval_batch_size: 64
  - seed: 42
- - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
  - lr_scheduler_type: linear
  - num_epochs: 3
  - mixed_precision_training: Native AMP
@@ -68,7 +70,7 @@ The following hyperparameters were used during training:

  ### Framework versions

- - Transformers 4.40.2
- - Pytorch 2.2.1+cu121
- - Datasets 2.19.1
- - Tokenizers 0.19.1
+ - Transformers 4.46.2
+ - Pytorch 2.1.0
+ - Datasets 3.1.0
+ - Tokenizers 0.20.3
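The hyperparameters listed in the card map onto `Seq2SeqTrainingArguments` roughly as sketched below. This is an illustrative reconstruction, not the author's training script: the output directory, learning rate, and `predict_with_generate` flag are not shown in this diff and are assumptions.

```python
# Illustrative sketch only: reconstructs the hyperparameters listed in the card.
# output_dir, learning_rate, and predict_with_generate are NOT shown in the diff.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="marian-finetuned-kde4-en-to-fr",  # hypothetical directory name
    per_device_train_batch_size=32,   # train_batch_size: 32
    per_device_eval_batch_size=64,    # eval_batch_size: 64
    seed=42,
    optim="adamw_torch",              # OptimizerNames.ADAMW_TORCH
    lr_scheduler_type="linear",
    num_train_epochs=3,
    fp16=True,                        # mixed_precision_training: Native AMP
    predict_with_generate=True,       # assumed, so evaluation can compute BLEU
)
```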
generation_config.json CHANGED
@@ -12,5 +12,5 @@
  "num_beams": 4,
  "pad_token_id": 59513,
  "renormalize_logits": true,
- "transformers_version": "4.40.2"
+ "transformers_version": "4.46.2"
  }
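Aside from the bumped `transformers_version`, generation_config.json carries the model's default decoding settings (4-beam search with renormalized logits). Below is a minimal sketch of how those defaults are picked up at inference time; the repo id is a hypothetical placeholder, not something stated in this commit.

```python
# Sketch only: the repo id is a hypothetical placeholder.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, GenerationConfig

repo_id = "jin-cheon/marian-finetuned-kde4-en-to-fr"  # hypothetical
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSeq2SeqLM.from_pretrained(repo_id)

# generation_config.json is loaded with the model, but can also be read explicitly:
gen_config = GenerationConfig.from_pretrained(repo_id)
print(gen_config.num_beams, gen_config.renormalize_logits)  # 4 True

inputs = tokenizer("Default to expanded threads", return_tensors="pt")
outputs = model.generate(**inputs, generation_config=gen_config)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```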
runs/Feb10_19-41-04_AfterShock/events.out.tfevents.1739190256.AfterShock.721286.1 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fbad59fd0dcc8cb901cb4aab7eae5579b0fae16d9acfd1830150ccfd53f0f79e
+ size 480
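The added events file is stored as a Git LFS pointer (sha256 and size only). Once the real file is pulled locally, the logged scalars can be read back with TensorBoard's event accumulator; this is a sketch under that assumption, with no claim about which tags were actually logged.

```python
# Sketch only: assumes the tfevents file has been fetched (e.g. via `git lfs pull`)
# into the runs/ path shown above.
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

ea = EventAccumulator(
    "runs/Feb10_19-41-04_AfterShock/events.out.tfevents.1739190256.AfterShock.721286.1"
)
ea.Reload()  # parse the event file

for tag in ea.Tags()["scalars"]:      # tag names depend on what the Trainer logged
    last = ea.Scalars(tag)[-1]
    print(tag, last.step, last.value)
```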