Krisbiantoro committed
Commit 9f6814a · verified · 1 Parent(s): e101746

Model save

Files changed (1): README.md (+16 -7)
README.md CHANGED
@@ -20,7 +20,7 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the generator dataset.
 It achieves the following results on the evaluation set:
- - Loss: 0.8933
+ - Loss: 0.9585
 
 ## Model description
 
@@ -40,23 +40,32 @@ More information needed
 
 The following hyperparameters were used during training:
 - learning_rate: 0.0001
- - train_batch_size: 2
- - eval_batch_size: 1
+ - train_batch_size: 4
+ - eval_batch_size: 2
 - seed: 42
 - gradient_accumulation_steps: 32
- - total_train_batch_size: 64
+ - total_train_batch_size: 128
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: cosine
 - lr_scheduler_warmup_ratio: 0.03
- - num_epochs: 1
+ - num_epochs: 2
 - mixed_precision_training: Native AMP
 
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss |
 |:-------------:|:-----:|:----:|:---------------:|
- | 0.9453 | 0.38 | 50 | 0.9288 |
- | 0.8869 | 0.76 | 100 | 0.8933 |
+ | 1.1281 | 0.18 | 20 | 1.0894 |
+ | 1.0534 | 0.36 | 40 | 1.0328 |
+ | 1.0235 | 0.54 | 60 | 1.0056 |
+ | 1.0012 | 0.72 | 80 | 0.9886 |
+ | 0.9931 | 0.9 | 100 | 0.9764 |
+ | 0.9241 | 1.08 | 120 | 0.9711 |
+ | 0.8974 | 1.26 | 140 | 0.9663 |
+ | 0.8971 | 1.44 | 160 | 0.9624 |
+ | 0.8978 | 1.62 | 180 | 0.9598 |
+ | 0.8786 | 1.8 | 200 | 0.9588 |
+ | 0.8886 | 1.98 | 220 | 0.9585 |
 
 
 ### Framework versions
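
The updated `total_train_batch_size` is consistent with the other hyperparameters in the diff: it is the per-device batch size multiplied by the gradient accumulation steps (and the device count, which the card does not state; a single device is assumed here). A minimal arithmetic sketch:

```python
# Effective train batch size implied by the updated hyperparameters.
train_batch_size = 4             # per-device batch size, from the diff
gradient_accumulation_steps = 32  # from the diff
num_devices = 1                   # assumption; device count is not stated in the card

total_train_batch_size = train_batch_size * gradient_accumulation_steps * num_devices
print(total_train_batch_size)     # prints 128, matching the diff

# Cross-check against the results table: reaching epoch 1.98 at step 220
# implies roughly 220 / 1.98 ~ 111 optimizer steps per epoch.
steps_per_epoch = 220 / 1.98
print(round(steps_per_epoch))     # prints 111
```

The same check against the old values (2 × 32 × 1 = 64) also matches, so the batch-size change, not a device change, accounts for the new total.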