gsmyrnis committed
Commit 832378e · verified · 1 Parent(s): 8210d72

Model save

Files changed (2)
  1. README.md +9 -15
  2. generation_config.json +10 -5
README.md CHANGED
@@ -1,10 +1,9 @@
  ---
  library_name: transformers
- license: llama3.1
- base_model: meta-llama/Meta-Llama-3.1-8B
+ license: apache-2.0
+ base_model: Qwen/Qwen2.5-7B-Instruct
  tags:
  - llama-factory
- - full
  - generated_from_trainer
  model-index:
  - name: llama3-1_8b_4o_annotated_olympiads
@@ -16,9 +15,7 @@ should probably proofread and complete it, then remove this comment. -->

  # llama3-1_8b_4o_annotated_olympiads

- This model is a fine-tuned version of [meta-llama/Meta-Llama-3.1-8B](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B) on the mlfoundations-dev/4o_annotated_olympiads dataset.
- It achieves the following results on the evaluation set:
- - Loss: 0.7641
+ This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on an unknown dataset.

  ## Model description

@@ -37,25 +34,22 @@ More information needed
  ### Training hyperparameters

  The following hyperparameters were used during training:
- - learning_rate: 5e-06
- - train_batch_size: 16
+ - learning_rate: 1e-05
+ - train_batch_size: 1
  - eval_batch_size: 8
  - seed: 42
  - distributed_type: multi-GPU
  - num_devices: 32
- - total_train_batch_size: 512
+ - gradient_accumulation_steps: 3
+ - total_train_batch_size: 96
  - total_eval_batch_size: 256
  - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- - lr_scheduler_type: constant
+ - lr_scheduler_type: cosine
+ - lr_scheduler_warmup_ratio: 0.1
  - num_epochs: 3.0

  ### Training results

- | Training Loss | Epoch | Step | Validation Loss |
- |:-------------:|:-----:|:----:|:---------------:|
- | 0.8464 | 1.0 | 25 | 0.8046 |
- | 0.7628 | 2.0 | 50 | 0.7725 |
- | 0.7237 | 3.0 | 75 | 0.7641 |


  ### Framework versions
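As a sanity check on the new hyperparameters (not part of the commit): with the transformers Trainer, the reported total train batch size is the per-device batch size times the gradient accumulation steps times the number of devices. A minimal sketch of that arithmetic, using the committed values:

```python
# Illustrative only: how transformers' Trainer derives the effective batch
# size reported as total_train_batch_size in the model card.
per_device_train_batch_size = 1  # "train_batch_size" in the card
gradient_accumulation_steps = 3
num_devices = 32                 # world size of the multi-GPU run

total_train_batch_size = (
    per_device_train_batch_size * gradient_accumulation_steps * num_devices
)
print(total_train_batch_size)  # 96, matching "+ total_train_batch_size: 96"
```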
generation_config.json CHANGED
@@ -1,9 +1,14 @@
  {
-   "_from_model_config": true,
-   "bos_token_id": 128000,
+   "bos_token_id": 151643,
    "do_sample": true,
-   "eos_token_id": 128001,
-   "temperature": 0.6,
-   "top_p": 0.9,
+   "eos_token_id": [
+     151645,
+     151643
+   ],
+   "pad_token_id": 151643,
+   "repetition_penalty": 1.05,
+   "temperature": 0.7,
+   "top_k": 20,
+   "top_p": 0.8,
    "transformers_version": "4.46.1"
  }
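The new token ids line up with the Qwen2.5 vocabulary (151643 is `<|endoftext|>`, 151645 is `<|im_end|>`), consistent with the base_model change in README.md. A minimal usage sketch follows; the repo id is an assumption based on the model-index name, and `generate()` picks up the committed sampling defaults from generation_config.json automatically:

```python
# Minimal sketch (not from the commit). The repo id below is a guess from the
# model-index name; replace it with the real checkpoint path.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mlfoundations-dev/llama3-1_8b_4o_annotated_olympiads"  # assumed
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# generate() reads generation_config.json by default, so the committed values
# (do_sample=True, temperature=0.7, top_k=20, top_p=0.8,
# repetition_penalty=1.05, eos_token_id=[151645, 151643]) apply here.
prompt = "Prove that the sum of two consecutive odd integers is divisible by 4."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```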