Aivesa committed
Commit ee0d6fd · verified · 1 parent: 89ce437

End of training

Files changed (1)
1. README.md (+17 -19)
README.md CHANGED
@@ -1,14 +1,14 @@
 ---
 library_name: peft
-license: apache-2.0
-base_model: JackFram/llama-68m
+license: other
+base_model: facebook/opt-1.3b
 tags:
 - axolotl
 - generated_from_trainer
 datasets:
-- Aivesa/dataset_018784f0-a53d-4511-af7a-e966236fc582
+- Aivesa/dataset_46dd44b3-e4e4-4c44-8605-e8e8b6dd956e
 model-index:
-- name: 1ecfbdde-71a6-46a4-8ea6-5fb1b3315696
+- name: a05b476c-463b-4946-89c4-d93aca81d7e7
   results: []
 ---

@@ -21,18 +21,18 @@ should probably proofread and complete it, then remove this comment. -->
 axolotl version: `0.6.0`
 ```yaml
 adapter: lora
-base_model: JackFram/llama-68m
+base_model: facebook/opt-1.3b
 bf16: auto
 chat_template: llama3
 dataset_prepared_path: /workspace/axolotl/data/prepared
 datasets:
 - ds_type: json
   format: custom
-  path: Aivesa/dataset_018784f0-a53d-4511-af7a-e966236fc582
+  path: Aivesa/dataset_46dd44b3-e4e4-4c44-8605-e8e8b6dd956e
   type:
     field_input: context
-    field_instruction: source
-    field_output: reference_original
+    field_instruction: question
+    field_output: answers
     system_format: '{system}'
     system_prompt: ''
 debug: null
@@ -48,7 +48,7 @@ fsdp_config: null
 gradient_accumulation_steps: 4
 gradient_checkpointing: false
 group_by_length: false
-hub_model_id: Aivesa/1ecfbdde-71a6-46a4-8ea6-5fb1b3315696
+hub_model_id: Aivesa/a05b476c-463b-4946-89c4-d93aca81d7e7
 hub_private_repo: true
 hub_repo: null
 hub_strategy: checkpoint
@@ -79,8 +79,6 @@ sample_packing: false
 save_safetensors: true
 saves_per_epoch: 4
 sequence_len: 512
-special_tokens:
-  pad_token: </s>
 strict: false
 tf32: false
 tokenizer_type: AutoTokenizer
@@ -90,10 +88,10 @@ use_accelerate: true
 val_set_size: 0.05
 wandb_entity: null
 wandb_mode: online
-wandb_name: 018784f0-a53d-4511-af7a-e966236fc582
+wandb_name: 46dd44b3-e4e4-4c44-8605-e8e8b6dd956e
 wandb_project: Gradients-On-Demand
 wandb_run: your_name
-wandb_runid: 018784f0-a53d-4511-af7a-e966236fc582
+wandb_runid: 46dd44b3-e4e4-4c44-8605-e8e8b6dd956e
 warmup_steps: 10
 weight_decay: 0.0
 xformers_attention: null
@@ -102,11 +100,11 @@ xformers_attention: null
 
 </details><br>
 
-# 1ecfbdde-71a6-46a4-8ea6-5fb1b3315696
+# a05b476c-463b-4946-89c4-d93aca81d7e7
 
-This model is a fine-tuned version of [JackFram/llama-68m](https://huggingface.co/JackFram/llama-68m) on the Aivesa/dataset_018784f0-a53d-4511-af7a-e966236fc582 dataset.
+This model is a fine-tuned version of [facebook/opt-1.3b](https://huggingface.co/facebook/opt-1.3b) on the Aivesa/dataset_46dd44b3-e4e4-4c44-8605-e8e8b6dd956e dataset.
 It achieves the following results on the evaluation set:
-- Loss: 3.0933
+- Loss: 2.6826
 
 ## Model description
 
@@ -140,9 +138,9 @@ The following hyperparameters were used during training:
 
 | Training Loss | Epoch  | Step | Validation Loss |
 |:-------------:|:------:|:----:|:---------------:|
-| 3.1687        | 0.0149 | 3    | 3.1479          |
-| 2.8454        | 0.0299 | 6    | 3.1350          |
-| 3.1911        | 0.0448 | 9    | 3.0933          |
+| 15.8191       | 0.0003 | 3    | 4.1197          |
+| 17.6309       | 0.0005 | 6    | 3.6521          |
+| 11.7024       | 0.0008 | 9    | 2.6826          |
 
 
 ### Framework versions
 
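A note on the new dataset wiring: the `type:` mapping in the config above tells axolotl which JSON keys to read from each record of Aivesa/dataset_46dd44b3-e4e4-4c44-8605-e8e8b6dd956e. A record would therefore be shaped roughly as below; the field names come from the config, while the values are invented for illustration.

```python
# Hypothetical record shape for the dataset. The keys follow the config's
# field_instruction/field_input/field_output mapping; the values are made up.
record = {
    "question": "When was the dam completed?",          # field_instruction
    "context": "Construction ran from 1931 to 1936.",   # field_input
    "answers": "1936",                                   # field_output
}
```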
 
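On the reported metric: if the final validation loss of 2.6826 is mean per-token cross-entropy in nats (the usual convention for `transformers` Trainer logs, though the card does not say so explicitly), it corresponds to a perplexity of roughly 14.6:

```python
import math

# Assumes the reported loss is mean cross-entropy per token in nats.
eval_loss = 2.6826
print(math.exp(eval_loss))  # ~14.6
```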
 
 
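To try the trained adapter, a minimal inference sketch with `transformers` and `peft` follows. It assumes you have access to the adapter repo (the config sets `hub_private_repo: true`) and uses a plain instruction-style prompt; the prompt format during training was driven by the `chat_template: llama3` setting, so other formats may behave differently.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the OPT-1.3b base model and tokenizer.
base = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-1.3b", torch_dtype=torch.bfloat16
)
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-1.3b")

# Attach the LoRA adapter produced by this run.
model = PeftModel.from_pretrained(base, "Aivesa/a05b476c-463b-4946-89c4-d93aca81d7e7")
model.eval()

# Illustrative prompt mirroring the question/context/answers dataset fields.
prompt = (
    "Question: When was the dam completed?\n"
    "Context: Construction ran from 1931 to 1936.\n"
    "Answer:"
)
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```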