tamewild committed
Commit b80fdb6 · verified · 1 Parent(s): 2952746

End of training
Files changed (2)
  1. README.md +160 -0
  2. generation_config.json +8 -0
README.md ADDED
@@ -0,0 +1,160 @@
+ ---
+ library_name: transformers
+ license: mit
+ base_model: unsloth/phi-4
+ tags:
+ - axolotl
+ - generated_from_trainer
+ datasets:
+ - tamewild/y1_sft_split
+ model-index:
+ - name: 14b_v1_fft
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
+ <details><summary>See axolotl config</summary>
+
+ axolotl version: `0.7.0`
+ ```yaml
+ base_model: unsloth/phi-4
+
+ load_in_8bit: false
+ load_in_4bit: false
+
+ bf16: auto
+ fp16:
+ tf32: false
+
+ datasets:
+   - path: tamewild/y1_sft_split
+     split: train
+     type: chat_template
+     field_messages: conversation
+
+ shuffle_merged_datasets: true
+
+ test_datasets:
+   - path: tamewild/y1_sft_split
+     split: validation
+     type: chat_template
+     field_messages: conversation
+
+ dataset_prepared_path: workspace/dataset_prepared
+
+ hub_model_id: tamewild/14b_v1_fft
+
+ hf_use_auth_token: true
+
+ sequence_len: 9000
+ pad_to_sequence_len: true
+ sample_packing: true
+ eval_sample_packing: true # disable if we get errors
+
+ # wandb configuration if you're using it
+ # Make sure your `WANDB_API_KEY` environment variable is set (recommended) or you login to wandb with `wandb login`.
+ wandb_mode: # "offline" to save run metadata locally and not sync to the server, "disabled" to turn off wandb
+ wandb_project: axolotl # Your wandb project name
+ wandb_entity: # A wandb Team name if using a Team
+ wandb_watch:
+ wandb_name: # Set the name of your wandb run
+ wandb_run_id: # Set the ID of your wandb run
+ wandb_log_model: # "checkpoint" to log model to wandb Artifacts every `save_steps` or "end" to log only at the end of training
+
+ output_dir: /workspace/tuned
+
+ torch_compile: auto
+
+ gradient_accumulation_steps: 2
+ micro_batch_size: 8
+ eval_batch_size: 8
+ num_epochs: 1
+ warmup_ratio: 0.01
+ learning_rate: 4.5e-5
+ logging_steps: 1
+ eval_steps: # Leave empty to eval at each epoch, integer for every N steps, float for fraction of total steps
+ evals_per_epoch: 4 # number of times per epoch to run evals, mutually exclusive with eval_steps
+ save_steps: # Leave empty to save at each epoch, integer for every N steps, float for fraction of total steps
+ saves_per_epoch: 4 # number of times per epoch to save a checkpoint, mutually exclusive with save_steps
+ save_total_limit: 1 # number of checkpoints kept at a time
+
+ include_tokens_per_second: true
+
+ train_on_inputs: false
+ group_by_length: false
+
+ gradient_checkpointing: true
+ gradient_checkpointing_kwargs:
+   use_reentrant: false
+
+ lr_scheduler: cosine
+ lr_scheduler_kwargs:
+   cosine_min_lr_ratio: 0.025
+
+ optimizer: paged_adamw_8bit
+
+ weight_decay: 0.01
+
+ xformers_attention:
+ flash_attention: true
+
+ seed: 1234
+
+ strict: false
+ ```
+
+ </details><br>
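The `type: chat_template` loader with `field_messages: conversation` expects each dataset row to carry its turns as a list of role/content messages. A minimal sketch of the assumed row shape; the actual field contents of tamewild/y1_sft_split are not shown in this card, so the example row below is purely hypothetical:

```python
# Hypothetical row from tamewild/y1_sft_split, assuming the standard
# role/content message layout that axolotl's chat_template loader consumes.
row = {
    "conversation": [
        {"role": "user", "content": "Summarize the plot of Hamlet in two sentences."},
        {"role": "assistant", "content": "Prince Hamlet feigns madness while plotting revenge..."},
    ]
}
```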
+
+ # 14b_v1_fft
+
+ This model is a fine-tuned version of [unsloth/phi-4](https://huggingface.co/unsloth/phi-4) on the tamewild/y1_sft_split dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.3468
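As a quick sanity check, the checkpoint can be loaded through the standard `transformers` API. A minimal inference sketch; the prompt is illustrative, and the chat template plus generation defaults come from the files shipped with the model:

```python
# Minimal inference sketch for tamewild/14b_v1_fft (assumes a GPU with
# enough memory to hold a 14B model in bf16).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tamewild/14b_v1_fft"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Explain sample packing in one paragraph."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# do_sample=True and the special token ids are picked up automatically from
# the model's generation_config.json (shown further down in this commit).
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```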
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training (a sketch of the implied schedule follows the list):
+ - learning_rate: 4.5e-05
+ - train_batch_size: 8
+ - eval_batch_size: 8
+ - seed: 1234
+ - gradient_accumulation_steps: 2
+ - total_train_batch_size: 16
+ - optimizer: paged_adamw_8bit with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
+ - lr_scheduler_type: cosine
+ - lr_scheduler_warmup_steps: 3
+ - num_epochs: 1.0
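The effective batch size and warmup length follow directly from the config above; the cosine floor comes from `cosine_min_lr_ratio: 0.025`. A back-of-the-envelope sketch, with step counts inferred from the results table (so approximate) and the warmup/decay shape assumed rather than taken from the trainer source:

```python
import math

# Effective batch size: per-device micro batches times accumulation steps.
micro_batch_size = 8
gradient_accumulation_steps = 2
total_train_batch_size = micro_batch_size * gradient_accumulation_steps  # 16

# Step 231 lands at epoch 0.7512 in the results table, so the full epoch is
# roughly 307 optimizer steps; warmup_ratio 0.01 then truncates to the
# reported 3 warmup steps.
total_steps = round(231 / 0.7512)       # ~307
warmup_steps = int(0.01 * total_steps)  # 3, matching lr_scheduler_warmup_steps

peak_lr = 4.5e-5
min_lr = peak_lr * 0.025                # cosine_min_lr_ratio floor: 1.125e-6

def lr_at(step: int) -> float:
    """Assumed shape: linear warmup, then cosine decay to the configured floor."""
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return min_lr + 0.5 * (peak_lr - min_lr) * (1 + math.cos(math.pi * progress))
```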
+
+ ### Training results
+
+ | Training Loss | Epoch  | Step | Validation Loss |
+ |:-------------:|:------:|:----:|:---------------:|
+ | 0.7528        | 0.0033 | 1    | 0.7171          |
+ | 0.3488        | 0.2504 | 77   | 0.3663          |
+ | 0.3704        | 0.5008 | 154  | 0.3525          |
+ | 0.3338        | 0.7512 | 231  | 0.3468          |
+
+ ### Framework versions
+
+ - Transformers 4.48.3
+ - Pytorch 2.5.1+gitf929e0d
+ - Datasets 3.2.0
+ - Tokenizers 0.21.0
generation_config.json ADDED
@@ -0,0 +1,8 @@
+ {
+   "_from_model_config": true,
+   "bos_token_id": 100257,
+   "do_sample": true,
+   "eos_token_id": 100265,
+   "pad_token_id": 100351,
+   "transformers_version": "4.48.3"
+ }
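These defaults (sampling enabled, Phi-4's special token ids) ship with the checkpoint and are applied automatically by `model.generate`. A short sketch of inspecting and overriding them, assuming the hub id from the card above:

```python
from transformers import GenerationConfig

# Load the defaults stored in the checkpoint's generation_config.json.
gen_config = GenerationConfig.from_pretrained("tamewild/14b_v1_fft")
print(gen_config.do_sample, gen_config.eos_token_id)  # True 100265

# Per-call keyword arguments take precedence over the shipped defaults, e.g.:
# model.generate(inputs, generation_config=gen_config, temperature=0.7, top_p=0.9)
```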