SystemAdmin123 committed
Commit c55b331 · verified · 1 Parent(s): 6ca519e

End of training

Files changed (2):
  1. README.md +130 -0
  2. generation_config.json +8 -0
README.md ADDED
@@ -0,0 +1,130 @@
---
library_name: transformers
license: apache-2.0
base_model: echarlaix/tiny-random-PhiForCausalLM
tags:
- axolotl
- generated_from_trainer
datasets:
- argilla/databricks-dolly-15k-curated-en
model-index:
- name: tiny-random-PhiForCausalLM
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>

axolotl version: `0.6.0`
```yaml
base_model: echarlaix/tiny-random-PhiForCausalLM
batch_size: 128
bf16: true
chat_template: tokenizer_default_fallback_alpaca
datasets:
- format: custom
  path: argilla/databricks-dolly-15k-curated-en
  type:
    field_input: original-instruction
    field_instruction: original-instruction
    field_output: original-response
    format: '{instruction} {input}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
device_map: auto
eval_sample_packing: false
eval_steps: 200
flash_attention: true
gradient_checkpointing: true
group_by_length: true
hub_model_id: SystemAdmin123/tiny-random-PhiForCausalLM
hub_strategy: checkpoint
learning_rate: 0.0002
logging_steps: 10
lr_scheduler: cosine
max_steps: 10000
micro_batch_size: 32
model_type: AutoModelForCausalLM
num_epochs: 100
optimizer: adamw_bnb_8bit
output_dir: /root/.sn56/axolotl/tmp/tiny-random-PhiForCausalLM
pad_to_sequence_len: true
resize_token_embeddings_to_32x: false
sample_packing: true
save_steps: 200
save_total_limit: 1
sequence_len: 2048
special_tokens:
  pad_token: <|endoftext|>
tokenizer_type: GPTNeoXTokenizerFast
torch_dtype: bf16
training_args_kwargs:
  hub_private_repo: true
trust_remote_code: true
val_set_size: 0.1
wandb_entity: ''
wandb_mode: online
wandb_name: echarlaix/tiny-random-PhiForCausalLM-argilla/databricks-dolly-15k-curated-en
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: default
warmup_ratio: 0.05

```

</details><br>
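
With axolotl `0.6.0` installed, a config like the one above is typically launched with `accelerate launch -m axolotl.cli.train config.yaml`; checkpoints are then pushed to the repo named in `hub_model_id` on the schedule set by `save_steps` and `hub_strategy`.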

# tiny-random-PhiForCausalLM

This model is a fine-tuned version of [echarlaix/tiny-random-PhiForCausalLM](https://huggingface.co/echarlaix/tiny-random-PhiForCausalLM) on the argilla/databricks-dolly-15k-curated-en dataset.
It achieves the following results on the evaluation set:
- Loss: 6.3360
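
As a minimal usage sketch (not an official snippet from the author): it assumes you have access to the repo, since the config sets `hub_private_repo: true`, and that the standard `transformers` loading path suffices. Note that the base model is tiny and randomly initialized, so generations will be nonsense; this checkpoint is a training-pipeline test artifact rather than a usable assistant.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical usage sketch; repo id taken from hub_model_id in the config above.
model_id = "SystemAdmin123/tiny-random-PhiForCausalLM"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("What is Databricks Dolly?", return_tensors="pt")
# generation_config.json below sets do_sample=true, so generate() samples by default.
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```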

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a rough `TrainingArguments` analogue is sketched after this list):
- learning_rate: 0.0002
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- total_train_batch_size: 128
- total_eval_batch_size: 128
- optimizer: 8-bit AdamW (`adamw_bnb_8bit`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 200
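
For orientation only, here is an approximate `TrainingArguments` analogue of the list above. It is a sketch, not the exact object axolotl built (axolotl layers its own config on top), and `optim="adamw_bnb_8bit"` assumes `bitsandbytes` is installed:

```python
from transformers import TrainingArguments

# Approximate analogue of the hyperparameters above (sketch, not axolotl's
# actual internals). Paths and unset options are placeholders.
args = TrainingArguments(
    output_dir="outputs",            # placeholder path
    learning_rate=2e-4,
    per_device_train_batch_size=32,  # 32 per device x 4 GPUs = 128 total
    per_device_eval_batch_size=32,
    seed=42,
    max_steps=200,
    lr_scheduler_type="cosine",
    warmup_steps=10,
    optim="adamw_bnb_8bit",          # 8-bit AdamW via bitsandbytes
    bf16=True,
)
```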

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log        | 0.1   | 1    | 6.9373          |
| 6.3375        | 20.0  | 200  | 6.3360          |

### Framework versions

- Transformers 4.48.1
- PyTorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0
generation_config.json ADDED
@@ -0,0 +1,8 @@
{
  "_from_model_config": true,
  "bos_token_id": 0,
  "do_sample": true,
  "eos_token_id": 0,
  "pad_token_id": 0,
  "transformers_version": "4.48.1"
}
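
As a small sketch of how this file is consumed (assuming access to the repo; a local checkpoint directory works the same way):

```python
from transformers import GenerationConfig

# The JSON above deserializes into a GenerationConfig object.
gen_config = GenerationConfig.from_pretrained("SystemAdmin123/tiny-random-PhiForCausalLM")

print(gen_config.do_sample)     # True -> generate() samples by default
print(gen_config.pad_token_id)  # 0 -> the same token id serves as BOS, EOS, and PAD
```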