brettbbb committed
Commit 8608d8e · 1 Parent(s): c72883a

End of training
README.md ADDED
@@ -0,0 +1,54 @@
+ ---
+ license: llama2
+ base_model: lmsys/vicuna-7b-v1.5
+ tags:
+ - generated_from_trainer
+ model-index:
+ - name: arc_cot_256
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # arc_cot_256
+
+ This model is a fine-tuned version of [lmsys/vicuna-7b-v1.5](https://huggingface.co/lmsys/vicuna-7b-v1.5) on an unknown dataset.
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 0.0001
+ - train_batch_size: 4
+ - eval_batch_size: 8
+ - seed: 42
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - lr_scheduler_warmup_steps: 5
+ - num_epochs: 20
+ - mixed_precision_training: Native AMP
+
+ ### Training results
+
+
+
+ ### Framework versions
+
+ - Transformers 4.36.0.dev0
+ - Pytorch 2.1.0+cu121
+ - Datasets 2.13.1
+ - Tokenizers 0.14.1
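The hyperparameters listed in the card can be collected into a plain configuration dict. This is a minimal sketch, assuming the card's values map onto the usual `transformers.TrainingArguments` field names; those names are an inference from the auto-generated card, not taken from the author's actual training script:

```python
# Hypothetical reconstruction of the training configuration from the
# hyperparameters in the model card above. Field names follow common
# transformers TrainingArguments conventions and are assumptions.
training_config = {
    "learning_rate": 1e-4,             # learning_rate: 0.0001
    "per_device_train_batch_size": 4,  # train_batch_size: 4
    "per_device_eval_batch_size": 8,   # eval_batch_size: 8
    "seed": 42,
    "adam_beta1": 0.9,                 # optimizer: Adam with betas=(0.9,0.999)
    "adam_beta2": 0.999,
    "adam_epsilon": 1e-8,              # epsilon: 1e-08
    "lr_scheduler_type": "linear",
    "warmup_steps": 5,                 # lr_scheduler_warmup_steps: 5
    "num_train_epochs": 20,
    "fp16": True,                      # mixed_precision_training: Native AMP
}
```

On a matching Transformers version these values could be passed as `transformers.TrainingArguments(output_dir="...", **training_config)`, though the original run may have used different field names or additional options.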
adapter_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3da5f92ae9e7e44e98f7ee65c101aa8af277486a5ff37d149b7673471798cb0b
+ size 160069834
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:d24da3ee9fa6b6c026388a2f9109d66b49ec9fe3c2b77c2fdb7aa247ead34e45
+ oid sha256:b4b8464bd8cc2d8c1df2f59cb3cb31a16ffb9f7bf02e03b7c0483d3cf94339d1
  size 159967880
all_results.json ADDED
@@ -0,0 +1,7 @@
+ {
+     "epoch": 20.0,
+     "train_loss": 0.21532799505366712,
+     "train_runtime": 2152.3369,
+     "train_samples_per_second": 2.379,
+     "train_steps_per_second": 0.595
+ }
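The throughput figures above are internally consistent with the card's batch size of 4 and 20 epochs: they imply roughly 256 training examples (matching the `arc_cot_256` name) and about 64 optimizer steps per epoch. A quick sanity check, assuming only the numbers reported in this file and the card:

```python
# Cross-check the runtime figures reported in all_results.json.
train_runtime = 2152.3369      # seconds, from all_results.json
samples_per_second = 2.379     # from all_results.json
steps_per_second = 0.595       # from all_results.json
num_epochs = 20                # num_epochs from the model card
batch_size = 4                 # train_batch_size from the model card

total_samples = samples_per_second * train_runtime   # ~5120 examples seen
total_steps = steps_per_second * train_runtime       # ~1280 optimizer steps

examples_per_epoch = total_samples / num_epochs      # ~256
steps_per_epoch = total_steps / num_epochs           # ~64 = 256 / 4
```

The ~256 examples per epoch is an inference from these figures, not something the card states directly.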
train_results.json ADDED
@@ -0,0 +1,7 @@
+ {
+     "epoch": 20.0,
+     "train_loss": 0.21532799505366712,
+     "train_runtime": 2152.3369,
+     "train_samples_per_second": 2.379,
+     "train_steps_per_second": 0.595
+ }
trainer_state.json ADDED
The diff for this file is too large to render. See raw diff