Boffl committed (verified)
Commit 7590f5b · 1 Parent(s): 4d62b7f

Upload README.md with huggingface_hub

Files changed (1)
  1. README.md +7 -7
README.md CHANGED
@@ -7,14 +7,14 @@ tags:
 - lora
 - generated_from_trainer
 model-index:
-- name: llama3.1_lora_pretrain_bible
+- name: llama3.1_8B_pretrain_bible
   results: []
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
 should probably proofread and complete it, then remove this comment. -->
 
-# llama3.1_lora_pretrain_bible
+# llama3.1_8B_pretrain_bible
 
 This model is a fine-tuned version of [unsloth/Meta-Llama-3.1-8B-bnb-4bit](https://huggingface.co/unsloth/Meta-Llama-3.1-8B-bnb-4bit) on the pretrain_bible dataset.
 
@@ -44,7 +44,7 @@ The following hyperparameters were used during training:
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: cosine
 - lr_scheduler_warmup_ratio: 0.1
-- num_epochs: 10.0
+- num_epochs: 3.0
 - mixed_precision_training: Native AMP
 
 ### Training results
 
@@ -53,8 +53,8 @@ The following hyperparameters were used during training:
 
 ### Framework versions
 
-- PEFT 0.13.2
-- Transformers 4.44.2
+- PEFT 0.12.0
+- Transformers 4.45.2
 - Pytorch 2.3.1+cu121
-- Datasets 3.0.2
-- Tokenizers 0.19.1
+- Datasets 2.21.0
+- Tokenizers 0.20.1
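
For context on the second hunk, the listed hyperparameters correspond roughly to the following `transformers` `TrainingArguments`. This is a sketch inferred from the card, not the author's actual training script: `output_dir` is a placeholder, and settings outside the hunk (learning rate, batch sizes, dataset wiring) are omitted.

```python
# Sketch only: reconstructs the TrainingArguments implied by the card's
# hyperparameter list after this commit. output_dir is a placeholder;
# hyperparameters not shown in the hunk are left out.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="llama3.1_8B_pretrain_bible",  # placeholder
    num_train_epochs=3.0,        # changed from 10.0 in this commit
    lr_scheduler_type="cosine",  # lr_scheduler_type: cosine
    warmup_ratio=0.1,            # lr_scheduler_warmup_ratio: 0.1
    adam_beta1=0.9,              # optimizer: Adam with betas=(0.9,0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,           # ... and epsilon=1e-08
    fp16=True,                   # mixed_precision_training: Native AMP
)
```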
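Likewise, since the card names the base checkpoint and PEFT, loading the LoRA adapter would look roughly like the sketch below. The adapter repo id is hypothetical (the commit does not state it; it is inferred from the author and model name), and the bnb-4bit base checkpoint additionally requires `bitsandbytes` to be installed.

```python
# Sketch only: ADAPTER_ID is hypothetical; substitute the real Hub id.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE_ID = "unsloth/Meta-Llama-3.1-8B-bnb-4bit"   # named in the card
ADAPTER_ID = "Boffl/llama3.1_8B_pretrain_bible"  # hypothetical repo id

tokenizer = AutoTokenizer.from_pretrained(BASE_ID)
base = AutoModelForCausalLM.from_pretrained(BASE_ID, device_map="auto")
model = PeftModel.from_pretrained(base, ADAPTER_ID)  # attach the LoRA adapter
```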