---
license: llama3
library_name: peft
tags:
  - trl
  - sft
  - generated_from_trainer
base_model: meta-llama/Meta-Llama-3-8B
model-index:
  - name: Genpro_Llama3-8b
    results: []
---

# Genpro_Llama3-8b

This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) on an unspecified dataset. It achieves the following results on the evaluation set:

- Loss: 0.6683
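
Since the card declares `library_name: peft`, the fine-tuned weights are presumably a PEFT adapter applied on top of the base model. The snippet below is a minimal inference sketch, not taken from the card: the adapter repo id `Lalith16/Genpro_Llama3-8b`, the dtype, and the generation settings are all assumptions.

```python
# Minimal inference sketch; the adapter repo id is a guess based on the
# card title and owner, and the dtype/generation settings are assumptions.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Meta-Llama-3-8B"
adapter_id = "Lalith16/Genpro_Llama3-8b"  # hypothetical repo id

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
# Attach the trained PEFT adapter on top of the frozen base weights.
model = PeftModel.from_pretrained(base_model, adapter_id)
model.eval()

prompt = "Explain what a LoRA adapter is in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```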

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a hypothetical reconstruction of the trainer setup follows the list):

- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 1
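
The `trl` and `sft` tags suggest these values were passed to a TRL `SFTTrainer` via `transformers.TrainingArguments`. The sketch below reconstructs that mapping under stated assumptions: the output directory, the logging/evaluation cadence, and the precision are not given in the card.

```python
# Hypothetical reconstruction of the training arguments; every value that
# is not in the hyperparameter list above is an assumption.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="Genpro_Llama3-8b",   # assumed output path
    learning_rate=2e-05,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="constant",
    warmup_ratio=0.03,
    num_train_epochs=1,
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 matches the Trainer's
    # default torch AdamW optimizer.
    optim="adamw_torch",
    evaluation_strategy="steps",     # assumption, inferred from the results table
    eval_steps=100,                  # assumption: validation loss reported every 100 steps
    logging_steps=100,               # assumption
)
```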

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.3225        | 0.0634 | 100  | 1.3185          |
| 1.1249        | 0.1267 | 200  | 1.1062          |
| 1.0249        | 0.1901 | 300  | 1.0177          |
| 1.025         | 0.2535 | 400  | 0.9535          |
| 0.9842        | 0.3169 | 500  | 0.9055          |
| 0.9721        | 0.3802 | 600  | 0.8475          |
| 0.9047        | 0.4436 | 700  | 0.8295          |
| 0.8861        | 0.5070 | 800  | 0.7987          |
| 0.7792        | 0.5703 | 900  | 0.7721          |
| 0.8329        | 0.6337 | 1000 | 0.7541          |
| 0.7882        | 0.6971 | 1100 | 0.7305          |
| 0.8286        | 0.7605 | 1200 | 0.7095          |
| 0.7418        | 0.8238 | 1300 | 0.6927          |
| 0.7594        | 0.8872 | 1400 | 0.6725          |
| 0.7717        | 0.9506 | 1500 | 0.6683          |

### Framework versions

- PEFT 0.11.1
- Transformers 4.41.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
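
To reproduce this environment, the pins above can be installed roughly as follows (the CUDA 12.1 wheel index is inferred from the `2.3.0+cu121` build tag and is an assumption about the original setup):

```bash
pip install peft==0.11.1 transformers==4.41.1 datasets==2.19.1 tokenizers==0.19.1
pip install torch==2.3.0 --index-url https://download.pytorch.org/whl/cu121
```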