---
library_name: peft
base_model: Korabbit/llama-2-ko-7b
tags:
  - axolotl
  - generated_from_trainer
model-index:
  - name: 2493e57b-de64-4dd4-b986-1eff20f15fa3
    results: []
---

[Built with Axolotl](https://github.com/axolotl-ai-cloud/axolotl)

# 2493e57b-de64-4dd4-b986-1eff20f15fa3

This model is a fine-tuned version of [Korabbit/llama-2-ko-7b](https://huggingface.co/Korabbit/llama-2-ko-7b) on an unspecified dataset. It achieves the following results on the evaluation set:

- Loss: 0.5627
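
As a usage reference, here is a minimal inference sketch with PEFT. Only the base model id comes from this card; the adapter repo id and the example prompt are illustrative assumptions (the card names the adapter `2493e57b-de64-4dd4-b986-1eff20f15fa3` but does not state where it is hosted).

```python
# Minimal inference sketch. The ADAPTER repo id below is an assumption
# inferred from the card's name, not confirmed by the card itself.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

BASE = "Korabbit/llama-2-ko-7b"
ADAPTER = "lesso14/2493e57b-de64-4dd4-b986-1eff20f15fa3"  # hypothetical repo id

tokenizer = AutoTokenizer.from_pretrained(BASE)
base_model = AutoModelForCausalLM.from_pretrained(
    BASE,
    torch_dtype=torch.float16,  # half precision to reduce memory use
    device_map="auto",          # requires the accelerate package
)
model = PeftModel.from_pretrained(base_model, ADAPTER)

prompt = "안녕하세요, 자기소개를 해주세요."  # example Korean prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```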

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):

- learning_rate: 0.000214
- train_batch_size: 4
- eval_batch_size: 4
- seed: 140
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: adamw_bnb_8bit with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 50
- training_steps: 500
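
As a rough sketch, the settings above map onto `transformers.TrainingArguments` as shown below. The `output_dir` is a placeholder, the betas and epsilon are the AdamW defaults, and the actual run was driven by Axolotl rather than this exact script.

```python
# Sketch of the reported hyperparameters expressed as TrainingArguments;
# output_dir is a placeholder, not a value from the original run.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="outputs",            # placeholder path
    learning_rate=2.14e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=140,
    gradient_accumulation_steps=2,   # 4 * 2 = total train batch size 8
    optim="adamw_bnb_8bit",          # OptimizerNames.ADAMW_BNB
    adam_beta1=0.9,                  # defaults, shown for completeness
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_steps=50,
    max_steps=500,
)
```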

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log        | 0.0005 | 1    | 1.1167          |
| 0.5741        | 0.0258 | 50   | 0.6580          |
| 0.5458        | 0.0516 | 100  | 0.6708          |
| 0.541         | 0.0774 | 150  | 0.6404          |
| 0.5495        | 0.1033 | 200  | 0.6214          |
| 0.5433        | 0.1291 | 250  | 0.6066          |
| 0.5177        | 0.1549 | 300  | 0.5884          |
| 0.535         | 0.1807 | 350  | 0.5746          |
| 0.4859        | 0.2065 | 400  | 0.5650          |
| 0.4975        | 0.2323 | 450  | 0.5632          |
| 0.4496        | 0.2581 | 500  | 0.5627          |

### Framework versions

- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1