---
library_name: peft
license: gemma
base_model: unsloth/gemma-2-2b
tags:
  - axolotl
  - generated_from_trainer
model-index:
  - name: 5a4bf373-7928-4fea-aea6-2b0160f5d1c9
    results: []
---

Built with Axolotl

# 5a4bf373-7928-4fea-aea6-2b0160f5d1c9

This model is a fine-tuned version of [unsloth/gemma-2-2b](https://huggingface.co/unsloth/gemma-2-2b) on an unspecified dataset. It achieves the following results on the evaluation set:

- Loss: 1.3595
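Because this is a PEFT adapter rather than a full set of model weights, it is loaded on top of the base model. Below is a minimal loading sketch; the adapter repo id is an assumption inferred from the uploader and the model-index name above, so verify it against the actual repository before use.

```python
# Minimal sketch for loading this PEFT adapter on top of the base model.
# The adapter repo id below is assumed from the card metadata
# (<user>/<model-index name>); verify it before use.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_model = AutoModelForCausalLM.from_pretrained("unsloth/gemma-2-2b")
tokenizer = AutoTokenizer.from_pretrained("unsloth/gemma-2-2b")
model = PeftModel.from_pretrained(base_model, "lesso10/5a4bf373-7928-4fea-aea6-2b0160f5d1c9")

# Quick smoke test: generate a short continuation.
inputs = tokenizer("Hello, world!", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```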

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a rough `TrainingArguments` equivalent is sketched after the list):

- learning_rate: 0.00021
- train_batch_size: 4
- eval_batch_size: 4
- seed: 100
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: 8-bit AdamW (bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 50
- training_steps: 500
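For reference, the listed settings map roughly onto Hugging Face `TrainingArguments` as sketched below. The actual run was driven by an Axolotl config, so treat this as an approximation, not the original configuration; `output_dir` is a placeholder.

```python
# Hypothetical TrainingArguments approximating the listed settings.
# The run itself was configured through Axolotl; this is a sketch only.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="outputs",           # placeholder path, not from the card
    learning_rate=2.1e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=100,
    gradient_accumulation_steps=2,  # effective train batch size: 4 * 2 = 8
    optim="adamw_bnb_8bit",         # 8-bit AdamW from bitsandbytes
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_steps=50,
    max_steps=500,
)
```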

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log        | 0.0000 | 1    | 1.6350          |
| 1.3774        | 0.0008 | 50   | 1.4623          |
| 1.3825        | 0.0017 | 100  | 1.4445          |
| 1.3845        | 0.0025 | 150  | 1.4355          |
| 1.2979        | 0.0033 | 200  | 1.4278          |
| 1.3492        | 0.0041 | 250  | 1.4065          |
| 1.2435        | 0.0050 | 300  | 1.3964          |
| 1.2813        | 0.0058 | 350  | 1.3751          |
| 1.3393        | 0.0066 | 400  | 1.3661          |
| 1.3273        | 0.0075 | 450  | 1.3599          |
| 1.2704        | 0.0083 | 500  | 1.3595          |

### Framework versions

- PEFT 0.13.2
- Transformers 4.46.0
- PyTorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1