---
library_name: peft
base_model: katuni4ka/tiny-random-qwen1.5-moe
tags:
  - axolotl
  - generated_from_trainer
model-index:
  - name: 5663e6cc-de8d-4ea7-87d4-be498fce4d0e
    results: []
---

Built with Axolotl

5663e6cc-de8d-4ea7-87d4-be498fce4d0e

This model is a fine-tuned version of katuni4ka/tiny-random-qwen1.5-moe; the training dataset is not specified in this card. It achieves the following results on the evaluation set:

  • Loss: 11.7402
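
Since the card includes no usage code, here is a minimal loading sketch using peft on top of the base model. The adapter repo id `lesso06/5663e6cc-de8d-4ea7-87d4-be498fce4d0e` is an assumption inferred from this card's author and model name, not something the card states.

```python
# Minimal sketch: load the PEFT adapter on top of the base model.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "katuni4ka/tiny-random-qwen1.5-moe"
adapter_id = "lesso06/5663e6cc-de8d-4ea7-87d4-be498fce4d0e"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, adapter_id)  # attaches the adapter weights

inputs = tokenizer("Hello", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0]))
```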

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (an equivalent TrainingArguments sketch follows the list):

  • learning_rate: 0.000206
  • train_batch_size: 4
  • eval_batch_size: 4
  • seed: 60
  • gradient_accumulation_steps: 8
  • total_train_batch_size: 32
  • optimizer: AdamW (torch fused) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_steps: 100
  • training_steps: 9000
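
For readers who want to approximate this run with the Hugging Face Trainer rather than Axolotl, the list above maps onto TrainingArguments roughly as follows. This is an illustrative sketch, not the exact config used for this run; `output_dir` is a placeholder.

```python
# Hedged sketch: TrainingArguments mirroring the hyperparameters above.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="outputs",               # placeholder, not from the card
    learning_rate=0.000206,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=60,
    gradient_accumulation_steps=8,      # 4 x 8 = effective train batch size 32
    optim="adamw_torch_fused",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_steps=100,
    max_steps=9000,
)
```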

Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log        | 0.0003 | 1    | 11.9364         |
| 11.7934       | 0.1516 | 500  | 11.7890         |
| 11.7818       | 0.3033 | 1000 | 11.7749         |
| 11.7769       | 0.4549 | 1500 | 11.7676         |
| 11.772        | 0.6066 | 2000 | 11.7620         |
| 11.7681       | 0.7582 | 2500 | 11.7579         |
| 11.76         | 0.9098 | 3000 | 11.7542         |
| 11.7532       | 1.0615 | 3500 | 11.7502         |
| 11.774        | 1.2131 | 4000 | 11.7484         |
| 11.7583       | 1.3648 | 4500 | 11.7457         |
| 11.7423       | 1.5164 | 5000 | 11.7446         |
| 11.7536       | 1.6681 | 5500 | 11.7429         |
| 11.7444       | 1.8197 | 6000 | 11.7420         |
| 11.753        | 1.9713 | 6500 | 11.7410         |
| 11.7538       | 2.1230 | 7000 | 11.7406         |
| 11.7483       | 2.2746 | 7500 | 11.7404         |
| 11.7539       | 2.4263 | 8000 | 11.7401         |
| 11.7568       | 2.5779 | 8500 | 11.7402         |
| 11.7279       | 2.7295 | 9000 | 11.7402         |

Framework versions

  • PEFT 0.13.2
  • Transformers 4.46.0
  • Pytorch 2.5.0+cu124
  • Datasets 3.0.1
  • Tokenizers 0.20.1
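
To approximate the training environment, the versions above can be pinned in a requirements file. The `+cu124` build tag on PyTorch is resolved by the install index rather than the version pin, so it is omitted here.

```text
peft==0.13.2
transformers==4.46.0
torch==2.5.0
datasets==3.0.1
tokenizers==0.20.1
```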