Built with Axolotl

b4eecd4f-1bd8-41bf-81ec-baf237aa16d7

This model is a fine-tuned version of katuni4ka/tiny-random-qwen1.5-moe on an unspecified (None) dataset. It achieves the following result on the evaluation set (see the loading sketch below):

  • Loss: 11.6879
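
As a minimal loading sketch (not part of the original card): the base model id and the adapter id lesso01/b4eecd4f-1bd8-41bf-81ec-baf237aa16d7 are taken from this card; everything else is standard transformers/peft usage, and the prompt is illustrative.

```python
# Minimal sketch: load the PEFT adapter on top of its base model.
# Repo ids are taken from this card; adjust if your copies live elsewhere.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "katuni4ka/tiny-random-qwen1.5-moe"
adapter_id = "lesso01/b4eecd4f-1bd8-41bf-81ec-baf237aa16d7"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, adapter_id)

inputs = tokenizer("Hello", return_tensors="pt")  # illustrative prompt
outputs = model.generate(**inputs, max_new_tokens=8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```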

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (see the sketch after this list):

  • learning_rate: 0.000201
  • train_batch_size: 4
  • eval_batch_size: 4
  • seed: 10
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 8
  • optimizer: adamw_bnb_8bit (AdamW via bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_steps: 50
  • training_steps: 500
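
For context, here is a minimal sketch of equivalent transformers.TrainingArguments. The run was driven by Axolotl, so this is an approximation rather than the original config; output_dir and the 50-step eval/logging cadence are assumptions, the cadence inferred from the results table below.

```python
# Minimal sketch: TrainingArguments mirroring the hyperparameters above.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="./outputs",         # assumption: not stated in the card
    learning_rate=0.000201,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=10,
    gradient_accumulation_steps=2,  # effective train batch size: 4 * 2 = 8
    optim="adamw_bnb_8bit",         # betas/epsilon above are the defaults
    lr_scheduler_type="cosine",
    warmup_steps=50,
    max_steps=500,
    eval_strategy="steps",          # assumption, inferred from the 50-step
    eval_steps=50,                  # evaluation cadence in the results table
    logging_steps=50,
)
```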

Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log        | 0.0002 | 1    | 11.9445         |
| 11.8991       | 0.0100 | 50   | 11.8800         |
| 11.8379       | 0.0200 | 100  | 11.8417         |
| 11.7772       | 0.0299 | 150  | 11.7982         |
| 11.6699       | 0.0399 | 200  | 11.7403         |
| 11.5849       | 0.0499 | 250  | 11.7134         |
| 11.5688       | 0.0599 | 300  | 11.6989         |
| 11.5485       | 0.0699 | 350  | 11.6922         |
| 11.5111       | 0.0798 | 400  | 11.6887         |
| 11.4880       | 0.0898 | 450  | 11.6880         |
| 11.5079       | 0.0998 | 500  | 11.6879         |

Framework versions

  • PEFT 0.13.2
  • Transformers 4.46.0
  • PyTorch 2.5.0+cu124
  • Datasets 3.0.1
  • Tokenizers 0.20.1
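
To confirm a local environment matches these versions, a quick sketch using each package's standard __version__ attribute (nothing card-specific):

```python
# Minimal sketch: compare installed versions against the card's list.
import datasets, peft, tokenizers, torch, transformers

expected = {
    "peft": "0.13.2",
    "transformers": "4.46.0",
    "torch": "2.5.0+cu124",
    "datasets": "3.0.1",
    "tokenizers": "0.20.1",
}
installed = {
    "peft": peft.__version__,
    "transformers": transformers.__version__,
    "torch": torch.__version__,
    "datasets": datasets.__version__,
    "tokenizers": tokenizers.__version__,
}
for name, want in expected.items():
    print(f"{name}: installed {installed[name]}, card lists {want}")
```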
