Built with Axolotl

d2a21a7d-9af7-48da-b2c8-47da10abd770

This model is a fine-tuned version of EleutherAI/gpt-neo-125m on an unspecified dataset. It achieves the following results on the evaluation set:

  • Loss: 2.6581
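
The card leaves usage unspecified, but since this is a PEFT adapter trained on top of EleutherAI/gpt-neo-125m, a minimal loading sketch might look like the following. The repo id is the one this card is published under; everything else (prompt, generation settings) is illustrative only.

```python
# Minimal sketch: load this PEFT adapter onto the gpt-neo-125m base model
# and run a quick generation. Assumes peft and transformers are installed.
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

repo_id = "lesso08/d2a21a7d-9af7-48da-b2c8-47da10abd770"  # repo id from this card

# AutoPeftModelForCausalLM reads the adapter config, downloads the base
# model it points at, and attaches the adapter weights in a single call.
model = AutoPeftModelForCausalLM.from_pretrained(repo_id)
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-125m")

inputs = tokenizer("Hello, world!", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```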

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-05
  • train_batch_size: 2
  • eval_batch_size: 2
  • seed: 80
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 8
  • optimizer: AdamW (bitsandbytes 8-bit, OptimizerNames.ADAMW_BNB) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_steps: 50
  • training_steps: 500
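
For reference, here is a hedged sketch of how the hyperparameters above map onto transformers.TrainingArguments. The actual run used Axolotl, whose config file is not shown in this card, and output_dir is a placeholder.

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="outputs",            # placeholder path, not from the card
    learning_rate=5e-05,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=80,
    gradient_accumulation_steps=4,   # 2 per device x 4 steps = total batch size 8
    optim="adamw_bnb_8bit",          # OptimizerNames.ADAMW_BNB
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="cosine",
    warmup_steps=50,
    max_steps=500,
)
```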

Training results

Training Loss | Epoch  | Step | Validation Loss
--------------|--------|------|----------------
No log        | 0.0000 |    1 | 3.1069
12.4427       | 0.0014 |   50 | 2.9512
11.3815       | 0.0027 |  100 | 2.7796
10.8345       | 0.0041 |  150 | 2.7294
10.5576       | 0.0054 |  200 | 2.6998
10.6862       | 0.0068 |  250 | 2.6820
11.1711       | 0.0081 |  300 | 2.6700
10.8965       | 0.0095 |  350 | 2.6634
10.6425       | 0.0108 |  400 | 2.6595
10.8171       | 0.0122 |  450 | 2.6582
10.5469       | 0.0135 |  500 | 2.6581
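
For intuition, a cross-entropy loss converts to perplexity via exp(loss), so the final validation loss of 2.6581 corresponds to a perplexity of roughly 14.3. This is my arithmetic, not a figure reported in the card.

```python
import math

final_eval_loss = 2.6581           # final validation loss from the table above
perplexity = math.exp(final_eval_loss)
print(f"{perplexity:.2f}")         # ~14.27
```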

Framework versions

  • PEFT 0.13.2
  • Transformers 4.46.0
  • PyTorch 2.5.0+cu124
  • Datasets 3.0.1
  • Tokenizers 0.20.1
