Quantization made by Richard Erkhov.

  • Github
  • Discord
  • Request more models

mpt_1000_STEPS_1e6_rate_01_beta_DPO - bnb 8bits
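
A minimal loading sketch, assuming the checkpoint stores its bitsandbytes 8-bit quantization config (so `transformers` applies it automatically) and that `bitsandbytes` and `accelerate` are installed. The repo id below is a placeholder; substitute the actual path of this repository. MPT models also require `trust_remote_code=True`:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repo id -- substitute the actual path of this repository.
repo_id = "RichardErkhov/mpt_1000_STEPS_1e6_rate_01_beta_DPO-8bits"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    device_map="auto",       # requires `accelerate`; the stored 8-bit
                             # bitsandbytes config is applied automatically
    trust_remote_code=True,  # MPT ships custom modeling code
)

prompt = "Explain direct preference optimization in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```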

Original model description:

```yaml
license: apache-2.0
base_model: mosaicml/mpt-7b-instruct
tags:
- trl
- dpo
- generated_from_trainer
model-index:
- name: mpt_1000_STEPS_1e6_rate_01_beta_DPO
  results: []
```

mpt_1000_STEPS_1e6_rate_01_beta_DPO

This model is a fine-tuned version of mosaicml/mpt-7b-instruct on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 0.6555
  • Rewards/chosen: -0.9911
  • Rewards/rejected: -1.1284
  • Rewards/accuracies: 0.6220
  • Rewards/margins: 0.1372
  • Logps/rejected: -32.8413
  • Logps/chosen: -30.7037
  • Logits/rejected: 12.5582
  • Logits/chosen: 12.5620
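
For context on the reward metrics: the trl/dpo tags indicate the standard DPO objective, in which a policy π_θ is trained against a frozen reference π_ref on preference pairs (x, y_w, y_l). Assuming trl's usual conventions, Rewards/chosen and Rewards/rejected are the implicit rewards r_θ averaged over the evaluation set, Rewards/margins is their mean difference, and Rewards/accuracies is the fraction of pairs where the chosen reward exceeds the rejected one:

```latex
\mathcal{L}_{\mathrm{DPO}}(x, y_w, y_l)
  = -\log \sigma\!\Bigl(
      \beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)}
      - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}
    \Bigr),
\qquad
r_\theta(x, y) = \beta \log \frac{\pi_\theta(y \mid x)}{\pi_{\mathrm{ref}}(y \mid x)}
```

Read this way, both completions end up less likely under the trained policy than under the reference (negative rewards), but the chosen response still outranks the rejected one in about 62% of evaluation pairs.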

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a sketch wiring them into trl's DPOTrainer follows the list):

  • learning_rate: 1e-06
  • train_batch_size: 2
  • eval_batch_size: 1
  • seed: 42
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 4
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_steps: 100
  • training_steps: 1000
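
A minimal sketch of how these settings map onto a trl-0.8-era DPOTrainer. The preference dataset is not disclosed in the card, so the dataset path below is a placeholder, and beta = 0.1 is inferred from the "01_beta" in the model name rather than stated anywhere in the card:

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

base = "mosaicml/mpt-7b-instruct"
model = AutoModelForCausalLM.from_pretrained(base, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(base)

# Placeholder: the actual dataset is undisclosed. DPOTrainer expects
# "prompt", "chosen", and "rejected" columns.
train_dataset = load_dataset("your/preference-dataset", split="train")

args = TrainingArguments(
    output_dir="mpt_1000_STEPS_1e6_rate_01_beta_DPO",
    per_device_train_batch_size=2,  # train_batch_size: 2
    per_device_eval_batch_size=1,   # eval_batch_size: 1
    gradient_accumulation_steps=2,  # total_train_batch_size: 4
    learning_rate=1e-6,
    lr_scheduler_type="cosine",
    warmup_steps=100,
    max_steps=1000,
    seed=42,
)

trainer = DPOTrainer(
    model=model,
    ref_model=None,   # trl clones the model as the frozen reference when None
    args=args,
    beta=0.1,         # inferred from "01_beta" in the model name
    train_dataset=train_dataset,
    tokenizer=tokenizer,
)
trainer.train()
```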

Training results

| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 0.7012 | 0.1 | 100 | 0.6878 | 0.0402 | 0.0262 | 0.5516 | 0.0140 | -21.2953 | -20.3903 | 14.1969 | 14.1998 |
| 0.6605 | 0.2 | 200 | 0.6893 | 0.1209 | 0.0818 | 0.5670 | 0.0391 | -20.7398 | -19.5837 | 13.0519 | 13.0548 |
| 0.657 | 0.29 | 300 | 0.6715 | -0.4737 | -0.5524 | 0.5758 | 0.0787 | -27.0816 | -25.5295 | 13.1844 | 13.1876 |
| 0.6934 | 0.39 | 400 | 0.6676 | -0.8625 | -0.9556 | 0.5934 | 0.0932 | -31.1138 | -29.4168 | 12.8462 | 12.8498 |
| 0.6891 | 0.49 | 500 | 0.6641 | -1.0231 | -1.1288 | 0.6088 | 0.1057 | -32.8455 | -31.0235 | 12.6874 | 12.6909 |
| 0.6492 | 0.59 | 600 | 0.6564 | -0.9706 | -1.0997 | 0.6462 | 0.1291 | -32.5548 | -30.4985 | 12.7748 | 12.7786 |
| 0.6512 | 0.68 | 700 | 0.6569 | -0.9892 | -1.1224 | 0.6220 | 0.1332 | -32.7819 | -30.6846 | 12.6401 | 12.6438 |
| 0.6687 | 0.78 | 800 | 0.6556 | -0.9937 | -1.1300 | 0.6330 | 0.1363 | -32.8571 | -30.7290 | 12.5528 | 12.5566 |
| 0.6668 | 0.88 | 900 | 0.6552 | -0.9899 | -1.1276 | 0.6308 | 0.1376 | -32.8330 | -30.6916 | 12.5557 | 12.5594 |
| 0.5867 | 0.98 | 1000 | 0.6555 | -0.9911 | -1.1284 | 0.6220 | 0.1372 | -32.8413 | -30.7037 | 12.5582 | 12.5620 |

Framework versions

  • Transformers 4.39.1
  • Pytorch 2.0.0+cu117
  • Datasets 2.18.0
  • Tokenizers 0.15.2