max_steps = 200
learning_rate = 1e-6
warmup_ratio = 0.1
dpo_beta = 0.4
use_rslora = True
use_loftq = False
lora_rank = 128
lora_alpha = 256
load_separate_reference_model = False
optim = "paged_lion_32bit"

Model size: 7.24B params · Tensor type: FP16 (Safetensors)
