---
base_model: defog/llama-3-sqlcoder-8b
library_name: peft
license: cc-by-sa-4.0
tags:
- trl
- orpo
- generated_from_trainer
model-index:
- name: results
  results: []
---

# results
This model is a fine-tuned version of [defog/llama-3-sqlcoder-8b](https://huggingface.co/defog/llama-3-sqlcoder-8b) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1487
- Rewards/chosen: -0.0059
- Rewards/rejected: -0.0256
- Rewards/accuracies: 0.9037
- Rewards/margins: 0.0197
- Logps/rejected: -0.2555
- Logps/chosen: -0.0585
- Logits/rejected: 0.2408
- Logits/chosen: 0.2329
- Nll Loss: 0.1244
- Log Odds Ratio: -0.2414
- Log Odds Chosen: 1.5632
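
For context, these metrics are mutually consistent under TRL's ORPO objective, which adds a log-odds-ratio penalty to the standard NLL loss:

$$\mathcal{L}_{\mathrm{ORPO}} = \mathcal{L}_{\mathrm{NLL}} - \beta\,\mathbb{E}\left[\log \sigma\!\big(\mathrm{log\,odds}(y_{\mathrm{chosen}}) - \mathrm{log\,odds}(y_{\mathrm{rejected}})\big)\right]$$

The bracketed expectation is the reported "Log Odds Ratio". Assuming TRL's default β = 0.1 (the card does not state the value used), the decomposition checks out: 0.1244 − 0.1 × (−0.2414) ≈ 0.1485, matching the reported loss of 0.1487 up to rounding.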
## Model description
More information needed
## Intended uses & limitations
More information needed
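
Pending details from the authors, here is a minimal inference sketch. The adapter repo id `your-org/results` is a hypothetical placeholder (taken from the `name: results` entry in the metadata); substitute the actual location of the adapter weights.

```python
# Minimal inference sketch for this PEFT adapter; the repo id is hypothetical.
import torch
from transformers import AutoTokenizer
from peft import AutoPeftModelForCausalLM

# "your-org/results" is a placeholder; point it at the real adapter repo.
model = AutoPeftModelForCausalLM.from_pretrained(
    "your-org/results",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("defog/llama-3-sqlcoder-8b")

prompt = "-- Return the total number of orders placed in 2023\nSELECT"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```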
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (see the configuration sketch after this list):
- learning_rate: 8e-06
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- num_epochs: 2
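
As a hedged illustration, this is how the listed settings would map onto TRL's `ORPOConfig`/`ORPOTrainer`. The dataset, LoRA parameters, and β value below are assumptions (the card states none of them); only the hyperparameters above are taken from the card.

```python
# Sketch of the training setup under stated assumptions; not the authors' exact script.
from transformers import AutoTokenizer
from datasets import load_dataset
from peft import LoraConfig
from trl import ORPOConfig, ORPOTrainer

base = "defog/llama-3-sqlcoder-8b"
tokenizer = AutoTokenizer.from_pretrained(base)

# Hypothetical preference dataset with prompt/chosen/rejected columns.
dataset = load_dataset("json", data_files="preference_pairs.jsonl", split="train")

peft_config = LoraConfig(  # assumed LoRA settings; not stated in the card
    r=16, lora_alpha=32, lora_dropout=0.05, task_type="CAUSAL_LM",
)

args = ORPOConfig(
    output_dir="results",
    learning_rate=8e-6,
    per_device_train_batch_size=2,   # train_batch_size: 2
    per_device_eval_batch_size=2,    # eval_batch_size: 2
    gradient_accumulation_steps=4,   # total train batch size: 2 x 4 = 8
    num_train_epochs=2,
    lr_scheduler_type="linear",
    warmup_steps=10,
    seed=42,
    # beta defaults to 0.1; Adam betas/epsilon default to (0.9, 0.999) / 1e-8
)

trainer = ORPOTrainer(
    model=base,                 # TRL also accepts a hub model id string here
    args=args,
    train_dataset=dataset,
    tokenizer=tokenizer,
    peft_config=peft_config,
)
trainer.train()
```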
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen | Nll Loss | Log Odds Ratio | Log Odds Chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|:--------:|:--------------:|:---------------:|
| 0.6997 | 0.4 | 144 | 0.6765 | -0.0517 | -0.0564 | 0.8447 | 0.0047 | -0.5638 | -0.5169 | -0.1910 | -0.1941 | 0.6134 | -0.6250 | 0.1486 |
| 0.206 | 0.8 | 288 | 0.1943 | -0.0081 | -0.0186 | 0.8975 | 0.0105 | -0.1858 | -0.0809 | 0.0507 | 0.0486 | 0.1574 | -0.3672 | 0.9122 |
| 0.1531 | 1.2 | 432 | 0.1592 | -0.0064 | -0.0245 | 0.9068 | 0.0182 | -0.2452 | -0.0637 | 0.2239 | 0.2196 | 0.1331 | -0.2599 | 1.4386 |
| 0.1424 | 1.6 | 576 | 0.1510 | -0.0060 | -0.0257 | 0.8975 | 0.0197 | -0.2569 | -0.0597 | 0.2172 | 0.2093 | 0.1265 | -0.2436 | 1.5494 |
| 0.1291 | 2.0 | 720 | 0.1487 | -0.0059 | -0.0256 | 0.9037 | 0.0197 | -0.2555 | -0.0585 | 0.2408 | 0.2329 | 0.1244 | -0.2414 | 1.5632 |
### Framework versions
- PEFT 0.11.1
- Transformers 4.42.1
- PyTorch 2.3.1+cu118
- Datasets 2.20.0
- Tokenizers 0.19.1