---
license: other
license_name: mrl
license_link: https://mistral.ai/licenses/MRL-0.1.md
library_name: transformers
base_model:
- mistralai/Mistral-Small-Instruct-2409
datasets:
- jondurbin/gutenberg-dpo-v0.1
- nbeerbower/gutenberg2-dpo
---

# Mistral-Small-Drummer-22B

[mistralai/Mistral-Small-Instruct-2409](https://huggingface.co/mistralai/Mistral-Small-Instruct-2409) fine-tuned on [jondurbin/gutenberg-dpo-v0.1](https://huggingface.co/datasets/jondurbin/gutenberg-dpo-v0.1) and [nbeerbower/gutenberg2-dpo](https://huggingface.co/datasets/nbeerbower/gutenberg2-dpo).

### Method

ORPO-tuned with 2x A40 GPUs on RunPod for 1 epoch.

```
learning_rate=4e-6,
lr_scheduler_type="linear",
beta=0.1,
per_device_train_batch_size=4,
per_device_eval_batch_size=4,
gradient_accumulation_steps=8,
optim="paged_adamw_8bit",
num_train_epochs=1,
```
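
For reference, a minimal sketch of how these hyperparameters map onto TRL's `ORPOConfig`/`ORPOTrainer`. This is not the exact training script: the output directory, the model-loading details, and loading only one of the two datasets are assumptions for illustration.

```
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

# Assumed setup: base model and one of the preference datasets named above.
model_name = "mistralai/Mistral-Small-Instruct-2409"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
train_dataset = load_dataset("jondurbin/gutenberg-dpo-v0.1", split="train")

config = ORPOConfig(
    output_dir="mistral-small-drummer-22b",  # hypothetical output path
    learning_rate=4e-6,
    lr_scheduler_type="linear",
    beta=0.1,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=8,
    optim="paged_adamw_8bit",
    num_train_epochs=1,
)

trainer = ORPOTrainer(
    model=model,
    args=config,
    train_dataset=train_dataset,
    tokenizer=tokenizer,
)
trainer.train()
```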

The dataset was prepared using the Mistral-Small Instruct chat format.
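
A sketch of that preparation step, assuming the datasets' standard `prompt`/`chosen`/`rejected` columns and the tokenizer's built-in chat template; the helper name is hypothetical.

```
from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-Small-Instruct-2409")
dataset = load_dataset("jondurbin/gutenberg-dpo-v0.1", split="train")

def to_instruct_format(row):
    # Wrap the raw prompt in the Mistral instruct template ([INST] ... [/INST]);
    # the chosen/rejected completions stay as plain continuations for ORPO.
    prompt = tokenizer.apply_chat_template(
        [{"role": "user", "content": row["prompt"]}],
        tokenize=False,
        add_generation_prompt=True,
    )
    return {"prompt": prompt, "chosen": row["chosen"], "rejected": row["rejected"]}

dataset = dataset.map(to_instruct_format)
```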

[Fine-tune Llama 3 with ORPO](https://mlabonne.github.io/blog/posts/2024-04-19_Fine_tune_Llama_3_with_ORPO.html)