TRL supports training LLMs with online DPO (Guo et al., 2024) using a reward model (RM). The idea of online DPO is to generate completions based on prompts and have either an RM or an LLM judge rank the responses. The policy is then updated with the ranked responses using the DPO loss.
While Guo et al. (2024) used an LLM judge, this implementation uses an RM.
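For concreteness, here is a minimal PyTorch sketch of the DPO loss applied to one such online preference pair (a conceptual illustration with made-up numbers, not the TRL implementation):

import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    # Implicit rewards are the beta-scaled log-probability ratios between
    # the current policy and the reference policy.
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    # Push the chosen implicit reward above the rejected one.
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()

# Toy numbers: summed log-probabilities of the RM-ranked chosen/rejected completions.
loss = dpo_loss(torch.tensor([-12.0]), torch.tensor([-15.0]),
                torch.tensor([-13.0]), torch.tensor([-14.0]))
print(loss)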
To make sure the online DPO script can run, you can use the following command to train an online DPO model with a dummy reward model.
python examples/scripts/online_dpo.py \
--dataset_name trl-internal-testing/tldr-preference-sft-trl-style \
--learning_rate 3e-6 \
--output_dir models/minimal/online_dpo \
--per_device_train_batch_size 1 \
--gradient_accumulation_steps 64 \
--total_episodes 30000 \
--model_name_or_path EleutherAI/pythia-14m \
--sft_model_path EleutherAI/pythia-14m \
--reward_model_path EleutherAI/pythia-14m \
--non_eos_penalty \
--stop_token eos \
--response_length 53 \
--sanity_check
The logged metrics are as follows. Here is an example tracked run at Weights and Biases.

- `eps`: Tracks the number of episodes per second.
- `objective/kl`: The mean Kullback-Leibler (KL) divergence between the current policy and the reference policy.
- `objective/entropy`: The mean entropy of the policy, indicating the randomness of the actions chosen by the policy.
- `objective/non_score_reward`: The mean reward from non-score-related sources, basically `beta * kl.sum(1)`, where `beta` is the KL penalty coefficient and `kl` is the per-token KL divergence.
- `objective/rlhf_reward`: The mean RLHF reward, which is `score - non_score_reward` (see the short sketch after this list for how these quantities relate).
- `objective/scores`: The mean scores returned by the reward model / environment.
- `objective/scores_margin`: The mean score margin (according to the external reward model) between the chosen and rejected completions.
- `rewards/accuracies`: The accuracies of online DPO's implicit reward model.
- `rewards/chosen`: The mean reward (according to online DPO's implicit reward model) of the chosen completions.
- `rewards/rejected`: The mean reward (according to online DPO's implicit reward model) of the rejected completions.
- `rewards/margins`: The mean reward margin (according to online DPO's implicit reward model) between the chosen and rejected completions.
- `logps/chosen`: The mean log probabilities of the chosen completions.
- `logps/rejected`: The mean log probabilities of the rejected completions.
- `val/num_eos_tokens`: The number of end-of-sequence (EOS) tokens generated, which can indicate the number of complete responses.
- `lr`: The current learning rate used by the optimizer.
- `episode`: The current global step or episode count in the training process.
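As a quick illustration of how the `objective/*` quantities above relate, here is a minimal sketch that follows the definitions in this list (made-up numbers, not the exact TRL code):

import torch

beta = 0.1                        # KL penalty coefficient
kl = torch.rand(4, 53) * 0.01     # per-token KL, shape (batch, response_length)
scores = torch.randn(4)           # reward model scores, one per completion

non_score_reward = beta * kl.sum(1)      # objective/non_score_reward
rlhf_reward = scores - non_score_reward  # objective/rlhf_reward
print(non_score_reward.mean().item(), rlhf_reward.mean().item())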
A few debugging and usage tips:

- `objective/rlhf_reward`: this is the ultimate objective of the RLHF training. If training works as intended, this metric should keep going up.
- If you run out of memory, you can reduce the `--per_device_train_batch_size` or increase the `--gradient_accumulation_steps` to reduce the memory footprint.
- If you have multiple GPUs, you can also train with DeepSpeed stage 3 to reduce the memory footprint: `accelerate launch --config_file examples/accelerate_configs/deepspeed_zero3.yaml`.
- We recommend using `--non_eos_penalty --stop_token eos`, which replaces the score of completions that do not end with an EOS token with a static scalar penalty `--penalty_reward_value`. This can help the model learn to generate more coherent completions (a short sketch of this score replacement follows the list).
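As a small illustration of that score replacement, here is a minimal sketch (made-up token ids and scores, not the TRL implementation):

import torch

eos_token_id = 2
penalty_reward_value = -1.0  # corresponds to --penalty_reward_value
# Two completions (token ids); the second one never emits the EOS token.
completions = torch.tensor([[15, 48, 7, 2],
                            [15, 48, 7, 9]])
scores = torch.tensor([1.3, 2.1])  # raw reward-model scores
contains_eos = (completions == eos_token_id).any(dim=1)
scores = torch.where(contains_eos, scores, torch.full_like(scores, penalty_reward_value))
print(scores)  # tensor([ 1.3000, -1.0000])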
To help you understand what your model is doing, we periodically log some sample completions from the model. In an example tracked run at Weights and Biases, the logged completions look like the following, allowing you to see the model's responses at different stages of training. By default we generate `--num_sample_generations 10` during training, but you can customize the number of generations.
In the logs, the sampled generations look like this:
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━┓
┃ query ┃ model response ┃ score ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━┩
│ SUBREDDIT: r/AskReddit │ I'm in love with a friend, and │ 3.921875 │
│ │ I don't know how to get rid of │ │
│ TITLE: How do you get someone │ those feelings. I'm │ │
│ out of your head? │ desperate.<|endoftext|>[PAD][P… │ │
│ │ │ │
│ POST: Hi, │ │ │
│ I'm 22, and I have been with my │ │ │
│ girlfriend for 5 years now. We │ │ │
│ recently moved together. We've │ │ │
│ always loved each other │ │ │
│ intensely. │ │ │
│ │ │ │
│ Problem, I recently started to │ │ │
│ have feelings for an other │ │ │
│ person (a friend). This person │ │ │
│ has had a boyfriend for now 3 │ │ │
│ years, and has absolutely no │ │ │
│ ideas. Those feelings were so │ │ │
│ strong, it was hard to hide │ │ │
│ them. After 2 months of me │ │ │
│ being distant and really sad, │ │ │
│ my girlfriend forced me to say │ │ │
│ what was bothering me. I'm not │ │ │
│ a good liar, and now she knows. │ │ │
│ │ │ │
│ We decided to give us a week │ │ │
│ alone, I went to my parents. │ │ │
│ │ │ │
│ Now, I'm completely lost. I │ │ │
│ keep on thinking about this │ │ │
│ person, and I hate that. I │ │ │
│ would like for those feelings │ │ │
│ to go away, to leave me alone. │ │ │
│ But I can't. │ │ │
│ │ │ │
│ What do I do? It's been 3 │ │ │
│ months now, and I'm just │ │ │
│ desperate. │ │ │
│ │ │ │
│ TL;DR: │ │ │
├─────────────────────────────────┼─────────────────────────────────┼──────────┤
│ SUBREDDIT: r/pettyrevenge │ My mom woke me up with a loud │ 6.84375 │
│ │ TV. I blasted Gangnam Style on │ │
│ TITLE: So, my mom woke me up │ repeat, with the bass cranked │ │
│ with a loud TV. │ up as high as it could │ │
│ │ go.<|endoftext|>[PAD][PAD][PAD… │ │
│ POST: She was in her living │ │ │
│ room, watching TV. This was at │ │ │
│ about 8:30 in the morning, and │ │ │
│ she was exercising. She turned │ │ │
│ the TV up extra loud to hear it │ │ │
│ over her excercycle, and woke │ │ │
│ me up. I went in there asking │ │ │
│ for her to turn it down. She │ │ │
│ said she didn't have to; I │ │ │
│ explained that I always used │ │ │
│ headphones so she didn't have │ │ │
│ to deal with my noise and that │ │ │
│ she should give me a little │ │ │
│ more respect, given that I paid │ │ │
│ rent at the time. │ │ │
│ │ │ │
│ She disagreed. I went back to │ │ │
│ my room, rather pissed off at │ │ │
│ the lack of equality. I had no │ │ │
│ lock on my door; but I had a │ │ │
│ dresser right next to it, so I │ │ │
│ pulled one of the drawers out │ │ │
│ enough so that it caused the │ │ │
│ door to not be openable. Then, │ │ │
│ I turned my speakers up really │ │ │
│ loud and blasted Gangnam Style │ │ │
│ on repeat, with the bass │ │ │
│ cranked up as high as it could │ │ │
│ go. │ │ │
│ │ │ │
│ If you hate Gangnam Style for │ │ │
│ being overplayed, you will see │ │ │
│ why I chose that particular │ │ │
│ song. I personally don't mind │ │ │
│ it. But here's the thing about │ │ │
│ my bass; it vibrates the walls, │ │ │
│ making one hell of a lot of │ │ │
│ noise. Needless to say, my mom │ │ │
│ was not pleased and shut off │ │ │
│ the internet. But it was oh so │ │ │
│ worth it. │ │ │
│ │ │ │
│ TL;DR: │ │ │
├─────────────────────────────────┼─────────────────────────────────┼──────────┤
Many online DPO implementation details are borrowed from the PPOv2Trainer, which is itself based on The N+ Implementation Details of RLHF with PPO: A Case Study on TL;DR Summarization. Here are some additional implementation details:
- When the score of completions that do not end with an EOS token is replaced with a scalar penalty (like `-1`) via `--non_eos_penalty --stop_token eos`, it's possible that the chosen and rejected completions have the same score. In this case, we naively select the completion with the lower index as the chosen completion (sketched below).
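A minimal sketch of that tie-breaking rule (an illustration with made-up scores, not the TRL code):

import torch

# Scores for the two sampled completions per prompt, shape (batch, 2).
# The second row is a tie (e.g., both completions received the EOS penalty).
scores = torch.tensor([[1.0, -1.0],
                       [-1.0, -1.0]])
# On a tie (>=), the completion with the lower index is treated as the chosen one.
chosen_idx = torch.where(scores[:, 0] >= scores[:, 1], torch.tensor(0), torch.tensor(1))
print(chosen_idx)  # tensor([0, 0])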
To validate that the online DPO implementation works, we ran experiments on the 1B and 6.9B models. Here are the commands we used to run the experiments. We take the SFT / RM models directly from The N+ Implementation Details of RLHF with PPO: A Case Study on TL;DR Summarization.
# 1B Online DPO experiment
accelerate launch --config_file examples/accelerate_configs/deepspeed_zero2.yaml \
examples/scripts/online_dpo.py \
--dataset_name trl-internal-testing/tldr-preference-sft-trl-style \
--learning_rate 3e-6 \
--output_dir models/minimal/online_dpo_tldr \
--per_device_train_batch_size 16 \
--gradient_accumulation_steps 4 \
--local_rollout_forward_batch_size 32 \
--num_ppo_epochs 1 \
--num_mini_batches 1 \
--total_episodes 1000000 \
--model_name_or_path cleanrl/EleutherAI_pythia-1b-deduped__sft__tldr \
--sft_model_path cleanrl/EleutherAI_pythia-1b-deduped__sft__tldr \
--reward_model_path cleanrl/EleutherAI_pythia-1b-deduped__reward__tldr \
--save_strategy no \
--non_eos_penalty \
--stop_token eos \
--beta 0.1 \
--response_length 53 \
--push_to_hub
# 6.9B Online DPO experiment
accelerate launch --config_file examples/accelerate_configs/deepspeed_zero3.yaml \
examples/scripts/online_dpo.py \
--dataset_name trl-internal-testing/tldr-preference-sft-trl-style \
--learning_rate 3e-6 \
--output_dir models/minimal/online_dpo_tldr_6.9b \
--per_device_train_batch_size 4 \
--gradient_accumulation_steps 16 \
--local_rollout_forward_batch_size 8 \
--num_ppo_epochs 1 \
--num_mini_batches 1 \
--total_episodes 1000000 \
--model_name_or_path EleutherAI/pythia-6.9b-deduped \
--sft_model_path cleanrl/EleutherAI_pythia-6.9b-deduped__sft__tldr \
--reward_model_path cleanrl/EleutherAI_pythia-6.9b-deduped__reward__tldr \
--save_strategy no \
--non_eos_penalty \
--stop_token eos \
--beta 0.1 \
--response_length 53 \
--push_to_hub
The 1B experiment can be found here:
To evaluate, we use vLLM to load the checkpoints and GPT-4 as a judge model to compare the generated TL;DR summaries against the reference TL;DR summaries.
#### Using GPT-4 as a judge
python -i examples/scripts/evals/generate_tldr.py \
--model_name_or_path cleanrl/EleutherAI_pythia-1b-deduped__sft__tldr \
--judge_model gpt-4-0613 \
--output_path examples/scripts/evals/sft_tldr.csv \
--n 1000
# preferred
# response1 790
# response0 210
# Name: count, dtype: int64
python -i examples/scripts/evals/generate_tldr.py \
--model_name_or_path cleanrl/EleutherAI_pythia-6.9b-deduped__sft__tldr \
--judge_model gpt-4-0613 \
--output_path examples/scripts/evals/sft_tldr.csv \
--n 1000
# preferred
# response1 691
# response0 309
# Name: count, dtype: int64
python -i examples/scripts/evals/generate_tldr.py \
--model_name_or_path vwxyzjn/online_dpo_tldr \
--judge_model gpt-4-0613 \
--output_path examples/scripts/evals/online_dpo_tldr.csv \
--n 1000
# preferred
# response0 532
# response1 468
# Name: count, dtype: int64
python -i examples/scripts/evals/generate_tldr.py \
--model_name_or_path vwxyzjn/online_dpo_tldr_6.9b \
--judge_model gpt-4-0613 \
--output_path examples/scripts/evals/online_dpo_tldr_6.9b.csv \
--n 1000
# preferred
# response0 780
# response1 220
# Name: count, dtype: int64
We can then plot the RLHF scaling chart.
import matplotlib.pyplot as plt
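# Win rates below are the response0 (model completion) preferred counts from
# the GPT-4 judge runs above, divided by the 1000 judged examples.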
data = {
"SFT": [[1e9, 6.9e9], [210 / 1000, 309 / 1000]],
"online DPO": [[1e9, 6.9e9], [532 / 1000, 780 / 1000]],
}
for model, (x, y) in data.items():
plt.scatter(x, y, label=model)
plt.axhline(y=0.5, color="black", linestyle="-.", label="human reference summary")
plt.title("RLHF scaling by model size")
plt.xlabel("Model size")
plt.ylabel("Win rate against reference summaries\n(according to GPT-4-0613)")
plt.xscale("log")
plt.xlim(5e8, 1e10)
plt.legend()
plt.grid(True, which="both", ls="--", c="0.7")
plt.tight_layout()
plt.savefig("plot.png")
The online DPO checkpoints achieve a higher win rate as we scale up the model size. This is a good sign that the online DPO implementation is working as intended.