Reward Modeling

TRL supports custom reward modeling so that anyone can train a reward model on their own dataset and model.

Expected dataset format

The RewardTrainer expects a very specific format for the dataset, since the model is trained on pairs of examples to predict which of the two is preferred. We usually preprocess the dataset into chosen and rejected examples using Hugging Face’s chat template. Examples of such datasets include trl-internal-testing/sentiment-trl-style, trl-internal-testing/descriptiveness-trl-style, trl-internal-testing/hh-rlhf-trl-style, and trl-internal-testing/tldr-preference-trl-style, which are used throughout this page.

The dataset format roughly looks like the following:
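As a rough, illustrative sketch (the exact columns and contents vary by dataset), each row pairs a prompt with a chosen and a rejected chat completion:

{
    "prompt": "How can I store food if I don't have a pantry?",
    "chosen": [
        {"role": "user", "content": "How can I store food if I don't have a pantry?"},
        {"role": "assistant", "content": "You could store the food in a refrigerator, the freezer, or the top cupboards in your kitchen."},
    ],
    "rejected": [
        {"role": "user", "content": "How can I store food if I don't have a pantry?"},
        {"role": "assistant", "content": "I don't know."},
    ],
}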

The script to create these datasets can be found at https://github.com/huggingface/trl/tree/main/examples/datasets

Get started

Here is a command to train a simple reward model on the sentiment dataset from https://arxiv.org/abs/1909.08593 using the pythia-1b-deduped model. Training should take a few minutes on a single GPU.

python examples/scripts/rm/rm.py \
    --dataset_name trl-internal-testing/sentiment-trl-style \
    --dataset_train_split train \
    --dataset_eval_split test \
    --model_name_or_path EleutherAI/pythia-1b-deduped \
    --chat_template simple_concat_with_space \
    --learning_rate 3e-6 \
    --per_device_train_batch_size 32 \
    --per_device_eval_batch_size 32 \
    --gradient_accumulation_steps 1 \
    --logging_steps 1 \
    --eval_strategy steps \
    --max_token_length 1024 \
    --max_prompt_token_lenth 1024 \
    --remove_unused_columns False \
    --num_train_epochs 1 \
    --eval_steps=100 \
    --output_dir models/minimal/rm_sentiment_1b \
    --push_to_hub

How does it work?

You can use the RewardTrainer in the same way as the Trainer class from 🤗 Transformers. You should pass an AutoModelForSequenceClassification model to the RewardTrainer, along with a RewardConfig that configures the training hyperparameters.
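As a minimal sketch of this setup (train_dataset and eval_dataset stand for the tokenized preference splits produced in the Tokenization section below, and the hyperparameters are placeholders rather than the exact values from the command above):

from transformers import AutoModelForSequenceClassification, AutoTokenizer
from trl import RewardConfig, RewardTrainer

# A reward model outputs a single scalar score per sequence, hence num_labels=1.
model = AutoModelForSequenceClassification.from_pretrained(
    "EleutherAI/pythia-1b-deduped", num_labels=1
)
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pythia-1b-deduped")

training_args = RewardConfig(output_dir="models/minimal/rm_sentiment_1b", max_length=1024)

trainer = RewardTrainer(
    model=model,
    args=training_args,
    tokenizer=tokenizer,
    train_dataset=train_dataset,  # tokenized preference data, see the Tokenization section
    eval_dataset=eval_dataset,
)
trainer.train()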

Tokenization

Ultimately, the RewardTrainer takes tokenized data as input: the post-processed dataset object should contain the entries input_ids_chosen, attention_mask_chosen, input_ids_rejected, and attention_mask_rejected.
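For illustration, a single post-processed example is simply a dict of token ids and attention masks (the values below are made up):

{
    "input_ids_chosen": [1844, 25, 1374, ...],
    "attention_mask_chosen": [1, 1, 1, ...],
    "input_ids_rejected": [1844, 25, 1374, ...],
    "attention_mask_rejected": [1, 1, 1, ...],
}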

To make data processing easier, we typically work with chat-style datasets. For example, a chosen or rejected chat message would look like the chat variable below. We can set tokenizer.chat_template to a template string that will be used to render the chat, then call tokenizer.apply_chat_template to tokenize it, as demonstrated below:

from transformers import AutoTokenizer
chat = [
    {"content": "How can I store food if I don't have a pantry?", "role": "user"},
    {
        "content": "You could store the food in a refrigerator, the top cupboards in your kitchen, the freezer, or even in a hole in the ground.",
        "role": "assistant",
    },
]
tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.chat_template = "{% for message in messages %}{{'\n\n' if not loop.first else ''}}{{message['role']|capitalize + ': ' +message['content']}}{% endfor %}{{eos_token}}"
print(tokenizer.apply_chat_template(chat, tokenize=False))
# User: How can I store food if I don't have a pantry?
#
# Assistant: You could store the food in a refrigerator, the top cupboards in your kitchen, the freezer, or even in a hole in the ground.<|endoftext|>

To make the tokenization process as transparent as possible, we provide a DatasetProcessor class that can be used to tokenize the dataset and visualize the tokenization process. Here is an example of how to use it:

from transformers import AutoTokenizer
from datasets import load_dataset
from trl.dataset_processor import (
    CHAT_TEMPLATES,
    INPUT_IDS_CHOSEN_KEY,
    DatasetConfig,
    PreferenceDatasetProcessor,
    visualize_token,
)
dataset_config = DatasetConfig(
    dataset_name="trl-internal-testing/sentiment-trl-style",
    chat_template="simple_chat",
    max_token_length=1024,
    max_prompt_token_lenth=1024,
)
tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.chat_template = CHAT_TEMPLATES["simple_chat"]
dataset = load_dataset(dataset_config.dataset_name)
dataset_processor = PreferenceDatasetProcessor(tokenizer=tokenizer, config=dataset_config)
dataset_processor.sanity_check_(dataset)
dataset = dataset_processor.tokenize(dataset)
dataset = dataset_processor.filter(dataset)
dataset_processor.get_token_length_visualization(dataset, save_path="tmp.png")
train_dataset = dataset[dataset_config.dataset_train_split]
eval_dataset = dataset[dataset_config.dataset_eval_split]
visualize_token(train_dataset[0][INPUT_IDS_CHOSEN_KEY], tokenizer)

visualize_token prints the tokens of the example with per-token coloring, so you can inspect exactly what the reward model will see.

dataset_processor.get_token_length_visualization saves a visualization of the token lengths of the chosen, rejected, and prompt fields to tmp.png.

Explanation of logged metrics

The logged metrics include train/loss, train/grad_norm, train/learning_rate, eval/accuracy, eval/loss, eval/runtime, eval/samples_per_second, and eval/steps_per_second. An example tracked run is available on Weights and Biases.

What is my model doing exactly?

To help you understand what your model is doing, we periodically log sample completions together with the reward logits produced by the trained model, so in the tracked run on Weights and Biases you can see how the model scores completions at different stages of training. You can tune the frequency of this logging with the --eval_strategy steps and --eval_steps=100 options.

Cookbook

Benchmark experiments

To validate that the RM implementation works, we ran experiments on TRL-style preference datasets.

python examples/scripts/rm/rm.py \
    --dataset_name trl-internal-testing/sentiment-trl-style \
    --dataset_train_split train \
    --dataset_eval_split test \
    --model_name_or_path EleutherAI/pythia-1b-deduped \
    --chat_template simple_concat_with_space \
    --learning_rate 3e-6 \
    --per_device_train_batch_size 32 \
    --per_device_eval_batch_size 32 \
    --gradient_accumulation_steps 1 \
    --logging_steps 1 \
    --eval_strategy steps \
    --max_token_length 1024 \
    --max_prompt_token_lenth 1024 \
    --remove_unused_columns False \
    --num_train_epochs 1 \
    --eval_steps=100 \
    --output_dir models/rm/rm_sentiment_1b \
    --push_to_hub \
    --hub_model_id trl-internal-testing/rm_sentiment_1b

# plot
# pip install openrlbenchmark==0.2.1a5
python -m openrlbenchmark.rlops_multi_metrics \
    --filters '?we=huggingface&wpn=trl&xaxis=train/epoch&ceik=output_dir&cen=output_dir&metrics=train/loss&metrics=eval/accuracy&metrics=eval/loss&metrics=train/grad_norm&metrics=train/learning_rate&metrics=eval/runtime&metrics=eval/samples_per_second&metrics=eval/steps_per_second' \
        "models/rm/rm_sentiment_1b?tag=pr-1646" \
    --env-ids models/rm/rm_sentiment_1b \
    --pc.ncols 4 \
    --pc.ncols-legend 1 \
    --pc.xlabel "Epoch" \
    --output-filename benchmark/trl/pr-1646/rm_sentiment_1b \
    --scan-history

python examples/scripts/rm/rm.py \
    --dataset_name trl-internal-testing/descriptiveness-trl-style \
    --dataset_train_split train \
    --dataset_eval_split test \
    --model_name_or_path EleutherAI/pythia-1b-deduped \
    --chat_template simple_concat_with_space \
    --learning_rate 3e-6 \
    --per_device_train_batch_size 32 \
    --per_device_eval_batch_size 32 \
    --gradient_accumulation_steps 1 \
    --logging_steps 1 \
    --eval_strategy steps \
    --max_token_length 1024 \
    --max_prompt_token_lenth 1024 \
    --remove_unused_columns False \
    --num_train_epochs 1 \
    --eval_steps=100 \
    --output_dir models/rm/rm_descriptiveness_1b \
    --push_to_hub \
    --hub_model_id trl-internal-testing/rm_descriptiveness_1b

# plot
# pip install openrlbenchmark==0.2.1a5
python -m openrlbenchmark.rlops_multi_metrics \
    --filters '?we=huggingface&wpn=trl&xaxis=train/epoch&ceik=output_dir&cen=output_dir&metrics=train/loss&metrics=eval/accuracy&metrics=eval/loss&metrics=train/grad_norm&metrics=train/learning_rate&metrics=eval/runtime&metrics=eval/samples_per_second&metrics=eval/steps_per_second' \
        "models/rm/rm_descriptiveness_1b?tag=pr-1646" \
    --env-ids models/rm/rm_descriptiveness_1b \
    --pc.ncols 4 \
    --pc.ncols-legend 1 \
    --pc.xlabel "Epoch" \
    --output-filename benchmark/trl/pr-1646/rm_descriptiveness_1b \
    --scan-history

accelerate launch --config_file examples/accelerate_configs/deepspeed_zero2.yaml \
    examples/scripts/rm/rm.py \
    --dataset_name trl-internal-testing/hh-rlhf-trl-style \
    --dataset_train_split train \
    --dataset_eval_split test \
    --model_name_or_path EleutherAI/pythia-1b-deduped \
    --chat_template simple_chat \
    --learning_rate 3e-6 \
    --per_device_train_batch_size 8 \
    --per_device_eval_batch_size 8 \
    --gradient_accumulation_steps 4 \
    --logging_steps 1 \
    --eval_strategy steps \
    --max_token_length 1280 \
    --max_prompt_token_lenth 1024 \
    --remove_unused_columns False \
    --num_train_epochs 1 \
    --eval_steps=300 \
    --bf16 \
    --output_dir models/rm/rm_hh_1b \
    --push_to_hub \
    --hub_model_id trl-internal-testing/rm_hh_1b

# plot
# pip install openrlbenchmark==0.2.1a5
python -m openrlbenchmark.rlops_multi_metrics \
    --filters '?we=huggingface&wpn=trl&xaxis=train/epoch&ceik=output_dir&cen=output_dir&metrics=train/loss&metrics=eval/accuracy&metrics=eval/loss&metrics=train/grad_norm&metrics=train/learning_rate&metrics=eval/runtime&metrics=eval/samples_per_second&metrics=eval/steps_per_second' \
        "models/rm/rm_hh_1b?tag=pr-1646" \
    --env-ids models/rm/rm_hh_1b \
    --pc.ncols 4 \
    --pc.ncols-legend 1 \
    --pc.xlabel "Epoch" \
    --output-filename benchmark/trl/pr-1646/rm_hh_1b \
    --scan-history

accelerate launch --config_file examples/accelerate_configs/deepspeed_zero2.yaml \
    examples/scripts/rm/rm.py \
    --dataset_name trl-internal-testing/tldr-preference-trl-style \
    --dataset_train_split train \
    --dataset_eval_split validation \
    --model_name_or_path EleutherAI/pythia-1b-deduped \
    --chat_template simple_concat_with_space \
    --learning_rate 3e-6 \
    --per_device_train_batch_size 8 \
    --per_device_eval_batch_size 8 \
    --gradient_accumulation_steps 4 \
    --logging_steps 1 \
    --eval_strategy steps \
    --max_token_length 1280 \
    --max_prompt_token_lenth 1024 \
    --remove_unused_columns False \
    --num_train_epochs 1 \
    --eval_steps=300 \
    --bf16 \
    --output_dir models/rm/rm_tldr_1b \
    --push_to_hub \
    --hub_model_id trl-internal-testing/rm_tldr_1b

# plot
# pip install openrlbenchmark==0.2.1a5
python -m openrlbenchmark.rlops_multi_metrics \
    --filters '?we=huggingface&wpn=trl&xaxis=train/epoch&ceik=output_dir&cen=output_dir&metrics=train/loss&metrics=eval/accuracy&metrics=eval/loss&metrics=train/grad_norm&metrics=train/learning_rate&metrics=eval/runtime&metrics=eval/samples_per_second&metrics=eval/steps_per_second' \
        "models/rm/rm_tldr_1b?tag=pr-1646" \
    --env-ids models/rm/rm_tldr_1b \
    --pc.ncols 4 \
    --pc.ncols-legend 1 \
    --pc.xlabel "Epoch" \
    --output-filename benchmark/trl/pr-1646/rm_tldr_1b \
    --scan-history

Leveraging 🤗 PEFT to train a reward model

Just pass a peft_config as a keyword argument to RewardTrainer, and the trainer will automatically take care of converting the model into a PEFT model!

from peft import LoraConfig, TaskType
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from trl import RewardTrainer, RewardConfig

# A reward model outputs a single scalar score per sequence, hence num_labels=1.
model = AutoModelForSequenceClassification.from_pretrained("gpt2", num_labels=1)
peft_config = LoraConfig(
    task_type=TaskType.SEQ_CLS,
    inference_mode=False,
    r=8,
    lora_alpha=32,
    lora_dropout=0.1,
)

...  # define training_args (a RewardConfig), tokenizer, and a tokenized preference dataset here

trainer = RewardTrainer(
    model=model,
    args=training_args,
    tokenizer=tokenizer,
    train_dataset=dataset,
    peft_config=peft_config,
)

trainer.train()

Adding a margin to the loss

As in the Llama 2 paper, you can add a margin to the loss by adding a margin column to the dataset. The reward collator will automatically pass it through and the loss will be computed accordingly.

def add_margin(row):
    # Assume you have score_chosen and score_rejected columns that you want to use to compute the margin
    return {'margin': row['score_chosen'] - row['score_rejected']}

dataset = dataset.map(add_margin)
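Conceptually, the margin is subtracted inside the pairwise ranking loss, so pairs with a larger annotated score gap are required to have a larger reward gap. A simplified sketch of the computation (not the exact trainer code) looks like this:

import torch
import torch.nn.functional as F

# Toy values: scalar reward logits for two pairs, plus their margins.
rewards_chosen = torch.tensor([1.5, 0.2])
rewards_rejected = torch.tensor([0.5, 0.4])
margin = torch.tensor([1.0, 0.0])  # from the "margin" column; effectively 0 when absent

# Pairwise ranking loss with margin.
loss = -F.logsigmoid(rewards_chosen - rewards_rejected - margin).mean()
print(loss)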

RewardConfig

class trl.RewardConfig

( output_dir: str overwrite_output_dir: bool = False do_train: bool = False do_eval: bool = False do_predict: bool = False eval_strategy: Union = 'no' prediction_loss_only: bool = False per_device_train_batch_size: int = 8 per_device_eval_batch_size: int = 8 per_gpu_train_batch_size: Optional = None per_gpu_eval_batch_size: Optional = None gradient_accumulation_steps: int = 1 eval_accumulation_steps: Optional = None eval_delay: Optional = 0 learning_rate: float = 5e-05 weight_decay: float = 0.0 adam_beta1: float = 0.9 adam_beta2: float = 0.999 adam_epsilon: float = 1e-08 max_grad_norm: float = 1.0 num_train_epochs: float = 3.0 max_steps: int = -1 lr_scheduler_type: Union = 'linear' lr_scheduler_kwargs: Union = <factory> warmup_ratio: float = 0.0 warmup_steps: int = 0 log_level: Optional = 'passive' log_level_replica: Optional = 'warning' log_on_each_node: bool = True logging_dir: Optional = None logging_strategy: Union = 'steps' logging_first_step: bool = False logging_steps: float = 500 logging_nan_inf_filter: bool = True save_strategy: Union = 'steps' save_steps: float = 500 save_total_limit: Optional = None save_safetensors: Optional = True save_on_each_node: bool = False save_only_model: bool = False restore_callback_states_from_checkpoint: bool = False no_cuda: bool = False use_cpu: bool = False use_mps_device: bool = False seed: int = 42 data_seed: Optional = None jit_mode_eval: bool = False use_ipex: bool = False bf16: bool = False fp16: bool = False fp16_opt_level: str = 'O1' half_precision_backend: str = 'auto' bf16_full_eval: bool = False fp16_full_eval: bool = False tf32: Optional = None local_rank: int = -1 ddp_backend: Optional = None tpu_num_cores: Optional = None tpu_metrics_debug: bool = False debug: Union = '' dataloader_drop_last: bool = False eval_steps: Optional = None dataloader_num_workers: int = 0 dataloader_prefetch_factor: Optional = None past_index: int = -1 run_name: Optional = None disable_tqdm: Optional = None remove_unused_columns: Optional = True label_names: Optional = None load_best_model_at_end: Optional = False metric_for_best_model: Optional = None greater_is_better: Optional = None ignore_data_skip: bool = False fsdp: Union = '' fsdp_min_num_params: int = 0 fsdp_config: Union = None fsdp_transformer_layer_cls_to_wrap: Optional = None accelerator_config: Union = None deepspeed: Union = None label_smoothing_factor: float = 0.0 optim: Union = 'adamw_torch' optim_args: Optional = None adafactor: bool = False group_by_length: bool = False length_column_name: Optional = 'length' report_to: Union = None ddp_find_unused_parameters: Optional = None ddp_bucket_cap_mb: Optional = None ddp_broadcast_buffers: Optional = None dataloader_pin_memory: bool = True dataloader_persistent_workers: bool = False skip_memory_metrics: bool = True use_legacy_prediction_loop: bool = False push_to_hub: bool = False resume_from_checkpoint: Optional = None hub_model_id: Optional = None hub_strategy: Union = 'every_save' hub_token: Optional = None hub_private_repo: bool = False hub_always_push: bool = False gradient_checkpointing: bool = False gradient_checkpointing_kwargs: Union = None include_inputs_for_metrics: bool = False eval_do_concat_batches: bool = True fp16_backend: str = 'auto' evaluation_strategy: Union = None push_to_hub_model_id: Optional = None push_to_hub_organization: Optional = None push_to_hub_token: Optional = None mp_parameters: str = '' auto_find_batch_size: bool = False full_determinism: bool = False torchdynamo: Optional = None ray_scope: Optional = 'last' 
ddp_timeout: Optional = 1800 torch_compile: bool = False torch_compile_backend: Optional = None torch_compile_mode: Optional = None dispatch_batches: Optional = None split_batches: Optional = None include_tokens_per_second: Optional = False include_num_input_tokens_seen: Optional = False neftune_noise_alpha: Optional = None optim_target_modules: Union = None batch_eval_metrics: bool = False eval_on_start: bool = False max_length: Optional = None )

Parameters

  • max_length (int, optional, defaults to None) — The maximum length of the sequences in the batch. This argument is required if you want to use the default data collator.
  • gradient_checkpointing (bool, optional, defaults to True) — If True, use gradient checkpointing to save memory at the expense of slower backward pass.

RewardConfig collects all training arguments related to the RewardTrainer class.

Using HfArgumentParser, we can turn this class into argparse arguments that can be specified on the command line.
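For example (a minimal sketch that parses only a RewardConfig):

from transformers import HfArgumentParser
from trl import RewardConfig

parser = HfArgumentParser(RewardConfig)
# Reads e.g. --output_dir, --learning_rate, --per_device_train_batch_size from the command line.
reward_config, = parser.parse_args_into_dataclasses()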

RewardTrainer

class trl.RewardTrainer

( model: Union = None args: Optional = None data_collator: Optional = None train_dataset: Optional = None eval_dataset: Union = None tokenizer: Optional = None model_init: Optional = None compute_metrics: Optional = None callbacks: Optional = None optimizers: Tuple = (None, None) preprocess_logits_for_metrics: Optional = None max_length: Optional = None peft_config: Optional = None )

The RewardTrainer can be used to train your custom reward model. It is a subclass of the transformers.Trainer class and inherits all of its attributes and methods. It is recommended to use an AutoModelForSequenceClassification as the reward model. The reward model is trained on a dataset of paired examples, where each example is a tuple of two sequences, and learns to predict which sequence in the pair is more relevant to the task at hand.

The reward trainer expects a very specific format for the dataset. Unless you pass a custom data collator in place of the default RewardDataCollatorWithPadding, the dataset should contain at least four entries, named input_ids_chosen, attention_mask_chosen, input_ids_rejected, and attention_mask_rejected.

Optionally, you can also pass a margin entry to the dataset. This entry should contain the margin used to modulate the loss of the reward model as outlined in https://ai.meta.com/research/publications/llama-2-open-foundation-and-fine-tuned-chat-models/. If you don’t pass a margin, no margin will be used.

visualize_samples

( num_print_samples: int )

Parameters

  • num_print_samples (int, defaults to 4) — The number of samples to print. Set to -1 to print all samples.

Visualize the reward model's logits predictions.
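For example, on a trained RewardTrainer instance (a minimal usage sketch):

# Prints a table with the decoded chosen/rejected texts and their reward logits.
trainer.visualize_samples(num_print_samples=4)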
