TRL is a full-stack library providing a set of tools to train transformer language models with reinforcement learning, from the supervised fine-tuning (SFT) step and the reward modeling (RM) step through to the proximal policy optimization (PPO) step. The library is integrated with 🤗 Transformers.
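During the PPO step of this pipeline, the reward the policy optimizes is typically the reward model's score minus a KL penalty that keeps the fine-tuned model close to the SFT reference model. A minimal sketch of that shaping, with an illustrative `beta` coefficient (not a TRL default) and a hypothetical function name:

```python
def kl_shaped_reward(reward, logprob_policy, logprob_ref, beta=0.1):
    """Illustrative RLHF reward shaping: the reward model's score minus a
    KL penalty estimated from per-token log-probabilities. `beta` trades
    off reward maximization against staying close to the reference model."""
    return reward - beta * (logprob_policy - logprob_ref)

# When the policy matches the reference model, the penalty vanishes.
print(kl_shaped_reward(1.0, -2.0, -2.0))  # 1.0
# When the policy assigns higher probability than the reference,
# the KL term reduces the effective reward.
print(kl_shaped_reward(0.0, -1.0, -2.0, beta=0.5))  # -0.5
```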
Check the appropriate sections of the documentation depending on your needs:
SFTTrainer: Supervised fine-tune your model easily with SFTTrainer.
RewardTrainer: Easily train your reward model using RewardTrainer.
PPOTrainer: Further fine-tune the supervised fine-tuned model using the PPO algorithm.
DPOTrainer: Direct Preference Optimization training using DPOTrainer.
TextEnvironment: Train your model to use tools with RL via TextEnvironments.
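The reward modeling step fits a pairwise preference model: given a preferred and a rejected response, the trainer minimizes the Bradley-Terry loss, the negative log-sigmoid of the score margin. A minimal pure-Python sketch of that loss (the function name is illustrative, not TRL's API):

```python
import math

def pairwise_reward_loss(chosen_score, rejected_score):
    """Bradley-Terry pairwise loss for reward modeling:
    -log(sigmoid(r_chosen - r_rejected)). Inputs are the scalar
    rewards the model assigns to the preferred and rejected responses."""
    margin = chosen_score - rejected_score
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# The loss shrinks when the chosen response outscores the rejected one,
# and grows when the ranking is inverted.
print(round(pairwise_reward_loss(2.0, 0.0), 4))  # 0.1269
print(round(pairwise_reward_loss(0.0, 2.0), 4))  # 2.1269
```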
Blog posts:
Preference Optimization for Vision Language Models with TRL
Illustrating Reinforcement Learning from Human Feedback
Fine-tuning 20B LLMs with RLHF on a 24GB consumer GPU
StackLLaMA: A hands-on guide to train LLaMA with RLHF
Fine-tune Llama 2 with DPO
Finetune Stable Diffusion Models with DDPO via TRL