TRL is a full-stack library providing a set of tools to train transformer language models with reinforcement learning, from the supervised fine-tuning (SFT) and reward modeling (RM) steps through to the Proximal Policy Optimization (PPO) step. The library is integrated with 🤗 Transformers.
Learn post-training with TRL and other libraries in the 🤗 smol course.
- `SFTTrainer`: easily fine-tune your model with supervised fine-tuning (SFT)
- `RewardTrainer`: easily train your reward model
- `PPOTrainer`: further fine-tune the supervised fine-tuned model with the PPO algorithm
- `DPOTrainer`: train your model with Direct Preference Optimization (DPO)
- `TextEnvironment`: a text environment for training your model to use tools with RL

Short usage sketches for each of these follow below.
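As a starting point, here is a minimal `SFTTrainer` example modeled on TRL's quickstart. The `imdb` dataset and `facebook/opt-350m` model are illustrative choices, and argument names such as `dataset_text_field` and `max_seq_length` vary across TRL releases, so check the docs for your version:

```python
# Minimal SFT sketch, modeled on the TRL quickstart.
from datasets import load_dataset
from trl import SFTTrainer

dataset = load_dataset("imdb", split="train")

trainer = SFTTrainer(
    "facebook/opt-350m",          # model name or an already-loaded model
    train_dataset=dataset,
    dataset_text_field="text",    # column holding the raw training text
    max_seq_length=512,
)
trainer.train()
```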
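For reward modeling, a hedged `RewardTrainer` sketch: the reward model is a sequence classifier with a single output, trained on (chosen, rejected) preference pairs. The `Anthropic/hh-rlhf` dataset, `gpt2` backbone, and tokenized column names are assumptions that may differ by TRL version:

```python
# Hedged RewardTrainer sketch: a one-output classifier scores
# chosen vs. rejected completions.
from datasets import load_dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from trl import RewardTrainer

model = AutoModelForSequenceClassification.from_pretrained("gpt2", num_labels=1)
tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token
model.config.pad_token_id = tokenizer.pad_token_id

# Preference dataset with "chosen"/"rejected" text columns.
dataset = load_dataset("Anthropic/hh-rlhf", split="train")

def tokenize_pair(example):
    # Tokenize both sides of the pair into the paired-column format
    # that RewardTrainer's data collator expects.
    chosen = tokenizer(example["chosen"], truncation=True, max_length=512)
    rejected = tokenizer(example["rejected"], truncation=True, max_length=512)
    return {
        "input_ids_chosen": chosen["input_ids"],
        "attention_mask_chosen": chosen["attention_mask"],
        "input_ids_rejected": rejected["input_ids"],
        "attention_mask_rejected": rejected["attention_mask"],
    }

dataset = dataset.map(tokenize_pair)

trainer = RewardTrainer(model=model, tokenizer=tokenizer, train_dataset=dataset)
trainer.train()
```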
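For the PPO step, a sketch of a single `PPOTrainer` iteration under the long-standing PPO interface (recent TRL releases reworked it, so treat the exact calls as version-dependent). The query is illustrative, and the constant reward stands in for a reward model's score:

```python
# Sketch of one step of the classic PPOTrainer loop.
import torch
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead, PPOConfig, PPOTrainer

# Policy with a value head, plus a frozen reference model for the KL penalty.
model = AutoModelForCausalLMWithValueHead.from_pretrained("gpt2")
ref_model = AutoModelForCausalLMWithValueHead.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token

config = PPOConfig(batch_size=1, mini_batch_size=1)
ppo_trainer = PPOTrainer(config, model, ref_model, tokenizer)

# One RL step: generate a response to a query, attach a reward, optimize.
query_tensor = tokenizer.encode("This morning I went to the", return_tensors="pt")
response_tensor = ppo_trainer.generate(
    query_tensor[0], return_prompt=False, max_new_tokens=20
)
reward = [torch.tensor(1.0)]  # dummy reward; normally a reward model's score
stats = ppo_trainer.step([query_tensor[0]], [response_tensor[0]], reward)
```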
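A `DPOTrainer` sketch follows: DPO optimizes directly on preference pairs, with no separate reward model or PPO loop. The dataset name below is hypothetical; what matters is that it provides `prompt`, `chosen`, and `rejected` columns. The `tokenizer` argument name also differs across TRL versions:

```python
# Hedged DPOTrainer sketch over a preference dataset.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOTrainer

model = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token

# Hypothetical dataset with "prompt", "chosen", and "rejected" columns;
# replace with your own preference data.
dataset = load_dataset("your-org/preference-dataset", split="train")

trainer = DPOTrainer(
    model=model,           # with no ref_model, TRL uses a frozen copy of the policy
    train_dataset=dataset,
    tokenizer=tokenizer,   # newer releases name this processing_class
)
trainer.train()
```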
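Finally, a rough `TextEnvironment` sketch, written from the TRL tool-use docs and best treated as an assumption about the exact signatures: the environment interleaves model generations with tool calls, and its outputs can be fed to a PPO step. The `reward_fn`, prompt, and calculator tool here are illustrative:

```python
# Hedged TextEnvironment sketch for tool use with RL; signatures are
# assumptions based on the TRL tool-use docs.
from transformers import AutoTokenizer, load_tool
from trl import AutoModelForCausalLMWithValueHead, TextEnvironment

model = AutoModelForCausalLMWithValueHead.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token

def reward_fn(responses):
    # Hypothetical reward: score each completed episode's text.
    return [1.0 if "42" in r else 0.0 for r in responses]

env = TextEnvironment(
    model,
    tokenizer,
    {"SimpleCalculatorTool": load_tool("ybelkada/simple-calculator")},
    reward_fn,
    "Answer the question using the calculator tool.\n",  # task prompt
    max_turns=2,
)

# Each episode alternates model generations and tool calls; the returned
# tensors and rewards can be passed to PPOTrainer.step.
queries, responses, masks, rewards, histories = env.run(["What is 6 times 7?"])
```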
For more background, the following blog posts cover TRL and RLHF:

- Preference Optimization for Vision Language Models with TRL (published July 10, 2024)
- Putting RL back in RLHF (published June 12, 2024)
- Finetune Stable Diffusion Models with DDPO via TRL (published September 29, 2023)
- Fine-tune Llama 2 with DPO (published August 8, 2023)
- StackLLaMA: A hands-on guide to train LLaMA with RLHF (published April 5, 2023)
- Fine-tuning 20B LLMs with RLHF on a 24GB consumer GPU (published March 9, 2023)
- Illustrating Reinforcement Learning from Human Feedback (published December 9, 2022)