TRL - Transformer Reinforcement Learning

TRL is a full-stack library that provides a set of tools to train transformer language models with reinforcement learning, from supervised fine-tuning (SFT) and reward modeling (RM) to Proximal Policy Optimization (PPO). The library is integrated with 🤗 Transformers.

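As a quick illustration of the SFT step, here is a minimal sketch using TRL's SFTTrainer. It assumes a recent TRL release; the model id and dataset below are only example choices and can be swapped for any causal language model and compatible dataset.

```python
from datasets import load_dataset
from trl import SFTTrainer

# Example dataset; any conversational or plain-text dataset supported by TRL works similarly.
dataset = load_dataset("trl-lib/Capybara", split="train")

# The model can be given as a Hub id (loaded for you) or as a preloaded transformers model.
trainer = SFTTrainer(
    model="Qwen/Qwen2.5-0.5B",
    train_dataset=dataset,
)

# Runs supervised fine-tuning with the default SFT configuration.
trainer.train()
```

The reward modeling and PPO steps follow the same trainer pattern with their own trainer classes and configs; see the API documentation and examples below.
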
Learn post-training

Learn post-training with the 🤗 smol course.

API documentation

Examples

Blog posts

Preference Optimization for Vision Language Models with TRL (July 10, 2024)

Putting RL back in RLHF (June 12, 2024)

Finetune Stable Diffusion Models with DDPO via TRL (September 29, 2023)

Fine-tune Llama 2 with DPO (August 8, 2023)

StackLLaMA: A hands-on guide to train LLaMA with RLHF (April 5, 2023)

Fine-tuning 20B LLMs with RLHF on a 24GB consumer GPU (March 9, 2023)

Illustrating Reinforcement Learning from Human Feedback (December 9, 2022)