TRL - Transformer Reinforcement Learning

TRL is a full-stack library that provides a set of tools to train transformer language models with reinforcement learning, from the Supervised Fine-Tuning (SFT) step and the Reward Modeling (RM) step to the Proximal Policy Optimization (PPO) step. The library is integrated with 🤗 Transformers.
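
As a quick taste of the workflow, the snippet below is a minimal sketch of the SFT step with TRL's `SFTTrainer`. It assumes a recent TRL release; the model and dataset names are illustrative placeholders that you can swap for your own.

```python
# Minimal SFT sketch: fine-tune a small causal LM on a chat dataset.
# Assumes a recent TRL release; model and dataset names are illustrative.
from datasets import load_dataset
from trl import SFTTrainer

# Load a conversational dataset from the Hugging Face Hub.
dataset = load_dataset("trl-lib/Capybara", split="train")

# SFTTrainer builds on the 🤗 Transformers Trainer and handles tokenization
# and formatting of the dataset for supervised fine-tuning.
trainer = SFTTrainer(
    model="Qwen/Qwen2.5-0.5B",  # any causal LM checkpoint on the Hub
    train_dataset=dataset,
)
trainer.train()
```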

Check the appropriate sections of the documentation depending on your needs:

- API documentation
- Examples
- Blog posts:

  - Preference Optimization for Vision Language Models with TRL
  - Illustrating Reinforcement Learning from Human Feedback
  - Fine-tuning 20B LLMs with RLHF on a 24GB consumer GPU
  - StackLLaMA: A hands-on guide to train LLaMA with RLHF
  - Fine-tune Llama 2 with DPO
  - Finetune Stable Diffusion Models with DDPO via TRL
