Section under construction. Feel free to contribute!
Online methods such as Online DPO or Nash-MD require the model to generate completions during training, which is often slow and can significantly increase training time. To speed up generation, you can use vLLM, a library that enables fast generation through PagedAttention. TRL's online trainers support vLLM, greatly improving training speed.
To use vLLM, first install it:

```shell
pip install vllm
```
Then, enable it by passing `use_vllm=True` in the training arguments:

```python
from trl import OnlineDPOConfig

training_args = OnlineDPOConfig(..., use_vllm=True)
```
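For context, here is a minimal sketch of how the config fits into a full training run. The model name, judge, and dataset below are illustrative placeholders, not requirements; adapt them to your setup.

```python
# Sketch: wiring use_vllm=True into an Online DPO run.
# Model, judge, and dataset choices here are examples only.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import OnlineDPOConfig, OnlineDPOTrainer, PairRMJudge

model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2-0.5B-Instruct")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2-0.5B-Instruct")
judge = PairRMJudge()
dataset = load_dataset("trl-lib/ultrafeedback-prompt", split="train")

# use_vllm=True routes completion generation through vLLM
training_args = OnlineDPOConfig(output_dir="online-dpo-vllm", use_vllm=True)
trainer = OnlineDPOTrainer(
    model=model,
    judge=judge,
    args=training_args,
    processing_class=tokenizer,
    train_dataset=dataset,
)
trainer.train()
```

The only change relative to a standard run is the `use_vllm=True` flag; the rest of the trainer setup is unchanged.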