Section under construction. Feel free to contribute!
Sequence lengths in the dataset can vary widely. When data is batched, sequences are padded to match the longest one in the batch, which can cause high memory usage, even if most sequences are relatively short.
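A toy calculation illustrates the problem: a single long outlier forces every sequence in the batch to be padded to its length (the numbers below are made up for illustration):

```python
# Token counts for one batch: three short sequences and one long outlier.
lengths = [12, 15, 11, 512]

# Every row is padded to the longest sequence in the batch.
padded_tokens = max(lengths) * len(lengths)
useful_tokens = sum(lengths)

print(f"padded: {padded_tokens}, useful: {useful_tokens}")
# Here, most of the memory is spent on padding tokens.
```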
To reduce memory usage, it’s important to truncate sequences to a reasonable length. While TRL trainers truncate sequences by default, you may want to adjust the default truncation length to better align with your specific use case.
In DPO, truncation is applied first to the prompt and to the completion, via the `max_prompt_length` and `max_completion_length` parameters. The `max_length` parameter is then used to truncate the resulting prompt-completion sequence.
To set the truncation parameters, use the following code snippet:

```python
from trl import DPOConfig

training_args = DPOConfig(..., max_prompt_length=..., max_length=...)
```
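The two stages compose roughly as follows. This is a hypothetical sketch, not TRL's actual implementation: the parameter names mirror `DPOConfig`, but which end of each sequence gets dropped is an assumption here.

```python
def truncate_pair(prompt_ids, completion_ids,
                  max_prompt_length, max_completion_length, max_length):
    """Hypothetical sketch of two-stage DPO truncation (not TRL internals)."""
    # Stage 1: truncate prompt and completion independently.
    # Assumption: keep the end of the prompt, the start of the completion.
    prompt_ids = prompt_ids[-max_prompt_length:]
    completion_ids = completion_ids[:max_completion_length]
    # Stage 2: truncate the concatenated sequence to max_length.
    return (prompt_ids + completion_ids)[:max_length]
```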
You can also use the `max_completion_length` parameter to truncate the completion, though this is less common since the goal is typically to preserve the completion's full length whenever possible.

```python
from trl import DPOConfig

training_args = DPOConfig(..., max_completion_length=...)
```
The following technique, packing, applies only to SFT.
Truncation has several drawbacks:

- **Loss of information:** key data at the end of a sequence may be discarded.
- **Choosing the truncation length:** too long, and memory savings are limited; too short, and important information is lost.
Packing, introduced in Raffel et al., 2020, addresses these issues by grouping sequences instead of truncating. It concatenates and splits dataset sequences into the desired lengths.
Packing eliminates padding, preserves all sequence information, and allows for flexible sequence lengths, making it a more efficient alternative to truncation. To enable packing, use `packing=True` in the `SFTConfig`:

```python
from trl import SFTConfig

training_args = SFTConfig(..., packing=True, max_seq_length=512)
```
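Conceptually, packing concatenates all tokenized sequences into one stream and splits it into fixed-size chunks. The sketch below illustrates the idea only and is not TRL's implementation (which handles details such as attention boundaries and leftover tokens differently):

```python
from itertools import chain

def pack(sequences, seq_length):
    """Toy packing sketch: concatenate token lists, then split the
    stream into fixed-size chunks. A trailing partial chunk is dropped."""
    stream = list(chain.from_iterable(sequences))
    return [stream[i:i + seq_length]
            for i in range(0, len(stream) - seq_length + 1, seq_length)]
```

Because chunks are filled end to end, no padding tokens are needed and every batch row has exactly `seq_length` useful tokens.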
Packing may cause batch contamination, where adjacent sequences influence one another. This can be problematic for some applications. For more details, see #1230.