# Training customization

TRL is designed with modularity in mind so that users are able to efficiently customize the training loop for their needs. Below are some examples of how you can apply and test different techniques.

## Train on multiple GPUs / nodes

The trainers in TRL use 🤗 Accelerate to enable distributed training across multiple GPUs or nodes. To do so, first create an 🤗 Accelerate config file by running

```bash
accelerate config
```

and answering the questions according to your multi-GPU / multi-node setup. You can then launch distributed training by running:

```bash
accelerate launch your_script.py
```

We also provide config files in the [examples folder](https://github.com/huggingface/trl/tree/main/examples/accelerate_configs) that can be used as templates. To use these templates, simply pass the path to the config file when launching a job, e.g.:

```shell
accelerate launch --config_file=examples/accelerate_configs/multi_gpu.yaml --num_processes {NUM_GPUS} path_to_script.py --all_arguments_of_the_script
```

Refer to the [examples page](https://github.com/huggingface/trl/tree/main/examples) for more details.

### Distributed training with DeepSpeed

All of the trainers in TRL can be run on multiple GPUs together with DeepSpeed ZeRO-{1,2,3} for efficient sharding of the optimizer states, gradients, and model weights. To do so, run:

```shell
accelerate launch --config_file=examples/accelerate_configs/deepspeed_zero{1,2,3}.yaml --num_processes {NUM_GPUS} path_to_your_script.py --all_arguments_of_the_script
```

Note that for ZeRO-3, a small tweak is needed to initialize your reward model on the correct device via the `zero3_init_context_manager()` context manager. In particular, this is needed to avoid DeepSpeed hanging after a fixed number of training steps. Here is a snippet of what is involved from the [`sentiment_tuning`](https://github.com/huggingface/trl/blob/main/examples/scripts/ppo.py) example:

```python
ds_plugin = ppo_trainer.accelerator.state.deepspeed_plugin
if ds_plugin is not None and ds_plugin.is_zero3_init_enabled():
    with ds_plugin.zero3_init_context_manager(enable=False):
        sentiment_pipe = pipeline("sentiment-analysis", model="lvwerra/distilbert-imdb", device=device)
else:
    sentiment_pipe = pipeline("sentiment-analysis", model="lvwerra/distilbert-imdb", device=device)
```

Consult the 🤗 Accelerate [documentation](https://huggingface.co/docs/accelerate/usage_guides/deepspeed) for more information about the DeepSpeed plugin.

## Use different optimizers

By default, the `PPOTrainer` creates a `torch.optim.Adam` optimizer. You can create and define a different optimizer and pass it to `PPOTrainer`:

```python
import torch
from transformers import GPT2Tokenizer

from trl import PPOTrainer, PPOConfig, AutoModelForCausalLMWithValueHead

# 1. load a pretrained model
model = AutoModelForCausalLMWithValueHead.from_pretrained('gpt2')
ref_model = AutoModelForCausalLMWithValueHead.from_pretrained('gpt2')
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')

# 2. define config
ppo_config = {'batch_size': 1, 'learning_rate': 1e-5}
config = PPOConfig(**ppo_config)

# 3. create a custom optimizer
optimizer = torch.optim.SGD(model.parameters(), lr=config.learning_rate)

# 4. initialize trainer
ppo_trainer = PPOTrainer(config, model, ref_model, tokenizer, optimizer=optimizer)
```
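If you also want control over how the learning rate evolves during training, the classic `PPOTrainer` additionally accepts an `lr_scheduler` argument alongside the custom optimizer. The sketch below is illustrative only; the scheduler choice (`ExponentialLR`) and its `gamma` value are assumptions, not recommendations:

```python
import torch

# custom optimizer as above
optimizer = torch.optim.SGD(model.parameters(), lr=config.learning_rate)
# any torch.optim.lr_scheduler works here; ExponentialLR is just an example choice
lr_scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.9)

ppo_trainer = PPOTrainer(config, model, ref_model, tokenizer, optimizer=optimizer, lr_scheduler=lr_scheduler)
```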
For memory-efficient fine-tuning, you can also pass the `Adam8bit` optimizer from `bitsandbytes`:

```python
import torch
import bitsandbytes as bnb

from transformers import GPT2Tokenizer
from trl import PPOTrainer, PPOConfig, AutoModelForCausalLMWithValueHead

# 1. load a pretrained model
model = AutoModelForCausalLMWithValueHead.from_pretrained('gpt2')
ref_model = AutoModelForCausalLMWithValueHead.from_pretrained('gpt2')
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')

# 2. define config
ppo_config = {'batch_size': 1, 'learning_rate': 1e-5}
config = PPOConfig(**ppo_config)

# 3. create the 8-bit Adam optimizer
optimizer = bnb.optim.Adam8bit(model.parameters(), lr=config.learning_rate)

# 4. initialize trainer
ppo_trainer = PPOTrainer(config, model, ref_model, tokenizer, optimizer=optimizer)
```

### Use LION optimizer

You can also use the new [LION optimizer from Google](https://huggingface.co/papers/2302.06675). First, take the source code of the optimizer definition [here](https://github.com/lucidrains/lion-pytorch/blob/main/lion_pytorch/lion_pytorch.py) and copy it locally so that you can import the optimizer. Make sure to initialize the optimizer with only the trainable parameters for more memory-efficient training:

```python
optimizer = Lion(filter(lambda p: p.requires_grad, model.parameters()), lr=config.learning_rate)
...
ppo_trainer = PPOTrainer(config, model, ref_model, tokenizer, optimizer=optimizer)
```

We advise you to use the learning rate that you would use for `Adam` divided by 3, as pointed out [here](https://github.com/lucidrains/lion-pytorch#lion---pytorch). We observed an improvement when using this optimizer compared to classic Adam (check the full logs [here](https://wandb.ai/distill-bloom/trl/runs/lj4bheke?workspace=user-younesbelkada)).
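Putting the pieces together, here is a minimal sketch of the LION setup. It assumes you have copied the optimizer definition into a local module named `lion_pytorch.py`; the module name and the base Adam learning rate of `1e-5` are illustrative assumptions:

```python
from lion_pytorch import Lion  # local copy of the LION optimizer definition (assumed module name)

adam_lr = 1e-5  # the learning rate you would normally use with Adam
optimizer = Lion(
    filter(lambda p: p.requires_grad, model.parameters()),  # trainable parameters only
    lr=adam_lr / 3,  # rule of thumb: Adam learning rate divided by 3
)
ppo_trainer = PPOTrainer(config, model, ref_model, tokenizer, optimizer=optimizer)
```

Filtering on `requires_grad` matters most when only a subset of the model (for example, adapter or value-head parameters) is trainable, since it keeps the optimizer state restricted to those parameters.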