Iterative fine-tuning is a training method that enables you to perform custom actions (for example, generation and filtering) between optimization steps. In TRL, we provide an easy-to-use API to fine-tune your models iteratively in just a few lines of code.
To get started quickly, instantiate a model and a tokenizer, then create the trainer:
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import IterativeSFTTrainer

model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Make sure the tokenizer has a padding token.
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token

trainer = IterativeSFTTrainer(
    model=model,
    tokenizer=tokenizer,
)
You can pass either a list of tensors or a list of strings to the step method:
# Option 1: pass pre-tokenized inputs as lists of tensors
# (one torch.LongTensor per sample).
inputs = {
    "input_ids": input_ids,
    "attention_mask": attention_mask,
}
trainer.step(**inputs)

# Option 2: pass raw strings and let the trainer tokenize them.
inputs = {
    "texts": texts,
}
trainer.step(**inputs)
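Putting it together, a typical use is a loop that alternates generation, filtering, and an optimization step. The sketch below is only illustrative: num_iterations, prompts, and keep_sample are placeholders you would replace with your own schedule, data, and selection logic.

# Minimal sketch of an iterative loop: generate, filter, then step.
for _ in range(num_iterations):
    batch = tokenizer(prompts, return_tensors="pt", padding=True).to(model.device)
    generations = model.generate(**batch, max_new_tokens=64)
    generated_texts = tokenizer.batch_decode(generations, skip_special_tokens=True)

    # Custom action between optimization steps: keep only the samples you want.
    filtered_texts = [text for text in generated_texts if keep_sample(text)]

    trainer.step(texts=filtered_texts)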
For causal language models, labels will automatically be created from input_ids or from texts. When using sequence-to-sequence models, you will have to provide your own labels or texts_labels.
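For example, a sequence-to-sequence model needs its targets passed explicitly. The checkpoint name and the toy data below are purely illustrative:

from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from trl import IterativeSFTTrainer

# Illustrative checkpoint; any AutoModelForSeq2SeqLM works the same way.
model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-small")
tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-small")

trainer = IterativeSFTTrainer(model=model, tokenizer=tokenizer)

# For seq2seq models, provide the targets yourself via texts_labels.
trainer.step(
    texts=["Translate to French: Hello, how are you?"],
    texts_labels=["Bonjour, comment allez-vous ?"],
)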
IterativeSFTTrainer(
    model: PreTrainedModel = None,
    args: TrainingArguments = None,
    tokenizer: PreTrainedTokenizerBase = None,
    optimizers: typing.Tuple[torch.optim.Optimizer, torch.optim.lr_scheduler.LambdaLR] = (None, None),
    data_collator: typing.Optional[DataCollator] = None,
    eval_dataset: typing.Union[datasets.Dataset, typing.Dict[str, datasets.Dataset], NoneType] = None,
    max_length: typing.Optional[int] = None,
    truncation_mode: typing.Optional[str] = 'keep_end',
    preprocess_logits_for_metrics: typing.Optional[typing.Callable[[torch.Tensor, torch.Tensor], torch.Tensor]] = None,
    compute_metrics: typing.Optional[typing.Callable[[transformers.trainer_utils.EvalLoopOutput], typing.Dict]] = None,
    optimize_device_cache: typing.Optional[bool] = False,
)
Parameters

- model (PreTrainedModel): Model to be optimized, either an AutoModelForCausalLM or an AutoModelForSeq2SeqLM. Check the documentation of PreTrainedModel for more details.
- args (transformers.TrainingArguments): The arguments to use for training.
- tokenizer (PreTrainedTokenizerBase): Tokenizer to be used for encoding the data. Check the documentation of transformers.PreTrainedTokenizer and transformers.PreTrainedTokenizerFast for more details.
- optimizers (Tuple[torch.optim.Optimizer, torch.optim.lr_scheduler.LambdaLR]): The optimizer and scheduler to use for training.
- data_collator (DataCollator, optional): The data collator to use for training.
- eval_dataset (datasets.Dataset): The dataset to use for evaluation.
- max_length (int, defaults to None): The maximum length of the input.
- truncation_mode (str, defaults to keep_end): The truncation mode to use, either keep_end or keep_start.
- preprocess_logits_for_metrics (Callable[[torch.Tensor, torch.Tensor], torch.Tensor]): The function to use to preprocess the logits before computing the metrics.
- compute_metrics (Callable[[EvalPrediction], Dict], optional): The function to use to compute the metrics. Must take an EvalPrediction and return a dictionary mapping strings to metric values.
- optimize_device_cache (bool, optional, defaults to False): Optimize CUDA cache for slightly more memory-efficient training.

The IterativeSFTTrainer can be used to fine-tune models with methods that require some steps between optimization.
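As an illustration, the trainer can be combined with standard TrainingArguments and the truncation options above. The output directory and hyperparameter values below are arbitrary, and model_name is assumed to be defined as in the earlier snippets:

from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import IterativeSFTTrainer

model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Arbitrary example values; adjust to your setup.
training_args = TrainingArguments(
    output_dir="./iterative-sft",
    per_device_train_batch_size=4,
    learning_rate=1e-5,
)

trainer = IterativeSFTTrainer(
    model=model,
    args=training_args,
    tokenizer=tokenizer,
    max_length=512,               # inputs longer than this are truncated
    truncation_mode="keep_end",   # keep the end of over-long sequences
)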
step(
    input_ids: typing.Optional[typing.List[torch.LongTensor]] = None,
    attention_mask: typing.Optional[typing.List[torch.LongTensor]] = None,
    labels: typing.Optional[typing.List[torch.LongTensor]] = None,
    texts: typing.Optional[typing.List[str]] = None,
    texts_labels: typing.Optional[typing.List[str]] = None,
) → dict[str, Any]
Parameters

- input_ids (List[torch.LongTensor], optional): List of tensors containing the input_ids (if not provided, texts will be used).
- attention_mask (List[torch.LongTensor], optional): List of tensors containing the attention_mask.
- labels (List[torch.LongTensor], optional): List of tensors containing the labels (if set to None, will default to input_ids).
- texts (List[str], optional): List of strings containing the text input (if not provided, input_ids will directly be used).
- texts_labels (List[str], optional): List of strings containing the text labels (if set to None, will default to texts).

Returns

dict[str, Any]: A summary of the training statistics.

Run an optimization step given a list of input_ids, attention_mask, and labels, or a list of texts and texts_labels.
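For example, you can inspect or log the returned statistics after each call. The exact keys depend on your configuration, so treat the printout below as illustrative only; texts is assumed to be a list of training strings:

stats = trainer.step(texts=texts)
print(stats)  # e.g. loss and learning-rate statistics for this step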