This conceptual guide gives a brief overview of IA3, a parameter-efficient fine-tuning technique that is intended to improve over LoRA.
To make fine-tuning more efficient, IA3 (Infused Adapter by Inhibiting and Amplifying Inner Activations) rescales inner activations with learned vectors. These learned vectors are injected into the attention and feedforward modules of a typical transformer-based architecture. They are the only trainable parameters during fine-tuning, so the original weights remain frozen. Because IA3 learns vectors, rather than the low-rank updates to weight matrices that LoRA learns, the number of trainable parameters is much smaller.
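To make the rescaling concrete, here is a minimal sketch in PyTorch. It illustrates the idea only and is not the PEFT implementation; all dimensions and tensors are made up.

```python
import torch

d_model, d_ff = 16, 64

# The only trainable parameters: one vector per rescaled activation.
# They are initialized to ones so training starts from the unmodified model.
l_k = torch.nn.Parameter(torch.ones(d_model))  # rescales key activations
l_v = torch.nn.Parameter(torch.ones(d_model))  # rescales value activations
l_ff = torch.nn.Parameter(torch.ones(d_ff))    # rescales feedforward hidden activations

# Stand-ins for activations produced by the frozen pretrained weights.
keys = torch.randn(8, d_model)      # output of the key projection
values = torch.randn(8, d_model)    # output of the value projection
ff_hidden = torch.randn(8, d_ff)    # hidden activations inside the feedforward block

# IA3 simply rescales these activations elementwise.
keys = l_k * keys
values = l_v * values
ff_hidden = l_ff * ff_hidden
```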
Being similar to LoRA, IA3 carries many of the same advantages:

- IA3 makes fine-tuning more efficient by drastically reducing the number of trainable parameters (for T0, an IA3 model has only about 0.01% trainable parameters, while even LoRA has more than 0.1%).
- The original pre-trained weights are kept frozen, which means you can build multiple lightweight and portable IA3 models on top of them for various downstream tasks.
- The performance of models fine-tuned with IA3 is comparable to that of fully fine-tuned models.
- IA3 adds no inference latency because the adapter weights can be merged with the base model.
In principle, IA3 can be applied to any subset of weight matrices in a neural network to reduce the number of trainable parameters. Following the authors’ implementation, IA3 weights are added to the key, value and feedforward layers of a Transformer model. Given the target layers for injecting IA3 parameters, the number of trainable parameters can be determined based on the size of the weight matrices.
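As a back-of-the-envelope example, the trainable parameter count per transformer block is simply the sum of the sizes of the injected vectors. The sizes below are hypothetical, loosely modeled on a mid-sized decoder:

```python
# Hypothetical sizes for illustration only.
d_model, d_ff, n_layers = 4096, 11008, 32

# One vector for keys, one for values, one for the feedforward hidden layer.
per_block = d_model + d_model + d_ff  # 19,200 trainable parameters per block
total = per_block * n_layers          # 614,400 trainable parameters overall
print(per_block, total)
```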
As with other methods supported by PEFT, to fine-tune a model using IA3, you need to:

1. Instantiate a base model.
2. Create a configuration (`IA3Config`) where you define IA3-specific parameters.
3. Wrap the base model with `get_peft_model()` to get a trainable `PeftModel`.
4. Train the `PeftModel` as you normally would train the base model.
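A minimal sketch of these four steps might look as follows. The model name is only an example, and this assumes PEFT can infer default target modules for the chosen architecture; otherwise, pass them explicitly as shown further below.

```python
from transformers import AutoModelForSeq2SeqLM
from peft import IA3Config, TaskType, get_peft_model

model = AutoModelForSeq2SeqLM.from_pretrained("bigscience/mt0-small")  # 1. base model

peft_config = IA3Config(task_type=TaskType.SEQ_2_SEQ_LM)  # 2. IA3-specific configuration
model = get_peft_model(model, peft_config)                # 3. wrap into a trainable PeftModel
model.print_trainable_parameters()

# 4. train `model` as usual, e.g. with the transformers Trainer or a custom loop
```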
`IA3Config` allows you to control how IA3 is applied to the base model through the following parameters:
- `target_modules`: The modules (for example, attention blocks) to apply the IA3 vectors to.
- `feedforward_modules`: The list of modules to be treated as feedforward layers in `target_modules`. While learned vectors are multiplied with the output activation for attention blocks, the vectors are multiplied with the input for classic feedforward layers. Note that `feedforward_modules` must be a subset of `target_modules`.
- `modules_to_save`: List of modules, apart from IA3 layers, to be set as trainable and saved in the final checkpoint. These typically include the model's custom head that is randomly initialized for the fine-tuning task.
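For example, a configuration that spells out these parameters could look like the sketch below. The module and head names are hypothetical; look up the actual layer names of your model (for instance via `model.named_modules()`) before targeting them.

```python
from peft import IA3Config

peft_config = IA3Config(
    task_type="SEQ_CLS",
    target_modules=["k_proj", "v_proj", "down_proj"],  # hypothetical names: where to inject IA3 vectors
    feedforward_modules=["down_proj"],                 # must be a subset of target_modules
    modules_to_save=["classifier"],                    # hypothetical head, trained and saved alongside IA3 layers
)
```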