Low-Rank Hadamard Product (LoHa) is similar to LoRA, except it approximates the large weight matrix with more low-rank matrices and combines them with the Hadamard product. This method is even more parameter-efficient than LoRA and achieves comparable performance.
The abstract from the paper is:
In this work, we propose a communication-efficient parameterization, FedPara, for federated learning (FL) to overcome the burdens on frequent model uploads and downloads. Our method re-parameterizes weight parameters of layers using low-rank weights followed by the Hadamard product. Compared to the conventional low-rank parameterization, our FedPara method is not restricted to low-rank constraints, and thereby it has a far larger capacity. This property enables to achieve comparable performance while requiring 3 to 10 times lower communication costs than the model with the original layers, which is not achievable by the traditional low-rank methods. The efficiency of our method can be further improved by combining with other efficient FL optimizers. In addition, we extend our method to a personalized FL application, pFedPara, which separates parameters into global and local ones. We show that pFedPara outperforms competing personalized FL methods with more than three times fewer parameters.
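To make the reparameterization concrete, here is a minimal sketch (not the PEFT implementation; the dimensions and the factor names B1/A1/B2/A2 are illustrative). It shows why the Hadamard product of two rank-r factorizations is not restricted to rank r:

>>> import torch

>>> d_out, d_in, r = 64, 64, 4
>>> B1, A1 = torch.randn(d_out, r), torch.randn(r, d_in)  # first low-rank pair
>>> B2, A2 = torch.randn(d_out, r), torch.randn(r, d_in)  # second low-rank pair

>>> # W is the element-wise (Hadamard) product of two rank-r matrices
>>> W = (B1 @ A1) * (B2 @ A2)

>>> # rank(X ∘ Y) can reach rank(X) * rank(Y), so W generically has rank
>>> # r**2 = 16 while storing only 2 * r * (d_out + d_in) parameters
>>> torch.linalg.matrix_rank(W)  # generically tensor(16), i.e. r**2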
class peft.LoHaConfig

( peft_type: Union = None, auto_mapping: Optional = None, base_model_name_or_path: Optional = None, revision: Optional = None, task_type: Union = None, inference_mode: bool = False, rank_pattern: Optional[dict] = <factory>, alpha_pattern: Optional[dict] = <factory>, r: int = 8, alpha: int = 8, rank_dropout: float = 0.0, module_dropout: float = 0.0, use_effective_conv2d: bool = False, target_modules: Union = None, init_weights: bool = True, layers_to_transform: Union = None, layers_pattern: Optional = None, modules_to_save: Optional = None )
Parameters

r (int) — LoHa rank.
alpha (int) — The alpha parameter for LoHa scaling.
rank_dropout (float) — The dropout probability for rank dimension during training.
module_dropout (float) — The dropout probability for disabling LoHa modules during training.
use_effective_conv2d (bool) — Use parameter effective decomposition for Conv2d with ksize > 1 ("Proposition 3" from the FedPara paper).
target_modules (Optional[Union[List[str], str]]) — The names of the modules to apply the adapter to. If this is specified, only the modules with the specified names will be replaced. When passing a string, a regex match will be performed. When passing a list of strings, either an exact match will be performed or it is checked if the name of the module ends with any of the passed strings. If this is specified as "all-linear", then all linear/Conv1D modules are chosen, excluding the output layer. If this is not specified, modules will be chosen according to the model architecture. If the architecture is not known, an error will be raised — in this case, you should specify the target modules manually.
init_weights (bool) — Whether to perform initialization of adapter weights. This defaults to True; passing False is discouraged.
layers_to_transform (Union[List[int], int]) — The layer indices to transform. If a list of ints is passed, the adapter will be applied to the layer indices specified in this list. If a single integer is passed, the transformations will be applied to the layer at this index.
layers_pattern (str) — The layer pattern name, used only if layers_to_transform is different from None.
rank_pattern (dict) — The mapping from layer names or regexp expression to ranks which are different from the default rank specified by r.
alpha_pattern (dict) — The mapping from layer names or regexp expression to alphas which are different from the default alpha specified by alpha.
modules_to_save (Optional[List[str]]) — List of modules apart from adapter layers to be set as trainable and saved in the final checkpoint.

This is the configuration class to store the configuration of a LoHaModel.
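A minimal usage sketch of this config (assumptions: a T5-style base model and "q"/"v" as the attention projection names; adjust target_modules for your architecture):

>>> from transformers import AutoModelForSeq2SeqLM
>>> from peft import LoHaConfig, get_peft_model

>>> config = LoHaConfig(
...     r=8,
...     alpha=8,
...     target_modules=["q", "v"],  # assumed module names for T5; not universal
...     module_dropout=0.0,
...     init_weights=True,
... )
>>> model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")
>>> model = get_peft_model(model, config)
>>> model.print_trainable_parameters()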
class peft.LoHaModel

( model, config, adapter_name ) → torch.nn.Module
Parameters

model (torch.nn.Module) — The model to which the adapter tuner layers will be attached.
config (LoHaConfig) — The configuration of the LoHa model.
adapter_name (str) — The name of the adapter, defaults to "default".

Returns

torch.nn.Module

The LoHa model.
Creates a Low-Rank Hadamard Product model from a pretrained model. The method is partially described in https://arxiv.org/abs/2108.06098. The current implementation heavily borrows from https://github.com/KohakuBlueleaf/LyCORIS/blob/eb460098187f752a5d66406d3affade6f0a07ece/lycoris/modules/loha.py
Example:
>>> from diffusers import StableDiffusionPipeline
>>> from peft import LoHaModel, LoHaConfig
>>> config_te = LoHaConfig(
... r=8,
... alpha=32,
... target_modules=["k_proj", "q_proj", "v_proj", "out_proj", "fc1", "fc2"],
... rank_dropout=0.0,
... module_dropout=0.0,
... init_weights=True,
... )
>>> config_unet = LoHaConfig(
... r=8,
... alpha=32,
... target_modules=[
... "proj_in",
... "proj_out",
... "to_k",
... "to_q",
... "to_v",
... "to_out.0",
... "ff.net.0.proj",
... "ff.net.2",
... ],
... rank_dropout=0.0,
... module_dropout=0.0,
... init_weights=True,
... use_effective_conv2d=True,
... )
>>> model = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
>>> model.text_encoder = LoHaModel(model.text_encoder, config_te, "default")
>>> model.unet = LoHaModel(model.unet, config_unet, "default")
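The wrapped pipeline can then be used as usual. A hypothetical continuation (device placement and prompt are illustrative):

>>> model = model.to("cuda")
>>> image = model("A photo of a corgi on a beach", num_inference_steps=25).images[0]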
Attributes:

model (~torch.nn.Module) — The model to be adapted.