LoRA is a fast and lightweight training method that inserts and trains a significantly smaller number of parameters instead of all the model parameters. This produces a smaller file (~100 MB) and makes it easier to quickly train a model to learn a new concept. LoRA weights are typically loaded into the UNet, the text encoder, or both. There are two classes for loading LoRA weights:

- LoraLoaderMixin provides functions for loading and unloading, fusing and unfusing, enabling and disabling, and otherwise managing LoRA weights. This class can be used with any model.
- StableDiffusionXLLoraLoaderMixin is a Stable Diffusion XL (SDXL) version of the LoraLoaderMixin class for loading and saving LoRA weights. It can only be used with the SDXL model.

To learn more about how to load LoRA weights, see the LoRA loading guide.
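A typical workflow loads a base pipeline and then attaches LoRA weights to it (a minimal sketch; the checkpoints are the ones used in the examples below, and the prompt is illustrative):

import torch
from diffusers import DiffusionPipeline

# Load a base pipeline, then add LoRA weights to its UNet and text encoder.
pipeline = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipeline.load_lora_weights("nerijs/pixel-art-xl", weight_name="pixel-art-xl.safetensors")
image = pipeline("a corgi, pixel art style").images[0]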
LoraLoaderMixin

Load LoRA layers into UNet2DConditionModel and CLIPTextModel.
delete_adapters( adapter_names: Union )

Deletes the LoRA layers of the given adapter names from the UNet and text encoder.
disable_lora_for_text_encoder( text_encoder: Optional = None )

Disables the LoRA layers for the text encoder.
enable_lora_for_text_encoder( text_encoder: Optional = None )

Enables the LoRA layers for the text encoder.
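For instance, to isolate the effect of the UNet LoRA, you can temporarily switch the text encoder LoRA off (a sketch assuming a pipeline with LoRA weights already loaded, as in the fuse_lora() example below):

# Disable the text encoder LoRA layers, run inference, then re-enable them.
pipeline.disable_lora_for_text_encoder()
image_unet_lora_only = pipeline("a corgi, pixel art style").images[0]
pipeline.enable_lora_for_text_encoder()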
fuse_lora( fuse_unet: bool = True fuse_text_encoder: bool = True lora_scale: float = 1.0 safe_fusing: bool = False adapter_names: Optional = None )

Parameters

fuse_unet (bool, defaults to True) — Whether to fuse the UNet LoRA parameters.
fuse_text_encoder (bool, defaults to True) — Whether to fuse the text encoder LoRA parameters. If the text encoder wasn't monkey-patched with the LoRA parameters, this has no effect.
lora_scale (float, defaults to 1.0) — Controls how much to influence the outputs with the LoRA parameters.
safe_fusing (bool, defaults to False) — Whether to check fused weights for NaN values before fusing, and to skip fusing if any values are NaN.
adapter_names (List[str], optional) — Adapter names to be used for fusing. If nothing is passed, all active adapters will be fused.

Fuses the LoRA parameters into the original parameters of the corresponding blocks.

This is an experimental API.

Example:
from diffusers import DiffusionPipeline
import torch
pipeline = DiffusionPipeline.from_pretrained(
"stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipeline.load_lora_weights("nerijs/pixel-art-xl", weight_name="pixel-art-xl.safetensors", adapter_name="pixel")
pipeline.fuse_lora(lora_scale=0.7)
get_active_adapters( )

Gets the list of the current active adapters.
get_list_adapters( )

Gets the current list of all available adapters in the pipeline.
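A short sketch of inspecting loaded adapters (reusing the pipeline and checkpoint from the fuse_lora() example above; the return values shown in the comments are illustrative):

pipeline.load_lora_weights("nerijs/pixel-art-xl", weight_name="pixel-art-xl.safetensors", adapter_name="pixel")
pipeline.get_active_adapters()  # e.g. ["pixel"]
pipeline.get_list_adapters()  # e.g. {"unet": ["pixel"], "text_encoder": ["pixel"]}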
load_lora_into_text_encoder( state_dict network_alphas text_encoder prefix = None lora_scale = 1.0 low_cpu_mem_usage = None adapter_name = None _pipeline = None )

Parameters

state_dict (dict) — A standard state dict containing the LoRA layer parameters. The keys should be prefixed with an additional text_encoder to distinguish them from the UNet LoRA layers.
network_alphas (Dict[str, float]) — See LoRALinearLayer for more details.
text_encoder (CLIPTextModel) — The text encoder model to load the LoRA layers into.
prefix (str) — Expected prefix of the text_encoder in the state_dict.
lora_scale (float) — How much to scale the output of the LoRA linear layer before it is added to the output of the regular layer.
low_cpu_mem_usage (bool, optional, defaults to True if torch version >= 1.9.0 else False) — Speed up model loading by only loading the pretrained weights and not initializing the weights. This also tries to not use more than 1x model size in CPU memory (including peak memory) while loading the model. Only supported for PyTorch >= 1.9.0. If you are using an older version of PyTorch, setting this argument to True will raise an error.
adapter_name (str, optional) — Adapter name to be used for referencing the loaded adapter model. If not specified, it will use default_{i} where i is the total number of adapters being loaded.

This will load the LoRA layers specified in state_dict into text_encoder.
load_lora_into_transformer( state_dict network_alphas transformer low_cpu_mem_usage = None adapter_name = None _pipeline = None )

Parameters

state_dict (dict) — A standard state dict containing the LoRA layer parameters. The keys can either be indexed directly into the transformer or prefixed with an additional transformer, which can be used to distinguish them from the text encoder LoRA layers.
network_alphas (Dict[str, float]) — See LoRALinearLayer for more details.
transformer (UNet2DConditionModel) — The transformer model to load the LoRA layers into.
low_cpu_mem_usage (bool, optional, defaults to True if torch version >= 1.9.0 else False) — Speed up model loading by only loading the pretrained weights and not initializing the weights. This also tries to not use more than 1x model size in CPU memory (including peak memory) while loading the model. Only supported for PyTorch >= 1.9.0. If you are using an older version of PyTorch, setting this argument to True will raise an error.
adapter_name (str, optional) — Adapter name to be used for referencing the loaded adapter model. If not specified, it will use default_{i} where i is the total number of adapters being loaded.

This will load the LoRA layers specified in state_dict into transformer.
load_lora_into_unet( state_dict network_alphas unet low_cpu_mem_usage = None adapter_name = None _pipeline = None )

Parameters

state_dict (dict) — A standard state dict containing the LoRA layer parameters. The keys can either be indexed directly into the UNet or prefixed with an additional unet, which can be used to distinguish them from the text encoder LoRA layers.
network_alphas (Dict[str, float]) — See LoRALinearLayer for more details.
unet (UNet2DConditionModel) — The UNet model to load the LoRA layers into.
low_cpu_mem_usage (bool, optional, defaults to True if torch version >= 1.9.0 else False) — Speed up model loading by only loading the pretrained weights and not initializing the weights. This also tries to not use more than 1x model size in CPU memory (including peak memory) while loading the model. Only supported for PyTorch >= 1.9.0. If you are using an older version of PyTorch, setting this argument to True will raise an error.
adapter_name (str, optional) — Adapter name to be used for referencing the loaded adapter model. If not specified, it will use default_{i} where i is the total number of adapters being loaded.

This will load the LoRA layers specified in state_dict into unet.
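A minimal sketch of the lower-level flow these classmethods support: lora_state_dict() fetches the checkpoint, and the load_lora_into_* methods distribute it across the pipeline components (reusing the pipeline and checkpoint from the fuse_lora() example above):

from diffusers.loaders import LoraLoaderMixin

# Fetch the LoRA state dict and network alphas from the Hub...
state_dict, network_alphas = LoraLoaderMixin.lora_state_dict(
    "nerijs/pixel-art-xl", weight_name="pixel-art-xl.safetensors"
)
# ...then load the layers into the individual pipeline components.
LoraLoaderMixin.load_lora_into_unet(state_dict, network_alphas, unet=pipeline.unet)
LoraLoaderMixin.load_lora_into_text_encoder(state_dict, network_alphas, text_encoder=pipeline.text_encoder)

In practice, load_lora_weights() performs these steps for you.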
load_lora_weights( pretrained_model_name_or_path_or_dict: Union adapter_name = None **kwargs )

Parameters

pretrained_model_name_or_path_or_dict (str or os.PathLike or dict) — See lora_state_dict().
kwargs (dict, optional) — See lora_state_dict().
adapter_name (str, optional) — Adapter name to be used for referencing the loaded adapter model. If not specified, it will use default_{i} where i is the total number of adapters being loaded.

Load LoRA weights specified in pretrained_model_name_or_path_or_dict into self.unet and self.text_encoder.

All kwargs are forwarded to self.lora_state_dict.

See lora_state_dict() for more details on how the state dict is loaded.

See load_lora_into_unet() for more details on how the state dict is loaded into self.unet.

See load_lora_into_text_encoder() for more details on how the state dict is loaded into self.text_encoder.
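For example, two LoRAs can be loaded under distinct adapter names and managed independently (a sketch on an SDXL pipeline like the one in the fuse_lora() example above; the second repository is illustrative):

pipeline.load_lora_weights("nerijs/pixel-art-xl", weight_name="pixel-art-xl.safetensors", adapter_name="pixel")
pipeline.load_lora_weights("ostris/crayon_style_lora_sdxl", weight_name="crayons_v1_sdxl.safetensors", adapter_name="crayon")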
lora_state_dict( pretrained_model_name_or_path_or_dict: Union **kwargs )

Parameters

pretrained_model_name_or_path_or_dict (str or os.PathLike or dict) — Can be either:
- A string, the model id (for example google/ddpm-celebahq-256) of a pretrained model hosted on the Hub.
- A path to a directory (for example ./my_model_directory) containing the model weights saved with ModelMixin.save_pretrained().
- A torch state dict.
cache_dir (Union[str, os.PathLike], optional) — Path to a directory where a downloaded pretrained model configuration is cached if the standard cache is not used.
force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
resume_download (bool, optional, defaults to False) — Whether or not to resume downloading the model weights and configuration files. If set to False, any incompletely downloaded files are deleted.
proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
local_files_only (bool, optional, defaults to False) — Whether to only load local model weights and configuration files or not. If set to True, the model won't be downloaded from the Hub.
token (str or bool, optional) — The token to use as HTTP bearer authorization for remote files. If True, the token generated from diffusers-cli login (stored in ~/.huggingface) is used.
revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier allowed by Git.
subfolder (str, optional, defaults to "") — The subfolder location of a model file within a larger model repository on the Hub or locally.
low_cpu_mem_usage (bool, optional, defaults to True if torch version >= 1.9.0 else False) — Speed up model loading by only loading the pretrained weights and not initializing the weights. This also tries to not use more than 1x model size in CPU memory (including peak memory) while loading the model. Only supported for PyTorch >= 1.9.0. If you are using an older version of PyTorch, setting this argument to True will raise an error.
mirror (str, optional) — Mirror source to resolve accessibility issues if you're downloading a model in China. We do not guarantee the timeliness or safety of the source, and you should refer to the mirror site for more information.

Return the state dict for the LoRA weights and the network alphas.

We support loading A1111 formatted LoRA checkpoints in a limited capacity.

This function is experimental and might change in the future.
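A small sketch of inspecting a checkpoint before loading it (same checkpoint as the examples above):

from diffusers.loaders import LoraLoaderMixin

state_dict, network_alphas = LoraLoaderMixin.lora_state_dict(
    "nerijs/pixel-art-xl", weight_name="pixel-art-xl.safetensors"
)
print(len(state_dict))  # number of LoRA weight tensors
print(next(iter(state_dict)))  # keys are prefixed with "unet." or "text_encoder."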
save_lora_weights( save_directory: Union unet_lora_layers: Dict = None text_encoder_lora_layers: Dict = None transformer_lora_layers: Dict = None is_main_process: bool = True weight_name: str = None save_function: Callable = None safe_serialization: bool = True )

Parameters

save_directory (str or os.PathLike) — Directory to save LoRA parameters to. Will be created if it doesn't exist.
unet_lora_layers (Dict[str, torch.nn.Module] or Dict[str, torch.Tensor]) — State dict of the LoRA layers corresponding to the unet.
text_encoder_lora_layers (Dict[str, torch.nn.Module] or Dict[str, torch.Tensor]) — State dict of the LoRA layers corresponding to the text_encoder. Must explicitly pass the text encoder LoRA state dict because it comes from 🤗 Transformers.
is_main_process (bool, optional, defaults to True) — Whether the process calling this is the main process or not. Useful during distributed training when you need to call this function on all processes. In this case, set is_main_process=True only on the main process to avoid race conditions.
save_function (Callable) — The function to use to save the state dictionary. Useful during distributed training when you need to replace torch.save with another method. Can be configured with the environment variable DIFFUSERS_SAVE_MODE.
safe_serialization (bool, optional, defaults to True) — Whether to save the model using safetensors or the traditional PyTorch way with pickle.

Save the LoRA parameters corresponding to the UNet and text encoder.
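A sketch of saving trained LoRA layers; the unet_lora_layers dict here is a hypothetical stand-in for whatever your training loop actually produces:

import torch
from diffusers import StableDiffusionPipeline

# Hypothetical LoRA state dict; in practice this comes from your training setup.
unet_lora_layers = {"mid_block.attentions.0.to_q.lora.up.weight": torch.zeros(4, 4)}

StableDiffusionPipeline.save_lora_weights(
    save_directory="my-lora",
    unet_lora_layers=unet_lora_layers,
    safe_serialization=True,
)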
set_adapters_for_text_encoder( adapter_names: Union text_encoder: Optional = None text_encoder_weights: Union = None )

Parameters

adapter_names (List[str] or str) — The names of the adapters to use.
text_encoder (torch.nn.Module, optional) — The text encoder module to set the adapter layers for. If None, it will try to get the text_encoder attribute.
text_encoder_weights (List[float], optional) — The weights to use for the text encoder. If None, the weights are set to 1.0 for all the adapters.

Sets the adapter layers for the text encoder.
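For example, with the "pixel" adapter loaded earlier:

# Apply only the "pixel" adapter in the text encoder, at half strength.
pipeline.set_adapters_for_text_encoder(["pixel"], text_encoder_weights=[0.5])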
set_lora_device( adapter_names: List device: Union )

Moves the LoRAs listed in adapter_names to a target device. Useful for offloading a LoRA to the CPU when you want to load multiple adapters and free some GPU memory.
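For example, with the "pixel" and "crayon" adapters loaded earlier, you can keep only the one you're currently using on the GPU:

pipeline.set_lora_device(adapter_names=["crayon"], device="cpu")  # free GPU memory
pipeline.set_lora_device(adapter_names=["pixel"], device="cuda")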
unfuse_lora( unfuse_unet: bool = True unfuse_text_encoder: bool = True )

Reverses the effect of pipe.fuse_lora().

This is an experimental API.
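For example, to undo the fusion from the fuse_lora() example above:

pipeline.fuse_lora(lora_scale=0.7)
# ... run inference with the fused weights ...
pipeline.unfuse_lora()  # restores the original UNet and text encoder weights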
unload_lora_weights( )

Unloads the LoRA parameters.
StableDiffusionXLLoraLoaderMixin

This class overrides LoraLoaderMixin with LoRA loading/saving code that's specific to SDXL.
load_lora_weights( pretrained_model_name_or_path_or_dict: Union adapter_name: Optional = None **kwargs )

Parameters

pretrained_model_name_or_path_or_dict (str or os.PathLike or dict) — See lora_state_dict().
adapter_name (str, optional) — Adapter name to be used for referencing the loaded adapter model. If not specified, it will use default_{i} where i is the total number of adapters being loaded.
kwargs (dict, optional) — See lora_state_dict().

Load LoRA weights specified in pretrained_model_name_or_path_or_dict into self.unet and self.text_encoder.

All kwargs are forwarded to self.lora_state_dict.

See lora_state_dict() for more details on how the state dict is loaded.

See load_lora_into_unet() for more details on how the state dict is loaded into self.unet.

See load_lora_into_text_encoder() for more details on how the state dict is loaded into self.text_encoder.
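A short sketch with an SDXL pipeline; StableDiffusionXLPipeline inherits this mixin, so the same call also covers SDXL's second text encoder:

import torch
from diffusers import StableDiffusionXLPipeline

pipeline = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipeline.load_lora_weights("nerijs/pixel-art-xl", weight_name="pixel-art-xl.safetensors")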