PeftModel is the base model class for specifying the base Transformer model and configuration to apply a PEFT method to. The base PeftModel
contains methods for loading and saving models from the Hub, and supports the PromptEncoder for prompt learning.
( model: PreTrainedModel peft_config: PeftConfig adapter_name: str = 'default' )
Parameters

model (PreTrainedModel) — The base transformer model used for Peft.
peft_config (PeftConfig) — The configuration of the Peft model.
adapter_name (str) — The name of the adapter, defaults to "default".

Base model encompassing various Peft methods.
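Example (a minimal sketch: the gpt2 checkpoint and the LoRA hyperparameters below are illustrative choices, not requirements):

>>> from transformers import AutoModelForCausalLM
>>> from peft import PeftModel, LoraConfig

>>> peft_config = LoraConfig(task_type="CAUSAL_LM", r=8, lora_alpha=32, lora_dropout=0.1)
>>> base_model = AutoModelForCausalLM.from_pretrained("gpt2")
>>> peft_model = PeftModel(base_model, peft_config, adapter_name="default")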
Attributes:

modules_to_save (list of str) — The list of sub-module names to save when saving the model.
prompt_tokens (torch.Tensor) — The virtual prompt tokens used for Peft if using PromptLearningConfig.
transformer_backbone_name (str) — The name of the transformer backbone in the base model if using PromptLearningConfig.
word_embeddings (torch.nn.Embedding) — The word embeddings of the transformer backbone in the base model if using PromptLearningConfig.

create_or_update_model_card

Updates or creates the model card to include information about peft, such as adding the peft library tag.

disable_adapter

Disables the adapter module.

forward

Forward pass of the model.
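For instance, disable_adapter can be used as a context manager to run a forward pass with the adapter temporarily disabled (a sketch; peft_model and inputs are assumed to exist):

>>> with peft_model.disable_adapter():
...     base_output = peft_model(**inputs)

from_pretrained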
( model: PreTrainedModel model_id: Union[str, os.PathLike] adapter_name: str = 'default' is_trainable: bool = False config: Optional[PeftConfig] = None **kwargs: Any )
Parameters

model (torch.nn.Module) — The model to be adapted. The model should be initialized with the from_pretrained method from the 🤗 Transformers library.
model_id (str or os.PathLike) — The name of the PEFT configuration to use. Can be either:
- A string, the model id of a PEFT configuration hosted inside a model repo on the Hugging Face Hub.
- A path to a directory containing a PEFT configuration file saved using the save_pretrained method (./my_peft_config_directory/).
adapter_name (str, optional, defaults to "default") — The name of the adapter to be loaded. This is useful for loading multiple adapters.
is_trainable (bool, optional, defaults to False) — Whether the adapter should be trainable or not. If False, the adapter will be frozen and can only be used for inference.
config (PeftConfig, optional) — The configuration object to use instead of an automatically loaded configuration. This configuration object is mutually exclusive with model_id and kwargs. This is useful when the configuration is already loaded before calling from_pretrained.
kwargs (optional) — Additional keyword arguments passed along to the specific PEFT configuration class.
Instantiate a PEFT model from a pretrained model and loaded PEFT weights.
Note that the passed model may be modified in place.
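Example (a sketch; "my-user/my-lora-adapter" is a hypothetical adapter repo id, and any local adapter directory works as well):

>>> from transformers import AutoModelForCausalLM
>>> from peft import PeftModel

>>> base_model = AutoModelForCausalLM.from_pretrained("gpt2")
>>> peft_model = PeftModel.from_pretrained(base_model, "my-user/my-lora-adapter", is_trainable=False)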
get_base_model

Returns the base model.

get_nb_trainable_parameters

Returns the number of trainable parameters and the number of all parameters in the model.
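A quick sketch of reading these counts programmatically (peft_model is assumed to exist):

>>> trainable_params, all_params = peft_model.get_nb_trainable_parameters()
>>> print(f"trainable: {trainable_params} || all: {all_params}")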
get_prompt

Returns the virtual prompts to use for Peft. Only applicable when peft_config.peft_type != PeftType.LORA.

get_prompt_embedding_to_save

Returns the prompt embedding to save when saving the model. Only applicable when peft_config.peft_type != PeftType.LORA.

print_trainable_parameters

Prints the number of trainable parameters in the model.
save_pretrained

( save_directory: str safe_serialization: bool = False selected_adapters: Optional[List[str]] = None **kwargs: Any )
This function saves the adapter model and the adapter configuration files to a directory, so that it can be
reloaded using the PeftModel.from_pretrained() class method, and also used by the PeftModel.push_to_hub()
method.
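Example (a sketch; ./my_peft_model is an arbitrary local path):

>>> peft_model.save_pretrained("./my_peft_model")
>>> # later, reload the adapter on top of the same base model
>>> peft_model = PeftModel.from_pretrained(base_model, "./my_peft_model")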
set_adapter

Sets the active adapter.
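For example, after loading a second adapter with load_adapter (the repo id below is hypothetical):

>>> peft_model.load_adapter("my-user/another-adapter", adapter_name="other")
>>> peft_model.set_adapter("other")  # "other" becomes the active adapter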
PeftModelForSequenceClassification

A PeftModel for sequence classification tasks.
( model peft_config: PeftConfig adapter_name = 'default' )
Parameters

model (PreTrainedModel) — Base transformer model.
peft_config (PeftConfig) — Peft config.

Peft model for sequence classification tasks.

Attributes:

config (PretrainedConfig) — The configuration object of the base model.
cls_layer_name (str) — The name of the classification layer.

Example:
>>> from transformers import AutoModelForSequenceClassification
>>> from peft import PeftModelForSequenceClassification, get_peft_config
>>> config = {
... "peft_type": "PREFIX_TUNING",
... "task_type": "SEQ_CLS",
... "inference_mode": False,
... "num_virtual_tokens": 20,
... "token_dim": 768,
... "num_transformer_submodules": 1,
... "num_attention_heads": 12,
... "num_layers": 12,
... "encoder_hidden_size": 768,
... "prefix_projection": False,
... "postprocess_past_key_value_function": None,
... }
>>> peft_config = get_peft_config(config)
>>> model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased")
>>> peft_model = PeftModelForSequenceClassification(model, peft_config)
>>> peft_model.print_trainable_parameters()
trainable params: 370178 || all params: 108680450 || trainable%: 0.3406113979101117
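A possible follow-up inference step (a sketch; the tokenizer setup is an assumption):

>>> from transformers import AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
>>> inputs = tokenizer("This movie was great!", return_tensors="pt")
>>> logits = peft_model(**inputs).logits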
PeftModelForTokenClassification

A PeftModel for token classification tasks.
( model peft_config: PeftConfig = None adapter_name = 'default' )
Parameters

model (PreTrainedModel) — Base transformer model.
peft_config (PeftConfig) — Peft config.

Peft model for token classification tasks.

Attributes:

config (PretrainedConfig) — The configuration object of the base model.
cls_layer_name (str) — The name of the classification layer.

Example:
>>> from transformers import AutoModelForTokenClassification
>>> from peft import PeftModelForTokenClassification, get_peft_config
>>> config = {
... "peft_type": "PREFIX_TUNING",
... "task_type": "TOKEN_CLS",
... "inference_mode": False,
... "num_virtual_tokens": 20,
... "token_dim": 768,
... "num_transformer_submodules": 1,
... "num_attention_heads": 12,
... "num_layers": 12,
... "encoder_hidden_size": 768,
... "prefix_projection": False,
... "postprocess_past_key_value_function": None,
... }
>>> peft_config = get_peft_config(config)
>>> model = AutoModelForTokenClassification.from_pretrained("bert-base-cased")
>>> peft_model = PeftModelForTokenClassification(model, peft_config)
>>> peft_model.print_trainable_parameters()
trainable params: 370178 || all params: 108680450 || trainable%: 0.3406113979101117
PeftModelForCausalLM

A PeftModel for causal language modeling.
( model peft_config: PeftConfig adapter_name = 'default' )
Parameters

model (PreTrainedModel) — Base transformer model.
peft_config (PeftConfig) — Peft config.

Peft model for causal language modeling.
Example:
>>> from transformers import AutoModelForCausalLM
>>> from peft import PeftModelForCausalLM, get_peft_config
>>> config = {
... "peft_type": "PREFIX_TUNING",
... "task_type": "CAUSAL_LM",
... "inference_mode": False,
... "num_virtual_tokens": 20,
... "token_dim": 1280,
... "num_transformer_submodules": 1,
... "num_attention_heads": 20,
... "num_layers": 36,
... "encoder_hidden_size": 1280,
... "prefix_projection": False,
... "postprocess_past_key_value_function": None,
... }
>>> peft_config = get_peft_config(config)
>>> model = AutoModelForCausalLM.from_pretrained("gpt2-large")
>>> peft_model = PeftModelForCausalLM(model, peft_config)
>>> peft_model.print_trainable_parameters()
trainable params: 1843200 || all params: 775873280 || trainable%: 0.23756456724479544
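A possible next step is generation with the adapted model (a sketch; the tokenizer setup and prompt are assumptions):

>>> from transformers import AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("gpt2-large")
>>> inputs = tokenizer("Hello, my name is", return_tensors="pt")
>>> outputs = peft_model.generate(**inputs, max_new_tokens=20)
>>> print(tokenizer.decode(outputs[0], skip_special_tokens=True))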
PeftModelForSeq2SeqLM

A PeftModel for sequence-to-sequence language modeling.
( model peft_config: PeftConfig adapter_name = 'default' )
Parameters

model (PreTrainedModel) — Base transformer model.
peft_config (PeftConfig) — Peft config.

Peft model for sequence-to-sequence language modeling.
Example:
>>> from transformers import AutoModelForSeq2SeqLM
>>> from peft import PeftModelForSeq2SeqLM, get_peft_config
>>> config = {
... "peft_type": "LORA",
... "task_type": "SEQ_2_SEQ_LM",
... "inference_mode": False,
... "r": 8,
... "target_modules": ["q", "v"],
... "lora_alpha": 32,
... "lora_dropout": 0.1,
... "fan_in_fan_out": False,
... "enable_lora": None,
... "bias": "none",
... }
>>> peft_config = get_peft_config(config)
>>> model = AutoModelForSeq2SeqLM.from_pretrained("t5-base")
>>> peft_model = PeftModelForSeq2SeqLM(model, peft_config)
>>> peft_model.print_trainable_parameters()
trainable params: 884736 || all params: 223843584 || trainable%: 0.3952474242013566
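Generation works the same way for the seq2seq case (a sketch; the prompt and tokenizer setup are assumptions):

>>> from transformers import AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("t5-base")
>>> inputs = tokenizer("translate English to German: Hello, how are you?", return_tensors="pt")
>>> outputs = peft_model.generate(**inputs, max_new_tokens=20)
>>> print(tokenizer.decode(outputs[0], skip_special_tokens=True))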
PeftModelForQuestionAnswering

A PeftModel for question answering.
( model peft_config: PeftConfig = None adapter_name = 'default' )
Parameters

model (PreTrainedModel) — Base transformer model.
peft_config (PeftConfig) — Peft config.

Peft model for extractive question answering.

Attributes:

config (PretrainedConfig) — The configuration object of the base model.
cls_layer_name (str) — The name of the classification layer.

Example:
>>> from transformers import AutoModelForQuestionAnswering
>>> from peft import PeftModelForQuestionAnswering, get_peft_config
>>> config = {
... "peft_type": "LORA",
... "task_type": "QUESTION_ANS",
... "inference_mode": False,
... "r": 16,
... "target_modules": ["query", "value"],
... "lora_alpha": 32,
... "lora_dropout": 0.05,
... "fan_in_fan_out": False,
... "bias": "none",
... }
>>> peft_config = get_peft_config(config)
>>> model = AutoModelForQuestionAnswering.from_pretrained("bert-base-cased")
>>> peft_model = PeftModelForQuestionAnswering(model, peft_config)
>>> peft_model.print_trainable_parameters()
trainable params: 592900 || all params: 108312580 || trainable%: 0.5473971721475013
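A possible extractive-QA inference step (a sketch; the tokenizer setup and the greedy span decoding are assumptions):

>>> from transformers import AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
>>> inputs = tokenizer("Who wrote Hamlet?", "Hamlet was written by William Shakespeare.", return_tensors="pt")
>>> outputs = peft_model(**inputs)
>>> start, end = outputs.start_logits.argmax(), outputs.end_logits.argmax()
>>> print(tokenizer.decode(inputs["input_ids"][0][start : end + 1]))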
PeftModelForFeatureExtraction

A PeftModel for extracting features/embeddings from transformer models.
( model peft_config: PeftConfig = None adapter_name = 'default' )
Parameters

model (PreTrainedModel) — Base transformer model.
peft_config (PeftConfig) — Peft config.

Peft model for extracting features/embeddings from transformer models.

Example:
>>> from transformers import AutoModel
>>> from peft import PeftModelForFeatureExtraction, get_peft_config
>>> config = {
... "peft_type": "LORA",
... "task_type": "FEATURE_EXTRACTION",
... "inference_mode": False,
... "r": 16,
... "target_modules": ["query", "value"],
... "lora_alpha": 32,
... "lora_dropout": 0.05,
... "fan_in_fan_out": False,
... "bias": "none",
... }
>>> peft_config = get_peft_config(config)
>>> model = AutoModel.from_pretrained("bert-base-cased")
>>> peft_model = PeftModelForFeatureExtraction(model, peft_config)
>>> peft_model.print_trainable_parameters()
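A possible way to pull embeddings out of the adapted model (a sketch; the tokenizer setup is an assumption):

>>> from transformers import AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
>>> inputs = tokenizer("Hello world", return_tensors="pt")
>>> embeddings = peft_model(**inputs).last_hidden_state  # (batch, seq_len, hidden_size)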