Context-aware Prompt Tuning: Advancing In-Context Learning with Adversarial Methods (CPT) combines In-Context Learning (ICL) with Prompt Tuning (PT) and adversarial optimization to improve few-shot learning by refining context embeddings. CPT optimizes only context tokens, which minimizes overfitting and enhances performance on classification tasks.
The abstract from the paper is:
Traditional fine-tuning is effective but computationally intensive, as it requires updating billions of parameters. CPT, inspired by ICL, PT, and adversarial attacks, refines context embeddings in a parameter-efficient manner. By optimizing context tokens and applying a controlled gradient descent, CPT achieves superior accuracy across various few-shot classification tasks, showing significant improvement over existing methods such as LoRA, PT, and ICL.
(
    task_type: typing.Union[str, peft.utils.peft_types.TaskType, NoneType] = None,
    peft_type: typing.Union[str, peft.utils.peft_types.PeftType, NoneType] = None,
    auto_mapping: typing.Optional[dict] = None,
    base_model_name_or_path: typing.Optional[str] = None,
    revision: typing.Optional[str] = None,
    inference_mode: bool = False,
    num_virtual_tokens: int = None,
    token_dim: int = None,
    num_transformer_submodules: typing.Optional[int] = None,
    num_attention_heads: typing.Optional[int] = None,
    num_layers: typing.Optional[int] = None,
    cpt_token_ids: typing.Optional[list[int]] = None,
    cpt_mask: typing.Optional[list[int]] = None,
    cpt_tokens_type_mask: typing.Optional[list[int]] = None,
    opt_weighted_loss_type: typing.Optional[typing.Literal['none', 'decay']] = 'none',
    opt_loss_decay_factor: typing.Optional[float] = 1.0,
    opt_projection_epsilon: typing.Optional[float] = 0.1,
    opt_projection_format_epsilon: typing.Optional[float] = 0.1,
    tokenizer_name_or_path: typing.Optional[str] = None,
)
CPT Configuration class extending PeftConfig for Context-aware Prompt Tuning (CPT).
This class introduces additional parameters required for CPT, such as: cpt_token_ids, cpt_mask, and cpt_tokens_type_mask, which define the context tokens and their types; opt_weighted_loss_type and opt_loss_decay_factor, which control the optional exponentially decayed loss weighting; and opt_projection_epsilon and opt_projection_format_epsilon, which bound the norm of the context-embedding updates.
For more details, see the paper: https://arxiv.org/abs/2410.17222
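As a configuration sketch (the token ids, mask values, and hyperparameters below are illustrative assumptions, not values from the paper), a CPTConfig might be built like this:

```python
from peft import CPTConfig

# Hypothetical tokenized few-shot context; in practice these ids come from
# running the model's tokenizer over the in-context examples.
context_ids = [3100, 318, 257, 922]
context_mask = [1, 1, 1, 1]
type_mask = [1, 1, 1, 1]

config = CPTConfig(
    task_type="CAUSAL_LM",
    cpt_token_ids=context_ids,
    cpt_mask=context_mask,
    cpt_tokens_type_mask=type_mask,
    opt_weighted_loss_type="decay",     # exponentially decay per-example loss weights
    opt_loss_decay_factor=0.95,         # illustrative value
    opt_projection_epsilon=0.1,         # norm bound for context-token deltas
    opt_projection_format_epsilon=0.1,  # norm bound for format-token deltas
)
```

The resulting config is then passed to `get_peft_model` together with a causal language model, as with other PEFT methods.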
CPTEmbedding is a custom embedding layer designed for Context-aware Prompt Tuning (CPT) in PEFT. It initializes embeddings, applies prompt-specific projections, and computes loss using label masks.
( base_model_output, labels, cpt_type_mask, config ) → ModelOutput

Computes the loss for CPT models with optional exponential decay.

Returns

ModelOutput — The base model output with computed loss.
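The decay option can be illustrated with a small numeric sketch (the function name and the exact weighting scheme here are assumptions for illustration; see the PEFT source for the authoritative implementation). Each example's loss is scaled by a factor that decays geometrically with its distance from the end of the context:

```python
def decayed_loss(per_example_losses, decay_factor=0.95):
    """Weight per-example losses with exponential decay — a sketch of the
    'decay' weighted-loss option (the exact scheme in CPT may differ)."""
    n = len(per_example_losses)
    # Later examples (closer to the query) receive higher weight.
    weights = [decay_factor ** (n - 1 - i) for i in range(n)]
    total = sum(w * l for w, l in zip(weights, per_example_losses))
    return total / sum(weights)

print(decayed_loss([1.0, 1.0, 1.0]))  # equal losses -> weighted mean is 1.0
```

With decay_factor = 1.0 this reduces to a plain mean, matching the config default of opt_loss_decay_factor = 1.0.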
( indices ) → torch.Tensor
Computes the prompt embeddings and applies delta adjustments.
Applies epsilon-based projection to the delta embeddings to control their norm.
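A minimal sketch of such a projection (function and parameter names are assumptions; CPT operates on embedding tensors, not Python lists): if a delta's L2 norm exceeds epsilon, it is rescaled back onto the epsilon ball, PGD-style, so the learned context stays close to its initialization.

```python
import math

def project_delta(delta, epsilon=0.1):
    """Project a delta vector into an L2 ball of radius epsilon — a sketch
    of epsilon-based projection for controlling update norms."""
    norm = math.sqrt(sum(d * d for d in delta))
    if norm <= epsilon:
        return list(delta)  # already within the ball; leave unchanged
    scale = epsilon / norm
    return [d * scale for d in delta]

print(project_delta([3.0, 4.0], epsilon=0.1))  # norm 5.0 rescaled to norm 0.1
```

The two config values opt_projection_epsilon and opt_projection_format_epsilon suggest separate norm bounds for different token groups.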
Sets up a backward hook to selectively update token gradients based on the CPT token type mask.
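Conceptually, the hook zeroes gradients for token positions whose type should stay frozen. The sketch below works on flat lists for clarity (in CPT the hook acts on a gradient tensor, and which types are trainable is an assumption here):

```python
def mask_gradients(grads, token_type_mask, trainable_types={1}):
    """Zero gradients for tokens whose type is not trainable — a sketch of
    a backward hook that freezes selected tokens during CPT training."""
    return [g if t in trainable_types else 0.0
            for g, t in zip(grads, token_type_mask)]

# the middle token (type 0, e.g. a format token) keeps no gradient
print(mask_gradients([0.5, -0.2, 0.9], [1, 0, 1]))  # → [0.5, 0.0, 0.9]
```

This is what lets CPT update only the context tokens while the rest of the prompt remains fixed.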