The DiffLlama model was proposed in Differential Transformer by Kazuma Matsumoto. It combines the Llama model with the Differential Transformer's attention mechanism.
The abstract from the paper is the following:
Transformer tends to overallocate attention to irrelevant context. In this work, we introduce Diff Transformer, which amplifies attention to the relevant context while canceling noise. Specifically, the differential attention mechanism calculates attention scores as the difference between two separate softmax attention maps. The subtraction cancels noise, promoting the emergence of sparse attention patterns. Experimental results on language modeling show that Diff Transformer outperforms Transformer in various settings of scaling up model size and training tokens. More intriguingly, it offers notable advantages in practical applications, such as long-context modeling, key information retrieval, hallucination mitigation, in-context learning, and reduction of activation outliers. By being less distracted by irrelevant context, Diff Transformer can mitigate hallucination in question answering and text summarization. For in-context learning, Diff Transformer not only enhances accuracy but is also more robust to order permutation, which was considered as a chronic robustness issue. The results position Diff Transformer as a highly effective and promising architecture to advance large language models.
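The mechanism from the abstract can be illustrated with a minimal PyTorch sketch (an illustration only, not the actual modeling_diffllama implementation; here lam stands in for the learnable lambda whose initialization is controlled by the lambda_std_dev configuration field below):

>>> import torch
>>> import torch.nn.functional as F
>>> def diff_attention(q1, k1, q2, k2, v, lam):
...     # q1/q2, k1/k2: (batch, num_heads, seq_len, head_dim); v: (batch, num_heads, seq_len, head_dim)
...     scale = q1.shape[-1] ** -0.5
...     a1 = F.softmax(q1 @ k1.transpose(-1, -2) * scale, dim=-1)  # first softmax attention map
...     a2 = F.softmax(q2 @ k2.transpose(-1, -2) * scale, dim=-1)  # second softmax attention map
...     return (a1 - lam * a2) @ v  # the subtraction cancels attention noise shared by both maps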
The hyperparameters of this model are the same as those of the Llama model.
( vocab_size = 32000 hidden_size = 2048 intermediate_size = 8192 num_hidden_layers = 16 num_attention_heads = 32 num_key_value_heads = None hidden_act = 'silu' max_position_embeddings = 2048 initializer_range = 0.02 rms_norm_eps = 1e-05 use_cache = True pad_token_id = None bos_token_id = 1 eos_token_id = 2 tie_word_embeddings = False rope_theta = 10000.0 rope_scaling = None attention_bias = False attention_dropout = 0.0 lambda_std_dev = 0.1 head_dim = None **kwargs )
Parameters
vocab_size (int, optional, defaults to 32000) — Vocabulary size of the DiffLlama model. Defines the number of different tokens that can be represented by the input_ids passed when calling DiffLlamaModel.
hidden_size (int, optional, defaults to 2048) — Dimension of the hidden representations.
intermediate_size (int, optional, defaults to 8192) — Dimension of the MLP representations.
num_hidden_layers (int, optional, defaults to 16) — Number of hidden layers in the Transformer decoder.
num_attention_heads (int, optional, defaults to 32) — Number of attention heads for each attention layer in the Transformer decoder.
num_key_value_heads (int, optional) — The number of key_value heads that should be used to implement Grouped Query Attention. If num_key_value_heads=num_attention_heads, the model will use Multi Head Attention (MHA); if num_key_value_heads=1, the model will use Multi Query Attention (MQA); otherwise GQA is used. When converting a multi-head checkpoint to a GQA checkpoint, each group key and value head should be constructed by meanpooling all the original heads within that group. For more details, check out this paper. If it is not specified, it will default to num_attention_heads.
hidden_act (str or function, optional, defaults to "silu") — The non-linear activation function (function or string) in the decoder.
max_position_embeddings (int, optional, defaults to 2048) — The maximum sequence length that this model might ever be used with.
initializer_range (float, optional, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
rms_norm_eps (float, optional, defaults to 1e-05) — The epsilon used by the rms normalization layers.
use_cache (bool, optional, defaults to True) — Whether or not the model should return the last key/values attentions (not used by all models). Only relevant if config.is_decoder=True.
pad_token_id (int, optional) — Padding token id.
bos_token_id (int, optional, defaults to 1) — Beginning of stream token id.
eos_token_id (int, optional, defaults to 2) — End of stream token id.
tie_word_embeddings (bool, optional, defaults to False) — Whether to tie weight embeddings.
rope_theta (float, optional, defaults to 10000.0) — The base period of the RoPE embeddings.
rope_scaling (Dict, optional) — Dictionary containing the scaling configuration for the RoPE embeddings. NOTE: if you apply a new rope type and you expect the model to work on a longer max_position_embeddings, we recommend you update this value accordingly. Expected contents:
    rope_type (str): The sub-variant of RoPE to use. Can be one of ['default', 'linear', 'dynamic', 'yarn', 'longrope', 'diffllama3'], with 'default' being the original RoPE implementation.
    factor (float, optional): Used with all rope types except 'default'. The scaling factor to apply to the RoPE embeddings. In most scaling types, a factor of x will enable the model to handle sequences of length x * original maximum pre-trained length.
    original_max_position_embeddings (int, optional): Used with 'dynamic', 'longrope' and 'diffllama3'. The original max position embeddings used during pretraining.
    attention_factor (float, optional): Used with 'yarn' and 'longrope'. The scaling factor to be applied on the attention computation. If unspecified, it defaults to the value recommended by the implementation, using the factor field to infer the suggested value.
    beta_fast (float, optional): Only used with 'yarn'. Parameter to set the boundary for extrapolation (only) in the linear ramp function. If unspecified, it defaults to 32.
    beta_slow (float, optional): Only used with 'yarn'. Parameter to set the boundary for interpolation (only) in the linear ramp function. If unspecified, it defaults to 1.
    short_factor (List[float], optional): Only used with 'longrope'. The scaling factor to be applied to short contexts (shorter than original_max_position_embeddings). Must be a list of numbers with the same length as the hidden size divided by the number of attention heads divided by 2.
    long_factor (List[float], optional): Only used with 'longrope'. The scaling factor to be applied to long contexts (longer than original_max_position_embeddings). Must be a list of numbers with the same length as the hidden size divided by the number of attention heads divided by 2.
    low_freq_factor (float, optional): Only used with 'diffllama3'. Scaling factor applied to the low frequency components of the RoPE.
    high_freq_factor (float, optional): Only used with 'diffllama3'. Scaling factor applied to the high frequency components of the RoPE.
attention_bias (bool, optional, defaults to False) — Whether to use a bias in the query, key, value and output projection layers during self-attention.
attention_dropout (float, optional, defaults to 0.0) — The dropout ratio for the attention probabilities.
lambda_std_dev (float, optional, defaults to 0.1) — The standard deviation for the initialization of the parameter lambda in the attention layer.
head_dim (int, optional) — The attention head dimension. If None, it will default to hidden_size // num_heads.
This is the configuration class to store the configuration of a DiffLlamaModel. It is used to instantiate a DiffLlama model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of kajuma/DiffLlama-0.3B-handcut.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information.
>>> from transformers import DiffLlamaModel, DiffLlamaConfig
>>> # Initializing a DiffLlama configuration
>>> configuration = DiffLlamaConfig()
>>> # Initializing a model from the configuration
>>> model = DiffLlamaModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
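The defaults can be overridden when constructing the configuration; the values below are purely illustrative and do not correspond to a released checkpoint:

>>> # A smaller, illustrative configuration using grouped-query attention
>>> configuration = DiffLlamaConfig(
...     hidden_size=1024,
...     num_hidden_layers=8,
...     num_attention_heads=16,
...     num_key_value_heads=4,  # GQA: 4 key/value heads shared across 16 query heads
...     lambda_std_dev=0.1,  # std of the initialization of the differential-attention lambda
... )
>>> model = DiffLlamaModel(configuration)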
( config: DiffLlamaConfig )
Parameters
config (DiffLlamaConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
The bare DiffLlama Model outputting raw hidden-states without any specific head on top. This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.)
This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
Transformer decoder consisting of config.num_hidden_layers layers. Each layer is a DiffLlamaDecoderLayer
( input_ids: LongTensor = None attention_mask: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.LongTensor] = None past_key_values: typing.Optional[transformers.cache_utils.Cache] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None use_cache: typing.Optional[bool] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None cache_position: typing.Optional[torch.LongTensor] = None **flash_attn_kwargs: typing_extensions.Unpack[transformers.modeling_flash_attention_utils.FlashAttentionKwargs] )
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details.
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details. If past_key_values is used, optionally only the last input_ids have to be input (see past_key_values). If you want to change padding behavior, you should read modeling_opt._prepare_decoder_attention_mask and modify it to your needs. See diagram 1 in the paper for more information on the default strategy.
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.n_positions - 1].
past_key_values (Cache or tuple(tuple(torch.FloatTensor)), optional) — Pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used to speed up sequential decoding. This typically consists of the past_key_values returned by the model at a previous stage of decoding, when use_cache=True or config.use_cache=True. Two formats are allowed: a Cache instance, or the legacy cache format, a tuple of tuple(torch.FloatTensor) of length config.n_layers, with each inner tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head). The model will output the same cache format that is fed as input. If no past_key_values are passed, the legacy cache format will be returned. If past_key_values are used, the user can optionally input only the last input_ids (those that don't have their past key value states given to this model) of shape (batch_size, 1) instead of all input_ids of shape (batch_size, sequence_length).
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model's internal embedding lookup matrix.
use_cache (bool, optional) — If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values).
output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
cache_position (torch.LongTensor of shape (sequence_length), optional) — Indices depicting the position of the input sequence tokens in the sequence. Contrary to position_ids, this tensor is not affected by padding. It is used to update the cache in the correct position and to infer the complete sequence length.
The DiffLlamaModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.
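A short usage sketch for the bare model, assuming the kajuma/DiffLlama-0.3B-handcut checkpoint referenced above:

>>> from transformers import AutoTokenizer, DiffLlamaModel
>>> import torch
>>> tokenizer = AutoTokenizer.from_pretrained("kajuma/DiffLlama-0.3B-handcut")
>>> model = DiffLlamaModel.from_pretrained("kajuma/DiffLlama-0.3B-handcut")
>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
>>> with torch.no_grad():
...     outputs = model(**inputs)
>>> last_hidden_state = outputs.last_hidden_state  # (batch_size, sequence_length, hidden_size)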
( input_ids: LongTensor = None attention_mask: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.LongTensor] = None past_key_values: typing.Union[transformers.cache_utils.Cache, typing.List[torch.FloatTensor], NoneType] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None labels: typing.Optional[torch.LongTensor] = None use_cache: typing.Optional[bool] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None cache_position: typing.Optional[torch.LongTensor] = None num_logits_to_keep: int = 0 **kwargs: typing_extensions.Unpack[transformers.models.diffllama.modeling_diffllama.KwargsForCausalLM] ) → transformers.modeling_outputs.CausalLMOutputWithPast or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details.
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details. If past_key_values is used, optionally only the last input_ids have to be input (see past_key_values). If you want to change padding behavior, you should read modeling_opt._prepare_decoder_attention_mask and modify it to your needs. See diagram 1 in the paper for more information on the default strategy.
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.n_positions - 1].
past_key_values (Cache or tuple(tuple(torch.FloatTensor)), optional) — Pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used to speed up sequential decoding. This typically consists of the past_key_values returned by the model at a previous stage of decoding, when use_cache=True or config.use_cache=True. Two formats are allowed: a Cache instance, or the legacy cache format, a tuple of tuple(torch.FloatTensor) of length config.n_layers, with each inner tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head). The model will output the same cache format that is fed as input. If no past_key_values are passed, the legacy cache format will be returned. If past_key_values are used, the user can optionally input only the last input_ids (those that don't have their past key value states given to this model) of shape (batch_size, 1) instead of all input_ids of shape (batch_size, sequence_length).
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model's internal embedding lookup matrix.
use_cache (bool, optional) — If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values).
output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
cache_position (torch.LongTensor of shape (sequence_length), optional) — Indices depicting the position of the input sequence tokens in the sequence. Contrary to position_ids, this tensor is not affected by padding. It is used to update the cache in the correct position and to infer the complete sequence length.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) — Labels for computing the masked language modeling loss. Indices should either be in [0, ..., config.vocab_size] or -100 (see input_ids docstring). Tokens with indices set to -100 are ignored (masked); the loss is only computed for the tokens with labels in [0, ..., config.vocab_size].
num_logits_to_keep (int, optional) — Calculate logits for the last num_logits_to_keep tokens. If 0, calculate logits for all input_ids (special case). Only last token logits are needed for generation, and calculating them only for that token can save memory, which becomes quite significant for long sequences or large vocabulary size.
Returns
transformers.modeling_outputs.CausalLMOutputWithPast or tuple(torch.FloatTensor)
A transformers.modeling_outputs.CausalLMOutputWithPast or a tuple of
torch.FloatTensor
(if return_dict=False
is passed or when config.return_dict=False
) comprising various
elements depending on the configuration (DiffLlamaConfig) and inputs.
loss (torch.FloatTensor
of shape (1,)
, optional, returned when labels
is provided) — Language modeling loss (for next-token prediction).
logits (torch.FloatTensor
of shape (batch_size, sequence_length, config.vocab_size)
) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
past_key_values (tuple(tuple(torch.FloatTensor))
, optional, returned when use_cache=True
is passed or when config.use_cache=True
) — Tuple of tuple(torch.FloatTensor)
of length config.n_layers
, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks) that can be used (see
past_key_values
input) to speed up sequential decoding.
hidden_states (tuple(torch.FloatTensor)
, optional, returned when output_hidden_states=True
is passed or when config.output_hidden_states=True
) — Tuple of torch.FloatTensor
(one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size)
.
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor)
, optional, returned when output_attentions=True
is passed or when config.output_attentions=True
) — Tuple of torch.FloatTensor
(one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length)
.
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The DiffLlamaForCausalLM forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.
Example:
>>> from transformers import AutoTokenizer, DiffLlamaForCausalLM
>>> model = DiffLlamaForCausalLM.from_pretrained("kajuma/DiffLlama-0.3B-handcut")
>>> tokenizer = AutoTokenizer.from_pretrained("kajuma/DiffLlama-0.3B-handcut")
>>> prompt = "What is your favorite condiment?"
>>> inputs = tokenizer(prompt, return_tensors="pt")
>>> # Generate
>>> generate_ids = model.generate(inputs.input_ids, max_length=30)
>>> tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
"What is your favorite condiment?"
( config )
Parameters
config (DiffLlamaConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
The DiffLlama Model transformer with a sequence classification head on top (linear layer).
DiffLlamaForSequenceClassification uses the last token in order to do the classification, as other causal models (e.g. GPT-2) do.
Since it does classification on the last token, it needs to know the position of the last token. If a
pad_token_id
is defined in the configuration, it finds the last token that is not a padding token in each row. If
no pad_token_id
is defined, it simply takes the last value in each row of the batch. Since it cannot guess the
padding tokens when inputs_embeds
are passed instead of input_ids
, it does the same (take the last value in
each row of the batch).
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.)
This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
( input_ids: typing.Optional[torch.LongTensor] = None attention_mask: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.LongTensor] = None past_key_values: typing.Union[transformers.cache_utils.Cache, typing.List[torch.FloatTensor], NoneType] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None labels: typing.Optional[torch.LongTensor] = None use_cache: typing.Optional[bool] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None )
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details.
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details. If past_key_values is used, optionally only the last input_ids have to be input (see past_key_values). If you want to change padding behavior, you should read modeling_opt._prepare_decoder_attention_mask and modify it to your needs. See diagram 1 in the paper for more information on the default strategy.
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.n_positions - 1].
past_key_values (Cache or tuple(tuple(torch.FloatTensor)), optional) — Pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used to speed up sequential decoding. This typically consists of the past_key_values returned by the model at a previous stage of decoding, when use_cache=True or config.use_cache=True. Two formats are allowed: a Cache instance, or the legacy cache format, a tuple of tuple(torch.FloatTensor) of length config.n_layers, with each inner tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head). The model will output the same cache format that is fed as input. If no past_key_values are passed, the legacy cache format will be returned. If past_key_values are used, the user can optionally input only the last input_ids (those that don't have their past key value states given to this model) of shape (batch_size, 1) instead of all input_ids of shape (batch_size, sequence_length).
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model's internal embedding lookup matrix.
use_cache (bool, optional) — If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values).
output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
cache_position (torch.LongTensor of shape (sequence_length), optional) — Indices depicting the position of the input sequence tokens in the sequence. Contrary to position_ids, this tensor is not affected by padding. It is used to update the cache in the correct position and to infer the complete sequence length.
labels (torch.LongTensor of shape (batch_size,), optional) — Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss); if config.num_labels > 1 a classification loss is computed (Cross-Entropy).
The DiffLlamaForSequenceClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.
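A minimal usage sketch, assuming the kajuma/DiffLlama-0.3B-handcut checkpoint; its classification head is newly initialized, so the prediction is not meaningful until the model is fine-tuned:

>>> from transformers import AutoTokenizer, DiffLlamaForSequenceClassification
>>> import torch
>>> tokenizer = AutoTokenizer.from_pretrained("kajuma/DiffLlama-0.3B-handcut")
>>> model = DiffLlamaForSequenceClassification.from_pretrained("kajuma/DiffLlama-0.3B-handcut", num_labels=2)
>>> inputs = tokenizer("This movie was great!", return_tensors="pt")
>>> with torch.no_grad():
...     logits = model(**inputs).logits
>>> predicted_class_id = logits.argmax(-1).item()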
( config )
Parameters
config (DiffLlamaConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
The DiffLlama Model transformer with a span classification head on top for extractive question-answering tasks like
SQuAD (a linear layer on top of the hidden-states output to compute span start logits
and span end logits
).
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.)
This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
( input_ids: typing.Optional[torch.LongTensor] = None attention_mask: typing.Optional[torch.FloatTensor] = None position_ids: typing.Optional[torch.LongTensor] = None past_key_values: typing.Union[transformers.cache_utils.Cache, typing.List[torch.FloatTensor], NoneType] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None start_positions: typing.Optional[torch.LongTensor] = None end_positions: typing.Optional[torch.LongTensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None **kwargs )
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details.
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details. If past_key_values is used, optionally only the last input_ids have to be input (see past_key_values). If you want to change padding behavior, you should read modeling_opt._prepare_decoder_attention_mask and modify it to your needs. See diagram 1 in the paper for more information on the default strategy.
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.n_positions - 1].
past_key_values (Cache or tuple(tuple(torch.FloatTensor)), optional) — Pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used to speed up sequential decoding. This typically consists of the past_key_values returned by the model at a previous stage of decoding, when use_cache=True or config.use_cache=True. Two formats are allowed: a Cache instance, or the legacy cache format, a tuple of tuple(torch.FloatTensor) of length config.n_layers, with each inner tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head). The model will output the same cache format that is fed as input. If no past_key_values are passed, the legacy cache format will be returned. If past_key_values are used, the user can optionally input only the last input_ids (those that don't have their past key value states given to this model) of shape (batch_size, 1) instead of all input_ids of shape (batch_size, sequence_length).
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model's internal embedding lookup matrix.
use_cache (bool, optional) — If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values).
output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
cache_position (torch.LongTensor of shape (sequence_length), optional) — Indices depicting the position of the input sequence tokens in the sequence. Contrary to position_ids, this tensor is not affected by padding. It is used to update the cache in the correct position and to infer the complete sequence length.
start_positions (torch.LongTensor of shape (batch_size,), optional) — Labels for the position (index) of the start of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence are not taken into account for computing the loss.
end_positions (torch.LongTensor of shape (batch_size,), optional) — Labels for the position (index) of the end of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence are not taken into account for computing the loss.
The DiffLlamaForQuestionAnswering forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.
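A minimal usage sketch, again assuming the kajuma/DiffLlama-0.3B-handcut checkpoint; the span head is newly initialized, so the extracted span is not meaningful without fine-tuning:

>>> from transformers import AutoTokenizer, DiffLlamaForQuestionAnswering
>>> import torch
>>> tokenizer = AutoTokenizer.from_pretrained("kajuma/DiffLlama-0.3B-handcut")
>>> model = DiffLlamaForQuestionAnswering.from_pretrained("kajuma/DiffLlama-0.3B-handcut")
>>> question, context = "Where is HuggingFace based?", "HuggingFace is a company based in Paris and New York."
>>> inputs = tokenizer(question, context, return_tensors="pt")
>>> with torch.no_grad():
...     outputs = model(**inputs)
>>> start = outputs.start_logits.argmax()
>>> end = outputs.end_logits.argmax()
>>> answer = tokenizer.decode(inputs.input_ids[0, start : end + 1], skip_special_tokens=True)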
( config )
Parameters
config (DiffLlamaConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
The DiffLlama Model transformer with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for Named-Entity-Recognition (NER) tasks.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.)
This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
( input_ids: typing.Optional[torch.LongTensor] = None attention_mask: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.LongTensor] = None past_key_values: typing.Optional[typing.List[torch.FloatTensor]] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None labels: typing.Optional[torch.LongTensor] = None use_cache: typing.Optional[bool] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.TokenClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details.
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details. If past_key_values is used, optionally only the last input_ids have to be input (see past_key_values). If you want to change padding behavior, you should read modeling_opt._prepare_decoder_attention_mask and modify it to your needs. See diagram 1 in the paper for more information on the default strategy.
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.n_positions - 1].
past_key_values (Cache or tuple(tuple(torch.FloatTensor)), optional) — Pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used to speed up sequential decoding. This typically consists of the past_key_values returned by the model at a previous stage of decoding, when use_cache=True or config.use_cache=True. Two formats are allowed: a Cache instance, or the legacy cache format, a tuple of tuple(torch.FloatTensor) of length config.n_layers, with each inner tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head). The model will output the same cache format that is fed as input. If no past_key_values are passed, the legacy cache format will be returned. If past_key_values are used, the user can optionally input only the last input_ids (those that don't have their past key value states given to this model) of shape (batch_size, 1) instead of all input_ids of shape (batch_size, sequence_length).
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model's internal embedding lookup matrix.
use_cache (bool, optional) — If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values).
output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
cache_position (torch.LongTensor of shape (sequence_length), optional) — Indices depicting the position of the input sequence tokens in the sequence. Contrary to position_ids, this tensor is not affected by padding. It is used to update the cache in the correct position and to infer the complete sequence length.
labels (torch.LongTensor of shape (batch_size,), optional) — Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss); if config.num_labels > 1 a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_outputs.TokenClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.TokenClassifierOutput or a tuple of
torch.FloatTensor
(if return_dict=False
is passed or when config.return_dict=False
) comprising various
elements depending on the configuration (DiffLlamaConfig) and inputs.
loss (torch.FloatTensor
of shape (1,)
, optional, returned when labels
is provided) — Classification loss.
logits (torch.FloatTensor
of shape (batch_size, sequence_length, config.num_labels)
) — Classification scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor)
, optional, returned when output_hidden_states=True
is passed or when config.output_hidden_states=True
) — Tuple of torch.FloatTensor
(one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size)
.
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor)
, optional, returned when output_attentions=True
is passed or when config.output_attentions=True
) — Tuple of torch.FloatTensor
(one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length)
.
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The DiffLlamaForTokenClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.
Example:
>>> from transformers import AutoTokenizer, DiffLlamaForTokenClassification
>>> import torch
>>> tokenizer = AutoTokenizer.from_pretrained("kajuma/DiffLlama-0.3B-handcut")
>>> model = DiffLlamaForTokenClassification.from_pretrained("kajuma/DiffLlama-0.3B-handcut")
>>> inputs = tokenizer(
... "HuggingFace is a company based in Paris and New York", add_special_tokens=False, return_tensors="pt"
... )
>>> with torch.no_grad():
... logits = model(**inputs).logits
>>> predicted_token_class_ids = logits.argmax(-1)
>>> # Note that tokens are classified rather then input words which means that
>>> # there might be more predicted token classes than words.
>>> # Multiple token classes might account for the same word
>>> predicted_tokens_classes = [model.config.id2label[t.item()] for t in predicted_token_class_ids[0]]
>>> labels = predicted_token_class_ids
>>> loss = model(**inputs, labels=labels).loss