The OLMoE model was proposed in "OLMoE: Open Mixture-of-Experts Language Models" by Niklas Muennighoff, Luca Soldaini, Dirk Groeneveld, Kyle Lo, Jacob Morrison, Sewon Min, Weijia Shi, Pete Walsh, Oyvind Tafjord, Nathan Lambert, Yuling Gu, Shane Arora, Akshita Bhagia, Dustin Schwenk, David Wadden, Alexander Wettig, Binyuan Hui, Tim Dettmers, Douwe Kiela, Ali Farhadi, Noah A. Smith, Pang Wei Koh, Amanpreet Singh, Hannaneh Hajishirzi.
OLMoE is a series of Open Language Models using sparse Mixture-of-Experts designed to enable the science of language models. We release all code, checkpoints, logs, and details involved in training these models.
The abstract from the paper is the following:
We introduce OLMoE, a fully open, state-of-the-art language model leveraging sparse Mixture-of-Experts (MoE). OLMoE-1B-7B has 7 billion (B) parameters but uses only 1B per input token. We pretrain it on 5 trillion tokens and further adapt it to create OLMoE-1B-7B-Instruct. Our models outperform all available models with similar active parameters, even surpassing larger ones like Llama2-13B-Chat and DeepSeekMoE-16B. We present various experiments on MoE training, analyze routing in our model showing high specialization, and open-source all aspects of our work: model weights, training data, code, and logs.
This model was contributed by Muennighoff. The original code can be found here.
class transformers.OlmoeConfig

( vocab_size = 50304 hidden_size = 2048 intermediate_size = 2048 num_hidden_layers = 16 num_attention_heads = 16 num_key_value_heads = None hidden_act = 'silu' max_position_embeddings = 4096 initializer_range = 0.02 rms_norm_eps = 1e-05 use_cache = True pad_token_id = 1 bos_token_id = None eos_token_id = 50279 tie_word_embeddings = False rope_theta = 10000.0 rope_scaling = None attention_bias = False attention_dropout = 0.0 clip_qkv = None num_experts_per_tok = 8 num_experts = 64 output_router_logits = False router_aux_loss_coef = 0.01 norm_topk_prob = False **kwargs )
Parameters
- vocab_size (int, optional, defaults to 50304) — Vocabulary size of the OLMoE model. Defines the number of different tokens that can be represented by the inputs_ids passed when calling OlmoeModel.
- hidden_size (int, optional, defaults to 2048) — Dimension of the hidden representations.
- intermediate_size (int, optional, defaults to 2048) — Dimension of the MLP representations.
- num_hidden_layers (int, optional, defaults to 16) — Number of hidden layers in the Transformer decoder.
- num_attention_heads (int, optional, defaults to 16) — Number of attention heads for each attention layer in the Transformer decoder.
- num_key_value_heads (int, optional) — The number of key/value heads used to implement Grouped Query Attention. If num_key_value_heads=num_attention_heads, the model will use Multi Head Attention (MHA); if num_key_value_heads=1, it will use Multi Query Attention (MQA); otherwise GQA is used. When converting a multi-head checkpoint to a GQA checkpoint, each group's key and value head should be constructed by mean-pooling all the original heads within that group (for more details, check out this paper; see also the mean-pooling sketch below). If not specified, defaults to num_attention_heads.
- hidden_act (str or function, optional, defaults to "silu") — The non-linear activation function (function or string) in the decoder.
- max_position_embeddings (int, optional, defaults to 4096) — The maximum sequence length that this model might ever be used with.
- initializer_range (float, optional, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
- rms_norm_eps (float, optional, defaults to 1e-05) — The epsilon used by the RMS normalization layers.
- use_cache (bool, optional, defaults to True) — Whether or not the model should return the last key/values attentions (not used by all models). Only relevant if config.is_decoder=True.
- pad_token_id (int, optional, defaults to 1) — Padding token id.
- bos_token_id (int, optional) — Beginning of stream token id.
- eos_token_id (int, optional, defaults to 50279) — End of stream token id.
- tie_word_embeddings (bool, optional, defaults to False) — Whether to tie the input and output word embeddings.
- rope_theta (float, optional, defaults to 10000.0) — The base period of the RoPE embeddings.
- rope_scaling (Dict, optional) — Dictionary containing the scaling configuration for the RoPE embeddings. Currently supports two scaling strategies: linear and dynamic. Their scaling factor must be a float greater than 1. The expected format is {"type": strategy name, "factor": scaling factor}. When using this flag, don't update max_position_embeddings to the expected new maximum. See the following thread for more information on how these scaling strategies behave: https://www.reddit.com/r/LocalLLaMA/comments/14mrgpr/dynamically_scaled_rope_further_increases/. This is an experimental feature, subject to breaking API changes in future versions.
- attention_bias (bool, optional, defaults to False) — Whether to use a bias in the query, key, value and output projection layers during self-attention.
- attention_dropout (float, optional, defaults to 0.0) — The dropout ratio for the attention probabilities.
- clip_qkv (float, optional) — If not None, elements of the query, key and value attention states are clipped so that their absolute value does not exceed this value.
- num_experts_per_tok (int, optional, defaults to 8) — Number of selected experts per token.
- num_experts (int, optional, defaults to 64) — Number of routed experts.
- output_router_logits (bool, optional, defaults to False) — Whether or not the router logits should be returned by the model. Enabling this will also allow the model to output the auxiliary loss, including the load-balancing loss and the router z-loss.
- router_aux_loss_coef (float, optional, defaults to 0.01) — The weighting factor of the auxiliary loss in the total loss.
- norm_topk_prob (bool, optional, defaults to False) — Whether to normalize the top-k routing probabilities.

This is the configuration class to store the configuration of an OlmoeModel. It is used to instantiate an OLMoE model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a configuration similar to that of allenai/OLMoE-1B-7B-0924.
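For illustration, the mean-pooling conversion described for num_key_value_heads could look like the minimal sketch below. The shapes and the k_proj tensor are hypothetical stand-ins rather than real checkpoint weights, and OLMoE itself defaults to num_key_value_heads = num_attention_heads (plain multi-head attention):

>>> import torch
>>> # Hypothetical conversion of a 16-head key projection into 4 key/value groups (GQA)
>>> num_heads, num_kv_heads, head_dim, hidden_size = 16, 4, 64, 1024
>>> k_proj = torch.randn(num_heads * head_dim, hidden_size)  # stand-in for an MHA key projection weight
>>> grouped = k_proj.view(num_kv_heads, num_heads // num_kv_heads, head_dim, hidden_size)
>>> k_proj_gqa = grouped.mean(dim=1).reshape(num_kv_heads * head_dim, hidden_size)  # mean-pool heads within each group
>>> k_proj_gqa.shape
torch.Size([256, 1024])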
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information.
>>> from transformers import OlmoeModel, OlmoeConfig
>>> # Initializing an OLMoE 7B A1B style configuration
>>> configuration = OlmoeConfig()
>>> # Initializing a model from the OLMoE 7B A1B style configuration
>>> model = OlmoeModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
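As a further, purely hypothetical sketch, the MoE-specific arguments can be overridden to build a small, randomly initialized variant for experimentation; the values below are illustrative and do not correspond to any released OLMoE checkpoint:

>>> from transformers import OlmoeConfig, OlmoeForCausalLM
>>> # Hypothetical small configuration: 4 layers, 8 experts, 2 experts routed per token
>>> small_config = OlmoeConfig(
...     num_hidden_layers=4,
...     num_experts=8,
...     num_experts_per_tok=2,
...     output_router_logits=True,  # also exposes the auxiliary load-balancing loss
... )
>>> small_model = OlmoeForCausalLM(small_config)  # randomly initialized weights, for experimentation only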
class transformers.OlmoeModel

( config: OlmoeConfig )

Parameters

- config (OlmoeConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
The bare Olmoe Model outputting raw hidden-states without any specific head on top. This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).

This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

Transformer decoder consisting of config.num_hidden_layers layers. Each layer is an OlmoeDecoderLayer.
forward

( input_ids: LongTensor = None attention_mask: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.LongTensor] = None past_key_values: typing.Union[transformers.cache_utils.Cache, typing.List[torch.FloatTensor], NoneType] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None use_cache: typing.Optional[bool] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None output_router_logits: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None cache_position: typing.Optional[torch.LongTensor] = None )
Parameters
- input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details.
- attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details. If past_key_values is used, optionally only the last input_ids have to be input (see past_key_values). If you want to change padding behavior, you should read modeling_opt._prepare_decoder_attention_mask and modify it to your needs. See diagram 1 in the paper for more information on the default strategy.
- position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.n_positions - 1].
- past_key_values (Cache or tuple(tuple(torch.FloatTensor)), optional) — Pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used to speed up sequential decoding. This typically consists in the past_key_values returned by the model at a previous stage of decoding, when use_cache=True or config.use_cache=True. Two formats are allowed: a Cache instance, or a tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head), also known as the legacy cache format. The model will output the same cache format that is fed as input. If no past_key_values are passed, the legacy cache format will be returned. If past_key_values are used, the user can optionally input only the last input_ids (those that don't have their past key value states given to this model) of shape (batch_size, 1) instead of all input_ids of shape (batch_size, sequence_length).
- inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model's internal embedding lookup matrix.
- use_cache (bool, optional) — If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values).
- output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
- output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
- output_router_logits (bool, optional) — Whether or not to return the logits of all the routers. They are useful for computing the router loss and should not be returned during inference.
- return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
- cache_position (torch.LongTensor of shape (sequence_length), optional) — Indices depicting the position of the input sequence tokens in the sequence. Contrarily to position_ids, this tensor is not affected by padding. It is used to update the cache in the correct position and to infer the complete sequence length.

The OlmoeModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
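For example, a minimal sketch of calling the bare model (assuming the allenai/OLMoE-1B-7B-0924 checkpoint referenced above; the bare model returns hidden states rather than language-modeling logits, and the prompt is illustrative):

>>> import torch
>>> from transformers import AutoTokenizer, OlmoeModel
>>> model = OlmoeModel.from_pretrained("allenai/OLMoE-1B-7B-0924")
>>> tokenizer = AutoTokenizer.from_pretrained("allenai/OLMoE-1B-7B-0924")
>>> inputs = tokenizer("OLMoE is a sparse Mixture-of-Experts language model.", return_tensors="pt")
>>> with torch.no_grad():
...     outputs = model(**inputs)
>>> last_hidden = outputs.last_hidden_state  # shape: (batch_size, sequence_length, hidden_size)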
class transformers.OlmoeForCausalLM

forward

( input_ids: LongTensor = None attention_mask: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.LongTensor] = None past_key_values: typing.Optional[typing.List[torch.FloatTensor]] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None labels: typing.Optional[torch.LongTensor] = None use_cache: typing.Optional[bool] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None output_router_logits: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None cache_position: typing.Optional[torch.LongTensor] = None num_logits_to_keep: int = 0 **loss_kwargs ) → transformers.modeling_outputs.MoeCausalLMOutputWithPast or tuple(torch.FloatTensor)
Parameters
- input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details.
- attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details. If past_key_values is used, optionally only the last input_ids have to be input (see past_key_values). If you want to change padding behavior, you should read modeling_opt._prepare_decoder_attention_mask and modify it to your needs. See diagram 1 in the paper for more information on the default strategy.
- position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.n_positions - 1].
- past_key_values (Cache or tuple(tuple(torch.FloatTensor)), optional) — Pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used to speed up sequential decoding. This typically consists in the past_key_values returned by the model at a previous stage of decoding, when use_cache=True or config.use_cache=True. Two formats are allowed: a Cache instance, or a tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head), also known as the legacy cache format. The model will output the same cache format that is fed as input. If no past_key_values are passed, the legacy cache format will be returned. If past_key_values are used, the user can optionally input only the last input_ids (those that don't have their past key value states given to this model) of shape (batch_size, 1) instead of all input_ids of shape (batch_size, sequence_length).
- inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model's internal embedding lookup matrix.
- use_cache (bool, optional) — If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values).
- output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
- output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
- output_router_logits (bool, optional) — Whether or not to return the logits of all the routers. They are useful for computing the router loss and should not be returned during inference.
- return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
- cache_position (torch.LongTensor of shape (sequence_length), optional) — Indices depicting the position of the input sequence tokens in the sequence. Contrarily to position_ids, this tensor is not affected by padding. It is used to update the cache in the correct position and to infer the complete sequence length.
- labels (torch.LongTensor of shape (batch_size, sequence_length), optional) — Labels for computing the masked language modeling loss. Indices should either be in [0, ..., config.vocab_size] or -100 (see the input_ids docstring). Tokens with indices set to -100 are ignored (masked); the loss is only computed for the tokens with labels in [0, ..., config.vocab_size].
- num_logits_to_keep (int, optional, defaults to 0) — Calculate logits for the last num_logits_to_keep tokens. If 0, calculate logits for all input_ids (special case). Only last token logits are needed for generation, and calculating them only for that token can save memory, which becomes quite significant for long sequences or large vocabulary sizes.
Returns
transformers.modeling_outputs.MoeCausalLMOutputWithPast or tuple(torch.FloatTensor)

A transformers.modeling_outputs.MoeCausalLMOutputWithPast or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (OlmoeConfig) and inputs.

- loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss (for next-token prediction).
- logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
- aux_loss (torch.FloatTensor, optional, returned when labels is provided) — Auxiliary loss for the sparse modules.
- router_logits (tuple(torch.FloatTensor), optional, returned when output_router_probs=True and config.add_router_probs=True is passed or when config.output_router_probs=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, sequence_length, num_experts). Raw router logits (post-softmax) computed by the MoE routers; these terms are used to compute the auxiliary loss for Mixture of Experts models.
- past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head). Contains pre-computed hidden-states (key and values in the self-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding.
- hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, plus one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
- attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The OlmoeForCausalLM forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Example:
>>> from transformers import AutoTokenizer, OlmoeForCausalLM
>>> model = OlmoeForCausalLM.from_pretrained("allenai/OLMoE-1B-7B-0924")
>>> tokenizer = AutoTokenizer.from_pretrained("allenai/OLMoE-1B-7B-0924")
>>> prompt = "Hey, are you conscious? Can you talk to me?"
>>> inputs = tokenizer(prompt, return_tensors="pt")
>>> # Generate
>>> generate_ids = model.generate(inputs.input_ids, max_length=30)
>>> tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
'Hey, are you conscious? Can you talk to me?\nI’m not sure if you’re conscious of this, but I’m'
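As a final hedged sketch, passing output_router_logits=True together with labels exposes the per-layer router logits and the auxiliary load-balancing loss described in the Returns section above; attribute names follow MoeCausalLMOutputWithPast, and the prompt is illustrative:

>>> import torch
>>> from transformers import AutoTokenizer, OlmoeForCausalLM
>>> model = OlmoeForCausalLM.from_pretrained("allenai/OLMoE-1B-7B-0924")
>>> tokenizer = AutoTokenizer.from_pretrained("allenai/OLMoE-1B-7B-0924")
>>> inputs = tokenizer("Mixture-of-Experts routing in OLMoE", return_tensors="pt")
>>> with torch.no_grad():
...     outputs = model(**inputs, labels=inputs.input_ids, output_router_logits=True)
>>> len(outputs.router_logits)  # one router-logit tensor per decoder layer
16
>>> aux = outputs.aux_loss  # load-balancing loss, weighted by router_aux_loss_coef in the total loss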