The NeuronBaseModel class is available for instantiating a base Neuron model without a specific head. It is used as the base class for all tasks except text generation.
( model: ScriptModule, config: PretrainedConfig, model_save_dir: Union = None, model_file_name: Optional = None, preprocessors: Optional = None, neuron_config: Optional = None, **kwargs )
Base class running compiled and optimized models on Neuron devices.
It implements generic methods for interacting with the Hugging Face Hub, as well as for compiling vanilla transformers models into neuron-optimized TorchScript modules and exporting them using the optimum.exporters.neuron toolchain.
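For instance, a vanilla transformers checkpoint can be compiled at load time by passing export=True to from_pretrained() on one of the task classes below, together with the static input shapes the Neuron compiler requires. A minimal sketch; the checkpoint and shape values are illustrative:
>>> from optimum.neuron import NeuronModelForSequenceClassification

>>> # Compile a vanilla transformers checkpoint into a neuron-optimized TorchScript module.
>>> # batch_size and sequence_length fix the static shapes used by the compiler (illustrative values).
>>> model = NeuronModelForSequenceClassification.from_pretrained(
...     "distilbert-base-uncased-finetuned-sst-2-english",
...     export=True,
...     batch_size=1,
...     sequence_length=128,
... )
>>> model.save_pretrained("distilbert_neuron/")  # save the compiled artifacts for reuse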
Class attributes:
- model_type (str, optional, defaults to "neuron_model") — The name of the model type to use when registering the NeuronBaseModel classes.
- auto_model_class (Type, optional, defaults to AutoModel) — The AutoModel class to be represented by the current NeuronBaseModel class.

Common attributes:
- model (torch.jit._script.ScriptModule) — The loaded ScriptModule compiled for Neuron devices.
- model_save_dir (Path) — The directory where the neuron compiled model is saved. By default, if the loaded model is local, the directory of the original model is used. Otherwise, the cache directory is used.

Gets a dictionary of inputs with their valid static shapes.
( path: Union )
Loads a TorchScript module compiled by the neuron(x)-cc compiler. It will first be loaded onto the CPU and then moved to one or multiple NeuronCores.
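A minimal sketch of loading a compiled artifact directly; the file path is illustrative:
>>> from optimum.neuron.modeling import NeuronBaseModel

>>> # Load a previously compiled TorchScript module from disk (path is illustrative).
>>> neuron_module = NeuronBaseModel.load_model("distilbert_neuron/model.neuron")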
( outputs: List, dims: List, indices: List )
Removes padding from output tensors.
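Assuming dims lists the dimensions to slice and indices the lengths to keep along those dimensions (an interpretation, not confirmed by this page), usage could look like:
>>> # Hypothetical: trim outputs padded to the compiled static sequence length back to the
>>> # real input length (here 13 tokens), slicing along dimension 1.
>>> trimmed = model.remove_padding([outputs.last_hidden_state], dims=[1], indices=[13])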
The NeuronDecoderModel class is the base class for text generation models.
( model: Module, config: PretrainedConfig, model_path: Union, generation_config: Optional = None )
Base class to convert and run pre-trained transformers decoder models on Neuron devices.
It implements the methods to convert a pre-trained transformers decoder model into a Neuron transformer model by:
- transferring the checkpoint weights of the original model into an optimized neuron graph,
- compiling the resulting graph using the Neuron compiler.

Common attributes:
- model (torch.nn.Module) — The decoder model with a graph optimized for neuron devices.
- generation_config (transformers.GenerationConfig) — The generation configuration used by default when calling generate().

The following Neuron model classes are available for natural language processing tasks.
( model: ScriptModule, config: PretrainedConfig, model_save_dir: Union = None, model_file_name: Optional = None, preprocessors: Optional = None, neuron_config: Optional = None, **kwargs )
Parameters
- config (transformers.PretrainedConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the optimum.neuron.modeling.NeuronBaseModel.from_pretrained method to load the model weights.
- model (torch.jit._script.ScriptModule) — The TorchScript graph compiled by the neuron(x) compiler.
Neuron Model with a BaseModelOutput for feature-extraction tasks.
This model inherits from ~neuron.modeling.NeuronBaseModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving).
Feature Extraction model on Neuron devices.
( input_ids: Tensor, attention_mask: Tensor, token_type_ids: Optional = None, **kwargs )
Parameters
- input_ids (torch.Tensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode and PreTrainedTokenizer.__call__ for details. What are input IDs?
- attention_mask (Union[torch.Tensor, None] of shape (batch_size, sequence_length), defaults to None) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked.
- token_type_ids (Union[torch.Tensor, None] of shape (batch_size, sequence_length), defaults to None) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token.

The NeuronModelForFeatureExtraction forward method overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example of feature extraction: (The following model is compiled with the neuronx compiler and can only be run on INF2. Replace “neuronx” with “neuron” if you are using INF1.)
>>> from transformers import AutoTokenizer
>>> from optimum.neuron import NeuronModelForFeatureExtraction
>>> tokenizer = AutoTokenizer.from_pretrained("optimum/all-MiniLM-L6-v2-neuronx")
>>> model = NeuronModelForFeatureExtraction.from_pretrained("optimum/all-MiniLM-L6-v2-neuronx")
>>> inputs = tokenizer("Dear Evan Hansen is the winner of six Tony Awards.", return_tensors="pt")
>>> outputs = model(**inputs)
>>> last_hidden_state = outputs.last_hidden_state
>>> list(last_hidden_state.shape)
[1, 13, 384]
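Since this checkpoint is a sentence-embedding encoder, a common follow-up is to mean-pool the last hidden state into a single sentence vector. An illustrative sketch, not part of the API:
>>> import torch

>>> # Mean-pool the token embeddings, ignoring padding positions (illustrative post-processing).
>>> mask = inputs["attention_mask"].unsqueeze(-1).float()
>>> embedding = (last_hidden_state * mask).sum(dim=1) / mask.sum(dim=1)
>>> list(embedding.shape)
[1, 384]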
( model: ScriptModule, config: PretrainedConfig, model_save_dir: Union = None, model_file_name: Optional = None, preprocessors: Optional = None, neuron_config: Optional = None, **kwargs )
Parameters
- config (transformers.PretrainedConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the optimum.neuron.modeling.NeuronBaseModel.from_pretrained method to load the model weights.
- model (torch.jit._script.ScriptModule) — The TorchScript graph compiled by the neuron(x) compiler.
Neuron Model with a MaskedLMOutput for masked language modeling tasks.
This model inherits from ~neuron.modeling.NeuronBaseModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving).
Masked language model on Neuron devices.
( input_ids: Tensor, attention_mask: Tensor, token_type_ids: Optional = None, **kwargs )
Parameters
- input_ids (torch.Tensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode and PreTrainedTokenizer.__call__ for details. What are input IDs?
- attention_mask (Union[torch.Tensor, None] of shape (batch_size, sequence_length), defaults to None) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked.
- token_type_ids (Union[torch.Tensor, None] of shape (batch_size, sequence_length), defaults to None) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token.

The NeuronModelForMaskedLM forward method overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example of fill mask: (The following model is compiled with the neuronx compiler and can only be run on INF2. Replace “neuronx” with “neuron” if you are using INF1.)
>>> from transformers import AutoTokenizer
>>> from optimum.neuron import NeuronModelForMaskedLM
>>> import torch
>>> tokenizer = AutoTokenizer.from_pretrained("optimum/legal-bert-base-uncased-neuronx")
>>> model = NeuronModelForMaskedLM.from_pretrained("optimum/legal-bert-base-uncased-neuronx")
>>> inputs = tokenizer("This [MASK] Agreement is between General Motors and John Murray.", return_tensors="pt")
>>> outputs = model(**inputs)
>>> logits = outputs.logits
>>> list(logits.shape)
[1, 13, 30522]
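To turn these logits into an actual fill for the [MASK] token, one can take the argmax at the mask position. An illustrative sketch:
>>> # Locate the [MASK] position and decode the highest-scoring token (illustrative).
>>> mask_index = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
>>> predicted_id = logits[0, mask_index].argmax(dim=-1)
>>> tokenizer.decode(predicted_id)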
( model: ScriptModule, config: PretrainedConfig, model_save_dir: Union = None, model_file_name: Optional = None, preprocessors: Optional = None, neuron_config: Optional = None, **kwargs )
Parameters
- config (transformers.PretrainedConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the optimum.neuron.modeling.NeuronBaseModel.from_pretrained method to load the model weights.
- model (torch.jit._script.ScriptModule) — The TorchScript graph compiled by the neuron(x) compiler.
Neuron Model with a sequence classification/regression head on top (a linear layer on top of the pooled output), e.g. for GLUE tasks.
This model inherits from ~neuron.modeling.NeuronBaseModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving).
Sequence Classification model on Neuron devices.
( input_ids: Tensor, attention_mask: Tensor, token_type_ids: Optional = None, **kwargs )
Parameters
- input_ids (torch.Tensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode and PreTrainedTokenizer.__call__ for details. What are input IDs?
- attention_mask (Union[torch.Tensor, None] of shape (batch_size, sequence_length), defaults to None) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked.
- token_type_ids (Union[torch.Tensor, None] of shape (batch_size, sequence_length), defaults to None) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token.

The NeuronModelForSequenceClassification forward method overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example of single-label classification: (The following model is compiled with the neuronx compiler and can only be run on INF2.)
>>> from transformers import AutoTokenizer
>>> from optimum.neuron import NeuronModelForSequenceClassification
>>> tokenizer = AutoTokenizer.from_pretrained("optimum/distilbert-base-uncased-finetuned-sst-2-english-neuronx")
>>> model = NeuronModelForSequenceClassification.from_pretrained("optimum/distilbert-base-uncased-finetuned-sst-2-english-neuronx")
>>> inputs = tokenizer("Hamilton is considered to be the best musical of human history.", return_tensors="pt")
>>> outputs = model(**inputs)
>>> logits = outputs.logits
>>> list(logits.shape)
[1, 2]
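The predicted class can then be read off with the model's id2label mapping. Illustrative post-processing:
>>> # Map the highest-scoring logit to its label name (illustrative).
>>> predicted_class = logits.argmax(dim=-1).item()
>>> model.config.id2label[predicted_class]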
( model: ScriptModule, config: PretrainedConfig, model_save_dir: Union = None, model_file_name: Optional = None, preprocessors: Optional = None, neuron_config: Optional = None, **kwargs )
Parameters
- config (transformers.PretrainedConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the optimum.neuron.modeling.NeuronBaseModel.from_pretrained method to load the model weights.
- model (torch.jit._script.ScriptModule) — The TorchScript graph compiled by the neuron(x) compiler.
Neuron Model with a QuestionAnsweringModelOutput for extractive question-answering tasks like SQuAD.
This model inherits from ~neuron.modeling.NeuronBaseModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving).
Question Answering model on Neuron devices.
( input_ids: Tensor, attention_mask: Tensor, token_type_ids: Optional = None, **kwargs )
Parameters
- input_ids (torch.Tensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode and PreTrainedTokenizer.__call__ for details. What are input IDs?
- attention_mask (Union[torch.Tensor, None] of shape (batch_size, sequence_length), defaults to None) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked.
- token_type_ids (Union[torch.Tensor, None] of shape (batch_size, sequence_length), defaults to None) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token.

The NeuronModelForQuestionAnswering forward method overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example of question answering: (The following model is compiled with the neuronx compiler and can only be run on INF2.)
>>> import torch
>>> from transformers import AutoTokenizer
>>> from optimum.neuron import NeuronModelForQuestionAnswering
>>> tokenizer = AutoTokenizer.from_pretrained("optimum/roberta-base-squad2-neuronx")
>>> model = NeuronModelForQuestionAnswering.from_pretrained("optimum/roberta-base-squad2-neuronx")
>>> question, text = "Are there wheelchair spaces in the theatres?", "Yes, we have reserved wheelchair spaces with a good view."
>>> inputs = tokenizer(question, text, return_tensors="pt")
>>> start_positions = torch.tensor([1])
>>> end_positions = torch.tensor([12])
>>> outputs = model(**inputs, start_positions=start_positions, end_positions=end_positions)
>>> start_scores = outputs.start_logits
>>> end_scores = outputs.end_logits
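To extract the answer text, take the argmax over the start and end logits and decode that token span. An illustrative sketch:
>>> # Decode the most likely answer span (illustrative).
>>> answer_start = start_scores.argmax(dim=-1).item()
>>> answer_end = end_scores.argmax(dim=-1).item()
>>> tokenizer.decode(inputs["input_ids"][0, answer_start : answer_end + 1])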
( model: ScriptModule, config: PretrainedConfig, model_save_dir: Union = None, model_file_name: Optional = None, preprocessors: Optional = None, neuron_config: Optional = None, **kwargs )
Parameters
- config (transformers.PretrainedConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the optimum.neuron.modeling.NeuronBaseModel.from_pretrained method to load the model weights.
- model (torch.jit._script.ScriptModule) — The TorchScript graph compiled by the neuron(x) compiler.
Neuron Model with a token classification head on top (a linear layer on top of the hidden-states output), e.g. for Named-Entity-Recognition (NER) tasks.
This model inherits from ~neuron.modeling.NeuronBaseModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving).
Token Classification model on Neuron devices.
( input_ids: Tensor, attention_mask: Tensor, token_type_ids: Optional = None, **kwargs )
Parameters
- input_ids (torch.Tensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode and PreTrainedTokenizer.__call__ for details. What are input IDs?
- attention_mask (Union[torch.Tensor, None] of shape (batch_size, sequence_length), defaults to None) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked.
- token_type_ids (Union[torch.Tensor, None] of shape (batch_size, sequence_length), defaults to None) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token.

The NeuronModelForTokenClassification forward method overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example of token classification: (The following model is compiled with the neuronx compiler and can only be run on INF2.)
>>> from transformers import AutoTokenizer
>>> from optimum.neuron import NeuronModelForTokenClassification
>>> tokenizer = AutoTokenizer.from_pretrained("optimum/bert-base-NER-neuronx")
>>> model = NeuronModelForTokenClassification.from_pretrained("optimum/bert-base-NER-neuronx")
>>> inputs = tokenizer("Lin-Manuel Miranda is an American songwriter, actor, singer, filmmaker, and playwright.", return_tensors="pt")
>>> outputs = model(**inputs)
>>> logits = outputs.logits
>>> list(logits.shape)
[1, 20, 9]
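Per-token predictions can be mapped to entity labels via the model's id2label mapping. Illustrative post-processing:
>>> # Convert per-token logits into label names (illustrative).
>>> predictions = logits.argmax(dim=-1)
>>> [model.config.id2label[int(p)] for p in predictions[0]]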
( model: ScriptModule, config: PretrainedConfig, model_save_dir: Union = None, model_file_name: Optional = None, preprocessors: Optional = None, neuron_config: Optional = None, **kwargs )
Parameters
- config (transformers.PretrainedConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the optimum.neuron.modeling.NeuronBaseModel.from_pretrained method to load the model weights.
- model (torch.jit._script.ScriptModule) — The TorchScript graph compiled by the neuron(x) compiler.
Neuron Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a softmax), e.g. for RocStories/SWAG tasks.
This model inherits from ~neuron.modeling.NeuronBaseModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving).
Multiple choice model on Neuron devices.
( input_ids: Tensor, attention_mask: Tensor, token_type_ids: Optional = None, **kwargs )
Parameters
- input_ids (torch.Tensor of shape (batch_size, num_choices, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode and PreTrainedTokenizer.__call__ for details. What are input IDs?
- attention_mask (Union[torch.Tensor, None] of shape (batch_size, num_choices, sequence_length), defaults to None) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked.
- token_type_ids (Union[torch.Tensor, None] of shape (batch_size, num_choices, sequence_length), defaults to None) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token.

The NeuronModelForMultipleChoice forward method overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example of multiple choice: (The following model is compiled with the neuronx compiler and can only be run on INF2.)
>>> from transformers import AutoTokenizer
>>> from optimum.neuron import NeuronModelForMultipleChoice
>>> tokenizer = AutoTokenizer.from_pretrained("optimum/bert-base-uncased_SWAG-neuronx")
>>> model = NeuronModelForMultipleChoice.from_pretrained("optimum/bert-base-uncased_SWAG-neuronx")
>>> num_choices = 4
>>> first_sentence = ["Members of the procession walk down the street holding small horn brass instruments."] * num_choices
>>> second_sentence = [
... "A drum line passes by walking down the street playing their instruments.",
... "A drum line has heard approaching them.",
... "A drum line arrives and they're outside dancing and asleep.",
... "A drum line turns the lead singer watches the performance."
... ]
>>> inputs = tokenizer(first_sentence, second_sentence, truncation=True, padding=True)
>>> # Unflatten the inputs, expanding them to the shape [batch_size, num_choices, seq_length]
>>> for k, v in inputs.items():
... inputs[k] = [v[i: i + num_choices] for i in range(0, len(v), num_choices)]
>>> inputs = dict(inputs.convert_to_tensors(tensor_type="pt"))
>>> outputs = model(**inputs)
>>> logits = outputs.logits
>>> list(logits.shape)
[1, 4]
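The highest-scoring choice is then selected with an argmax. Illustrative post-processing:
>>> # Pick the choice with the highest score (illustrative).
>>> best_choice = logits.argmax(dim=-1).item()
>>> second_sentence[best_choice]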
( model: Module, config: PretrainedConfig, model_path: Union, generation_config: Optional = None )
Parameters
- model (torch.nn.Module) — The neuron decoder graph.
- config (transformers.PretrainedConfig) — Model configuration class with all the parameters of the model.
- model_path (Path) — The directory where the compiled artifacts for the model are stored. It can be a temporary directory if the model has never been saved locally before.
- generation_config (transformers.GenerationConfig) — Holds the configuration for the model generation task.
Neuron model with a causal language modeling head for inference on Neuron devices.
This model inherits from ~neuron.modeling.NeuronDecoderModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving).
Returns True to validate the check made in GenerationMixin.generate().
( input_ids: Tensor, cache_ids: Tensor, start_ids: Tensor = None, output_hidden_states: bool = False, output_attentions: bool = False, attention_mask: Tensor = None, return_dict: bool = True )
Parameters
- input_ids (torch.LongTensor) — Indices of decoder input sequence tokens in the vocabulary, of shape (batch_size, sequence_length).
- cache_ids (torch.LongTensor) — The indices at which the cached key and value for the current inputs need to be stored.
- start_ids (torch.LongTensor) — The indices of the first tokens to be processed, deduced from the attention masks.
The NeuronModelForCausalLM forward method overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example of text generation:
>>> from transformers import AutoTokenizer
>>> from optimum.neuron import NeuronModelForCausalLM
>>> import torch
>>> tokenizer = AutoTokenizer.from_pretrained("gpt2")
>>> model = NeuronModelForCausalLM.from_pretrained("gpt2", export=True)
>>> inputs = tokenizer("My favorite moment of the day is", return_tensors="pt")
>>> gen_tokens = model.generate(**inputs, do_sample=True, temperature=0.9, min_length=20, max_length=20)
>>> tokenizer.batch_decode(gen_tokens)
( input_ids: LongTensor, logits_processor: Optional = None, stopping_criteria: Optional = None, logits_warper: Optional = None, max_length: Optional = None, pad_token_id: Optional = None, eos_token_id: Union = None, output_attentions: Optional = None, output_hidden_states: Optional = None, output_scores: Optional = None, return_dict_in_generate: Optional = None, synced_gpus: bool = False, streamer: Optional = None, **model_kwargs ) → ~generation.SampleDecoderOnlyOutput, ~generation.SampleEncoderDecoderOutput or torch.LongTensor
Parameters
- input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — The sequence used as a prompt for the generation.
- logits_processor (LogitsProcessorList, optional) — An instance of LogitsProcessorList. List of instances of class derived from LogitsProcessor used to modify the prediction scores of the language modeling head applied at each generation step.
- stopping_criteria (StoppingCriteriaList, optional) — An instance of StoppingCriteriaList. List of instances of class derived from StoppingCriteria used to tell if the generation loop should stop.
- logits_warper (LogitsProcessorList, optional) — An instance of LogitsProcessorList. List of instances of class derived from LogitsWarper used to warp the prediction score distribution of the language modeling head applied before multinomial sampling at each generation step.
- pad_token_id (int, optional) — The id of the padding token.
- eos_token_id (Union[int, List[int]], optional) — The id of the end-of-sequence token. Optionally, use a list to set multiple end-of-sequence tokens.
- output_attentions (bool, optional, defaults to False) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more details.
- output_hidden_states (bool, optional, defaults to False) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more details.
- output_scores (bool, optional, defaults to False) — Whether or not to return the prediction scores. See scores under returned tensors for more details.
- return_dict_in_generate (bool, optional, defaults to False) — Whether or not to return a ~utils.ModelOutput instead of a plain tuple.
- synced_gpus (bool, optional, defaults to False) — Whether to continue running the while loop until max_length (needed for ZeRO stage 3).
- streamer (BaseStreamer, optional) — Streamer object that will be used to stream the generated sequences. Generated tokens are passed through streamer.put(token_ids) and the streamer is responsible for any further processing.
- model_kwargs — Additional model specific kwargs will be forwarded to the forward function of the model. If the model is an encoder-decoder model, the kwargs should include encoder_outputs.
Returns
~generation.SampleDecoderOnlyOutput, ~generation.SampleEncoderDecoderOutput or torch.LongTensor
A torch.LongTensor containing the generated tokens (default behaviour), or a ~generation.SampleDecoderOnlyOutput if model.config.is_encoder_decoder=False and return_dict_in_generate=True, or a ~generation.SampleEncoderDecoderOutput if model.config.is_encoder_decoder=True.
This is a simplified version of the transformers GenerationMixin.sample() method, optimized for Neuron inference.
It generates sequences of token ids for models with a language modeling head using multinomial sampling and can be used for text-decoder, text-to-text, speech-to-text, and vision-to-text models.
Please refer to https://huggingface.co/docs/transformers/en/main_classes/text_generation#transformers.GenerationMixin.sample.
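As an illustration of the streamer parameter, transformers' TextStreamer can be passed to generate() to print tokens as they are sampled. A sketch reusing the model and tokenizer from the text generation example above:
>>> from transformers import TextStreamer

>>> # Stream generated tokens to stdout as they are produced (illustrative).
>>> streamer = TextStreamer(tokenizer)
>>> _ = model.generate(**inputs, do_sample=True, max_length=20, streamer=streamer)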
( text_encoder: ScriptModule, unet: ScriptModule, vae_decoder: ScriptModule, config: Dict, tokenizer: CLIPTokenizer, scheduler: Union, feature_extractor: Optional = None, device_ids: Optional = [], configs: Optional = None, neuron_configs: Optional = None, model_save_dir: Union = None, model_and_config_save_paths: Optional = None )
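This signature (text encoder, UNet, VAE decoder, CLIP tokenizer, scheduler) corresponds to the Stable Diffusion pipeline support in optimum.neuron. A hedged sketch of compiling and running such a pipeline; the checkpoint and static shape values are illustrative:
>>> from optimum.neuron import NeuronStableDiffusionPipeline

>>> # Compile the text encoder, UNet and VAE decoder to Neuron with static shapes (illustrative).
>>> pipe = NeuronStableDiffusionPipeline.from_pretrained(
...     "runwayml/stable-diffusion-v1-5",
...     export=True,
...     batch_size=1,
...     height=512,
...     width=512,
... )
>>> image = pipe("a photo of an astronaut riding a horse on mars").images[0]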