Models

Generic model classes

NeuronBaseModel

The NeuronBaseModel class is available for instantiating a base Neuron model without a specific head. It is used as the base class for all tasks except text generation.

class optimum.neuron.NeuronBaseModel

( model: ScriptModule config: PretrainedConfig model_save_dir: Union = None model_file_name: Optional = None preprocessors: Optional = None neuron_config: Optional = None **kwargs )

Base class running compiled and optimized models on Neuron devices.

It implements generic methods for interacting with the Hugging Face Hub as well as compiling vanilla transformers models into neuron-optimized TorchScript modules and exporting them using the optimum.exporters.neuron toolchain.
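
For instance, a vanilla transformers checkpoint can be compiled at loading time through one of the task-specific subclasses documented below (a minimal sketch; the checkpoint and the static input shapes are illustrative and must match your use case):

>>> from optimum.neuron import NeuronModelForSequenceClassification

>>> # Compile a vanilla transformers checkpoint with fixed input shapes, then save the artifacts.
>>> model = NeuronModelForSequenceClassification.from_pretrained(
...     "distilbert-base-uncased-finetuned-sst-2-english",
...     export=True,
...     batch_size=1,
...     sequence_length=128,
... )
>>> model.save_pretrained("distilbert_neuron/")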

Class attributes:

Common attributes:

get_input_static_shapes

( neuron_config: NeuronConfig )

Gets a dictionary of inputs with their valid static shapes.
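
A minimal sketch of how this might be used, assuming model is an already-loaded Neuron model exposing its neuron_config attribute (the exact contents of the returned dictionary depend on the compiled model):

>>> # Hypothetical: inspect the static input shapes the model was compiled with.
>>> static_shapes = model.get_input_static_shapes(model.neuron_config)
>>> # e.g. {"input_ids": [1, 128], "attention_mask": [1, 128]} for a text encoder
>>> # compiled with batch_size=1 and sequence_length=128.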

load_model

( path: Union )

Parameters

  • path (Union[str, Path]) — Path of the compiled model.

Loads a TorchScript module compiled by the neuron(x)-cc compiler. It will first be loaded onto the CPU and then moved to one or more NeuronCores.
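
For example, a compiled artifact can be loaded directly (a minimal sketch; the file path below is hypothetical):

>>> from optimum.neuron import NeuronBaseModel

>>> # Load a hypothetical compiled TorchScript artifact onto the NeuronCore(s).
>>> neuron_module = NeuronBaseModel.load_model("path/to/model.neuron")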

remove_padding

( outputs: List dims: List indices: List )

Parameters

  • outputs (List[torch.Tensor]) — List of torch tensors that are the inference outputs.
  • dims (List[int]) — List of dimensions along which we slice a tensor.
  • indices (List[int]) — List of indices at which we slice a tensor along an axis.

Removes padding from output tensors.
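
A minimal sketch of how remove_padding might be called, assuming model is an already-loaded Neuron model and that each output tensor is sliced up to the given index along the corresponding dimension (shapes below are illustrative):

>>> import torch

>>> # Hypothetical padded inference output: batch 1, padded to 128 tokens, hidden size 384.
>>> padded_outputs = [torch.rand(1, 128, 384)]
>>> # Keep only the first 13 tokens along dimension 1.
>>> trimmed_outputs = model.remove_padding(padded_outputs, dims=[1], indices=[13])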

NeuronDecoderModel

The NeuronDecoderModel class is the base class for text generation models.

class optimum.neuron.NeuronDecoderModel

( model: Module config: PretrainedConfig model_path: Union generation_config: Optional = None )

Base class to convert and run pre-trained transformers decoder models on Neuron devices.

It implements the methods needed to convert a pre-trained transformers decoder model into a Neuron transformer model.

Common attributes:

Natural Language Processing

The following Neuron model classes are available for natural language processing tasks.

NeuronModelForFeatureExtraction

class optimum.neuron.NeuronModelForFeatureExtraction

( model: ScriptModule config: PretrainedConfig model_save_dir: Union = None model_file_name: Optional = None preprocessors: Optional = None neuron_config: Optional = None **kwargs )

Parameters

  • config (transformers.PretrainedConfig) — PretrainedConfig is the Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the optimum.neuron.modeling.NeuronBaseModel.from_pretrained method to load the model weights.
  • model (torch.jit._script.ScriptModule) — torch.jit._script.ScriptModule is the TorchScript graph compiled by the neuron(x) compiler.

Neuron Model with a BaseModelOutput for feature-extraction tasks.

This model inherits from ~neuron.modeling.NeuronBaseModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving).

Feature Extraction model on Neuron devices.

forward

( input_ids: Tensor attention_mask: Tensor token_type_ids: Optional = None **kwargs )

Parameters

  • input_ids (torch.Tensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode and PreTrainedTokenizer.__call__ for details. What are input IDs?
  • attention_mask (Union[torch.Tensor, None] of shape (batch_size, sequence_length), defaults to None) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked.
  • token_type_ids (Union[torch.Tensor, None] of shape (batch_size, sequence_length), defaults to None) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token.

The NeuronModelForFeatureExtraction forward method overrides the __call__ special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Example of feature extraction (the following model was compiled with the neuronx compiler and can only be run on INF2; replace “neuronx” with “neuron” if you are using INF1):

>>> from transformers import AutoTokenizer
>>> from optimum.neuron import NeuronModelForFeatureExtraction

>>> tokenizer = AutoTokenizer.from_pretrained("optimum/all-MiniLM-L6-v2-neuronx")
>>> model = NeuronModelForFeatureExtraction.from_pretrained("optimum/all-MiniLM-L6-v2-neuronx")

>>> inputs = tokenizer("Dear Evan Hansen is the winner of six Tony Awards.", return_tensors="pt")

>>> outputs = model(**inputs)
>>> last_hidden_state = outputs.last_hidden_state
>>> list(last_hidden_state.shape)
[1, 13, 384]

NeuronModelForMaskedLM

class optimum.neuron.NeuronModelForMaskedLM

( model: ScriptModule config: PretrainedConfig model_save_dir: Union = None model_file_name: Optional = None preprocessors: Optional = None neuron_config: Optional = None **kwargs )

Parameters

  • config (transformers.PretrainedConfig) — PretrainedConfig is the Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the optimum.neuron.modeling.NeuronBaseModel.from_pretrained method to load the model weights.
  • model (torch.jit._script.ScriptModule) — torch.jit._script.ScriptModule is the TorchScript graph compiled by the neuron(x) compiler.

Neuron Model with a MaskedLMOutput for masked language modeling tasks.

This model inherits from ~neuron.modeling.NeuronBaseModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving).

Masked language model on Neuron devices.

forward

( input_ids: Tensor attention_mask: Tensor token_type_ids: Optional = None **kwargs )

Parameters

  • input_ids (torch.Tensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode and PreTrainedTokenizer.__call__ for details. What are input IDs?
  • attention_mask (Union[torch.Tensor, None] of shape (batch_size, sequence_length), defaults to None) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked.
  • token_type_ids (Union[torch.Tensor, None] of shape (batch_size, sequence_length), defaults to None) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token.

The NeuronModelForMaskedLM forward method overrides the __call__ special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Example of fill mask (the following model was compiled with the neuronx compiler and can only be run on INF2; replace “neuronx” with “neuron” if you are using INF1):

>>> from transformers import AutoTokenizer
>>> from optimum.neuron import NeuronModelForMaskedLM
>>> import torch

>>> tokenizer = AutoTokenizer.from_pretrained("optimum/legal-bert-base-uncased-neuronx")
>>> model = NeuronModelForMaskedLM.from_pretrained("optimum/legal-bert-base-uncased-neuronx")

>>> inputs = tokenizer("This [MASK] Agreement is between General Motors and John Murray.", return_tensors="pt")

>>> outputs = model(**inputs)
>>> logits = outputs.logits
>>> list(logits.shape)
[1, 13, 30522]

NeuronModelForSequenceClassification

class optimum.neuron.NeuronModelForSequenceClassification

( model: ScriptModule config: PretrainedConfig model_save_dir: Union = None model_file_name: Optional = None preprocessors: Optional = None neuron_config: Optional = None **kwargs )

Parameters

  • config (transformers.PretrainedConfig) — PretrainedConfig is the Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the optimum.neuron.modeling.NeuronBaseModel.from_pretrained method to load the model weights.
  • model (torch.jit._script.ScriptModule) — torch.jit._script.ScriptModule is the TorchScript graph compiled by the neuron(x) compiler.

Neuron Model with a sequence classification/regression head on top (a linear layer on top of the pooled output) e.g. for GLUE tasks.

This model inherits from ~neuron.modeling.NeuronBaseModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving).

Sequence Classification model on Neuron devices.

forward

( input_ids: Tensor attention_mask: Tensor token_type_ids: Optional = None **kwargs )

Parameters

  • input_ids (torch.Tensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode and PreTrainedTokenizer.__call__ for details. What are input IDs?
  • attention_mask (Union[torch.Tensor, None] of shape (batch_size, sequence_length), defaults to None) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked.
  • token_type_ids (Union[torch.Tensor, None] of shape (batch_size, sequence_length), defaults to None) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token.

The NeuronModelForSequenceClassification forward method overrides the __call__ special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Example of single-label classification (the following model was compiled with the neuronx compiler and can only be run on INF2):

>>> from transformers import AutoTokenizer
>>> from optimum.neuron import NeuronModelForSequenceClassification

>>> tokenizer = AutoTokenizer.from_pretrained("optimum/distilbert-base-uncased-finetuned-sst-2-english-neuronx")
>>> model = NeuronModelForSequenceClassification.from_pretrained("optimum/distilbert-base-uncased-finetuned-sst-2-english-neuronx")

>>> inputs = tokenizer("Hamilton is considered to be the best musical of human history.", return_tensors="pt")

>>> outputs = model(**inputs)
>>> logits = outputs.logits
>>> list(logits.shape)
[1, 2]

NeuronModelForQuestionAnswering

class optimum.neuron.NeuronModelForQuestionAnswering

( model: ScriptModule config: PretrainedConfig model_save_dir: Union = None model_file_name: Optional = None preprocessors: Optional = None neuron_config: Optional = None **kwargs )

Parameters

  • config (transformers.PretrainedConfig) — PretrainedConfig is the Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the optimum.neuron.modeling.NeuronBaseModel.from_pretrained method to load the model weights.
  • model (torch.jit._script.ScriptModule) — torch.jit._script.ScriptModule is the TorchScript graph compiled by the neuron(x) compiler.

Neuron Model with a QuestionAnsweringModelOutput for extractive question-answering tasks like SQuAD.

This model inherits from ~neuron.modeling.NeuronBaseModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving).

Question Answering model on Neuron devices.

forward

( input_ids: Tensor attention_mask: Tensor token_type_ids: Optional = None **kwargs )

Parameters

  • input_ids (torch.Tensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode and PreTrainedTokenizer.__call__ for details. What are input IDs?
  • attention_mask (Union[torch.Tensor, None] of shape (batch_size, sequence_length), defaults to None) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked.
  • token_type_ids (Union[torch.Tensor, None] of shape (batch_size, sequence_length), defaults to None) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token.

The NeuronModelForQuestionAnswering forward method overrides the __call__ special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Example of question answering (the following model was compiled with the neuronx compiler and can only be run on INF2):

>>> import torch
>>> from transformers import AutoTokenizer
>>> from optimum.neuron import NeuronModelForQuestionAnswering

>>> tokenizer = AutoTokenizer.from_pretrained("optimum/roberta-base-squad2-neuronx")
>>> model = NeuronModelForQuestionAnswering.from_pretrained("optimum/roberta-base-squad2-neuronx")

>>> question, text = "Are there wheelchair spaces in the theatres?", "Yes, we have reserved wheelchair spaces with a good view."
>>> inputs = tokenizer(question, text, return_tensors="pt")
>>> start_positions = torch.tensor([1])
>>> end_positions = torch.tensor([12])

>>> outputs = model(**inputs, start_positions=start_positions, end_positions=end_positions)
>>> start_scores = outputs.start_logits
>>> end_scores = outputs.end_logits

NeuronModelForTokenClassification

class optimum.neuron.NeuronModelForTokenClassification

( model: ScriptModule config: PretrainedConfig model_save_dir: Union = None model_file_name: Optional = None preprocessors: Optional = None neuron_config: Optional = None **kwargs )

Parameters

  • config (transformers.PretrainedConfig) — PretrainedConfig is the Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the optimum.neuron.modeling.NeuronBaseModel.from_pretrained method to load the model weights.
  • model (torch.jit._script.ScriptModule) — torch.jit._script.ScriptModule is the TorchScript graph compiled by the neuron(x) compiler.

Neuron Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for Named-Entity-Recognition (NER) tasks.

This model inherits from ~neuron.modeling.NeuronBaseModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving).

Token Classification model on Neuron devices.

forward

( input_ids: Tensor attention_mask: Tensor token_type_ids: Optional = None **kwargs )

Parameters

  • input_ids (torch.Tensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode and PreTrainedTokenizer.__call__ for details. What are input IDs?
  • attention_mask (Union[torch.Tensor, None] of shape (batch_size, sequence_length), defaults to None) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked.
  • token_type_ids (Union[torch.Tensor, None] of shape (batch_size, sequence_length), defaults to None) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token.

The NeuronModelForTokenClassification forward method overrides the __call__ special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Example of token classification (the following model was compiled with the neuronx compiler and can only be run on INF2):

>>> from transformers import AutoTokenizer
>>> from optimum.neuron import NeuronModelForTokenClassification

>>> tokenizer = AutoTokenizer.from_pretrained("optimum/bert-base-NER-neuronx")
>>> model = NeuronModelForTokenClassification.from_pretrained("optimum/bert-base-NER-neuronx")

>>> inputs = tokenizer("Lin-Manuel Miranda is an American songwriter, actor, singer, filmmaker, and playwright.", return_tensors="pt")

>>> outputs = model(**inputs)
>>> logits = outputs.logits
>>> list(logits.shape)
[1, 20, 9]

NeuronModelForMultipleChoice

class optimum.neuron.NeuronModelForMultipleChoice

( model: ScriptModule config: PretrainedConfig model_save_dir: Union = None model_file_name: Optional = None preprocessors: Optional = None neuron_config: Optional = None **kwargs )

Parameters

  • config (transformers.PretrainedConfig) — PretrainedConfig is the Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the optimum.neuron.modeling.NeuronBaseModel.from_pretrained method to load the model weights.
  • model (torch.jit._script.ScriptModule) — torch.jit._script.ScriptModule is the TorchScript graph compiled by the neuron(x) compiler.

Neuron Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a softmax) e.g. for RocStories/SWAG tasks.

This model inherits from ~neuron.modeling.NeuronBaseModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving).

Multiple choice model on Neuron devices.

forward

( input_ids: Tensor attention_mask: Tensor token_type_ids: Optional = None **kwargs )

Parameters

  • input_ids (torch.Tensor of shape (batch_size, num_choices, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode and PreTrainedTokenizer.__call__ for details. What are input IDs?
  • attention_mask (Union[torch.Tensor, None] of shape (batch_size, num_choices, sequence_length), defaults to None) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked.
  • token_type_ids (Union[torch.Tensor, None] of shape (batch_size, num_choices, sequence_length), defaults to None) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token.

The NeuronModelForMultipleChoice forward method overrides the __call__ special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Example of multiple choice (the following model was compiled with the neuronx compiler and can only be run on INF2):

>>> from transformers import AutoTokenizer
>>> from optimum.neuron import NeuronModelForMultipleChoice

>>> tokenizer = AutoTokenizer.from_pretrained("optimum/bert-base-uncased_SWAG-neuronx")
>>> model = NeuronModelForMultipleChoice.from_pretrained("optimum/bert-base-uncased_SWAG-neuronx")

>>> num_choices = 4
>>> first_sentence = ["Members of the procession walk down the street holding small horn brass instruments."] * num_choices
>>> second_sentence = [
...     "A drum line passes by walking down the street playing their instruments.",
...     "A drum line has heard approaching them.",
...     "A drum line arrives and they're outside dancing and asleep.",
...     "A drum line turns the lead singer watches the performance."
... ]
>>> inputs = tokenizer(first_sentence, second_sentence, truncation=True, padding=True)

# Unflatten the input values, expanding them to the shape [batch_size, num_choices, seq_length]
>>> for k, v in inputs.items():
...     inputs[k] = [v[i: i + num_choices] for i in range(0, len(v), num_choices)]
>>> inputs = dict(inputs.convert_to_tensors(tensor_type="pt"))
>>> outputs = model(**inputs)
>>> logits = outputs.logits
>>> list(logits.shape)
[1, 4]

NeuronModelForCausalLM

class optimum.neuron.NeuronModelForCausalLM

( model: Module config: PretrainedConfig model_path: Union generation_config: Optional = None )

Parameters

  • model (torch.nn.Module) — torch.nn.Module is the neuron decoder graph.
  • config (transformers.PretrainedConfig) — PretrainedConfig is the Model configuration class with all the parameters of the model.
  • model_path (Path) — The directory where the compiled artifacts for the model are stored. It can be a temporary directory if the model has never been saved locally before.
  • generation_config (transformers.GenerationConfig) — GenerationConfig holds the configuration for the model generation task.

Neuron model with a causal language modeling head for inference on Neuron devices.

This model inherits from ~neuron.modeling.NeuronDecoderModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving).

can_generate

( )

Returns True to validate the check made in GenerationMixin.generate().

forward

( input_ids: Tensor cache_ids: Tensor start_ids: Tensor = None return_dict: bool = True )

Parameters

  • input_ids (torch.LongTensor) — Indices of decoder input sequence tokens in the vocabulary, of shape (batch_size, sequence_length).
  • cache_ids (torch.LongTensor) — The indices at which the cached key and value for the current inputs need to be stored.
  • start_ids (torch.LongTensor) — The indices of the first tokens to be processed, deduced from the attention masks.

The NeuronModelForCausalLM forward method overrides the __call__ special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Example of text generation:

>>> from transformers import AutoTokenizer
>>> from optimum.neuron import NeuronModelForCausalLM
>>> import torch

>>> tokenizer = AutoTokenizer.from_pretrained("gpt2")
>>> model = NeuronModelForCausalLM.from_pretrained("gpt2", export=True)

>>> inputs = tokenizer("My favorite moment of the day is", return_tensors="pt")

>>> gen_tokens = model.generate(**inputs, do_sample=True, temperature=0.9, min_length=20, max_length=20)
>>> tokenizer.batch_decode(gen_tokens)

generate

( input_ids: Tensor attention_mask: Optional = None generation_config: Optional = None **kwargs ) torch.Tensor

Parameters

  • input_ids (torch.Tensor of shape (batch_size, sequence_length)) — The sequence used as a prompt for the generation.
  • attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices.
  • generation_config (~transformers.generation.GenerationConfig, optional) — The generation configuration to be used as base parametrization for the generation call. **kwargs passed to generate matching the attributes of generation_config will override them. If generation_config is not provided, the default will be used, which has the following loading priority: 1) from the generation_config.json model file, if it exists; 2) from the model configuration. Please note that unspecified parameters will inherit GenerationConfig’s default values, whose documentation should be checked to parameterize generation.

Returns

torch.Tensor

A torch.Tensor containing the generated tokens.

A streamlined generate() method overriding the transformers.GenerationMixin.generate() method.

This method uses the same logits processors/warpers and stopping criteria as the transformers library generate() method but restricts the generation to greedy search and sampling.

It does not support the advanced options of the transformers generate() method.

Please refer to https://huggingface.co/docs/transformers/en/main_classes/text_generation#transformers.GenerationMixin.generate for details on generation configuration.
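
For instance, generation can be driven by an explicit GenerationConfig (a minimal sketch; the checkpoint and configuration values are illustrative):

>>> from transformers import AutoTokenizer, GenerationConfig
>>> from optimum.neuron import NeuronModelForCausalLM

>>> tokenizer = AutoTokenizer.from_pretrained("gpt2")
>>> model = NeuronModelForCausalLM.from_pretrained("gpt2", export=True)

>>> # Greedy decoding capped at 30 tokens (values are illustrative).
>>> generation_config = GenerationConfig(max_length=30, do_sample=False)
>>> inputs = tokenizer("My favorite moment of the day is", return_tensors="pt")
>>> tokens = model.generate(
...     inputs.input_ids,
...     attention_mask=inputs.attention_mask,
...     generation_config=generation_config,
... )
>>> tokenizer.batch_decode(tokens, skip_special_tokens=True)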

generate_tokens

( input_ids: LongTensor eos_token_id: int pad_token_id: int logits_processor: LogitsProcessorList stopping_criteria: StoppingCriteriaList do_sample: bool logits_warper: Optional = None attention_mask: Optional = None **model_kwargs ) torch.LongTensor

Parameters

  • input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — The sequence used as a prompt for the generation.
  • eos_token_id (int) — The id of the end-of-sequence token.
  • pad_token_id (int) — The id of the padding token.
  • logits_processor (LogitsProcessorList) — An instance of LogitsProcessorList. List of instances of class derived from LogitsProcessor used to modify the prediction scores of the language modeling head applied at each generation step.
  • stopping_criteria (StoppingCriteriaList) — An instance of StoppingCriteriaList. List of instances of class derived from StoppingCriteria used to tell if the generation loop should stop.
  • do_sample (bool) — Whether to sample new tokens or simply take the one with the highest score.
  • logits_warper (LogitsProcessorList, optional) — An instance of LogitsProcessorList. List of instances of class derived from LogitsWarper used to warp the prediction score distribution of the language modeling head applied before multinomial sampling at each generation step.
  • attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices.
  • model_kwargs — Additional model specific kwargs will be forwarded to the forward function of the model.

Returns

torch.LongTensor

A torch.LongTensor containing the generated tokens.

Generate tokens using sampling or greedy search.

greedy_search

( input_ids: LongTensor eos_token_id: int pad_token_id: int logits_processor: LogitsProcessorList stopping_criteria: StoppingCriteriaList attention_mask: Optional = None **model_kwargs ) torch.LongTensor

Parameters

  • input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — The sequence used as a prompt for the generation.
  • eos_token_id (int) — The id of the end-of-sequence token.
  • pad_token_id (int) — The id of the padding token.
  • logits_processor (LogitsProcessorList) — An instance of LogitsProcessorList. List of instances of class derived from LogitsProcessor used to modify the prediction scores of the language modeling head applied at each generation step.
  • stopping_criteria (StoppingCriteriaList) — An instance of StoppingCriteriaList. List of instances of class derived from StoppingCriteria used to tell if the generation loop should stop.
  • attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices.
  • model_kwargs — Additional model specific kwargs will be forwarded to the forward function of the model.

Returns

torch.LongTensor

A torch.LongTensor containing the generated tokens.

This is a simplified version of the transformers GenerationMixin.greedy_search() method that is optimized for neuron inference.

Please refer to https://huggingface.co/docs/transformers/en/main_classes/text_generation#transformers.GenerationMixin.greedy_search.

sample

( input_ids: LongTensor eos_token_id: int pad_token_id: int logits_processor: LogitsProcessorList stopping_criteria: StoppingCriteriaList logits_warper: LogitsProcessorList attention_mask: Optional = None **model_kwargs ) torch.LongTensor

Parameters

  • input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — The sequence used as a prompt for the generation.
  • eos_token_id (int) — The id of the end-of-sequence token.
  • pad_token_id (int) — The id of the padding token.
  • logits_processor (LogitsProcessorList) — An instance of LogitsProcessorList. List of instances of class derived from LogitsProcessor used to modify the prediction scores of the language modeling head applied at each generation step.
  • stopping_criteria (StoppingCriteriaList) — An instance of StoppingCriteriaList. List of instances of class derived from StoppingCriteria used to tell if the generation loop should stop.
  • logits_warper (LogitsProcessorList) — An instance of LogitsProcessorList. List of instances of class derived from LogitsWarper used to warp the prediction score distribution of the language modeling head applied before multinomial sampling at each generation step.
  • attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices.
  • model_kwargs — Additional model specific kwargs will be forwarded to the forward function of the model.

Returns

torch.LongTensor

A torch.LongTensor containing the generated tokens.

This is a simplified version of the transformers GenerationMixin.sample() method that is optimized for neuron inference.

It generates sequences of token ids for models with a language modeling head using multinomial sampling and can be used for text-decoder, text-to-text, speech-to-text, and vision-to-text models.

Please refer to https://huggingface.co/docs/transformers/en/main_classes/text_generation#transformers.GenerationMixin.sample.

Stable Diffusion

NeuronStableDiffusionPipelineBase

class optimum.neuron.modeling_diffusion.NeuronStableDiffusionPipelineBase

( text_encoder: ScriptModule unet: ScriptModule vae_encoder: ScriptModule vae_decoder: ScriptModule config: Dict tokenizer: CLIPTokenizer scheduler: Union feature_extractor: Optional = None device_ids: Optional = None configs: Optional = None neuron_configs: Optional = None model_save_dir: Union = None model_and_config_save_paths: Optional = None )

NeuronStableDiffusionPipeline

class optimum.neuron.NeuronStableDiffusionPipeline

( text_encoder: ScriptModule unet: ScriptModule vae_encoder: ScriptModule vae_decoder: ScriptModule config: Dict tokenizer: CLIPTokenizer scheduler: Union feature_extractor: Optional = None device_ids: Optional = None configs: Optional = None neuron_configs: Optional = None model_save_dir: Union = None model_and_config_save_paths: Optional = None )
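
A minimal sketch of text-to-image generation with this pipeline, assuming compilation on the fly with static shapes (the checkpoint and shape values are illustrative):

>>> from optimum.neuron import NeuronStableDiffusionPipeline

>>> # Compile the pipeline with fixed shapes; batch_size, height and width must be set at export time.
>>> pipeline = NeuronStableDiffusionPipeline.from_pretrained(
...     "runwayml/stable-diffusion-v1-5",
...     export=True,
...     batch_size=1,
...     height=512,
...     width=512,
... )
>>> prompt = "a photo of an astronaut riding a horse on mars"
>>> image = pipeline(prompt).images[0]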