Exporting a PyTorch model to Neuron is as simple as:
optimum-cli export neuron \
--model bert-base-uncased \
--sequence_length 128 \
--batch_size 1 \
bert_neuron/
Check out the help for more options:
optimum-cli export neuron --help
AWS provides two generations of the Inferentia accelerator built for machine learning inference, offering higher throughput and lower latency at lower cost: inf2 (NeuronCore-v2) and inf1 (NeuronCore-v1).
In production environments, to deploy 🤗 Transformers models on Neuron devices, you need to compile your models and export them to a serialized format before inference. Through Ahead-Of-Time (AOT) compilation with the Neuron Compiler (neuronx-cc or neuron-cc), your models will be converted to serialized and optimized TorchScript modules.
NEFF: the Neuron Executable File Format, a binary executable format for Neuron devices.
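Since the compiled artifact is a serialized TorchScript module, it can also be inspected with plain torch.jit.load once the Neuron runtime ops are registered, although in practice you will usually go through the CLI or the NeuronModelForXXX classes shown below. A minimal sketch, assuming an inf2 instance with torch_neuronx installed and an export that produced bert_neuron/model.neuron:
>>> import torch
>>> import torch_neuronx  # importing this registers the Neuron ops needed to deserialize the module (use torch_neuron on inf1)

>>> # Hypothetical path: the artifact produced by the quick-start command above
>>> traced_model = torch.jit.load("bert_neuron/model.neuron")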
Although pre-compilation avoids overhead during inference, a traced Neuron module has some limitations: in particular, it is compiled for static input shapes, so inputs used at inference time must match the shapes given at compilation (see the input shape options below).
In this guide, we’ll show you how to export your models to serialized modules optimized for Neuron devices.
🤗 Optimum provides support for the Neuron export by leveraging configuration objects. These configuration objects come ready-made for a number of model architectures, and are designed to be easily extendable to other architectures.
To check the supported architectures, go to the configuration reference page.
To export a 🤗 Transformers model to Neuron, you’ll first need to install some extra dependencies:
For Inf2
pip install optimum[neuronx]
For Inf1
pip install optimum[neuron]
The Optimum Neuron export can be used through the Optimum command line:
optimum-cli export neuron --help
usage: optimum-cli export neuron [-h] -m MODEL [--task TASK] [--atol ATOL] [--cache_dir CACHE_DIR] [--trust-remote-code]
[--auto_cast {none,matmul,all}] [--auto_cast_type {bf16,fp16,mixed,tf32}]
[--disable-fast-relayout] [--disable-fallback] [--dynamic-batch-size] [--batch_size BATCH_SIZE]
[--sequence_length SEQUENCE_LENGTH] [--num_choices NUM_CHOICES]
output
optional arguments:
-h, --help show this help message and exit
Required arguments:
-m MODEL, --model MODEL
Model ID on huggingface.co or path on disk to load model from.
output Path indicating the directory where to store generated Neuron compiled TorchScript model.
Optional arguments:
--task TASK The task to export the model for. If not specified, the task will be auto-inferred based on the model.
Available tasks depend on the model, but are among: ['conversational', 'feature-extraction', 'fill-
mask', 'text-generation', 'text2text-generation', 'text-classification', 'token-classification',
'multiple-choice', 'object-detection', 'question-answering', 'image-classification', 'image-
segmentation', 'masked-im', 'semantic-segmentation', 'automatic-speech-recognition', 'audio-
classification', 'audio-frame-classification', 'audio-xvector', 'image-to-text', 'stable-diffusion',
'zero-shot-image-classification', 'zero-shot-object-detection'].
--atol ATOL If specified, the absolute difference tolerance when validating the model. Otherwise, the default atol
for the model will be used.
--cache_dir CACHE_DIR
Path indicating where to store cache.
--trust-remote-code Allow to use custom code for the modeling hosted in the model repository. This option should only be set
for repositories you trust and in which you have read the code, as it will execute on your local machine
arbitrary code present in the model repository.
--auto_cast {none,matmul,all}
Whether to cast operations from FP32 to lower precision to speed up the inference. Can be `"none"`,
`"matmul"` or `"all"`.
--auto_cast_type {bf16,fp16,mixed,tf32}
The data type to cast FP32 operations to when auto-cast mode is enabled. Can be `"bf16"`, `"fp16"`,
`"mixed"` or `"tf32"`.
--disable-fast-relayout
Whether to disable fast relayout optimization which improves performance by using the matrix multiplier
for tensor transpose. (inf1 only)
--disable-fallback Whether to disable CPU partitioning to force operations to Neuron. Defaults to `False`, as without
fallback, there could be some compilation failures or performance problems. (inf1 only)
--dynamic-batch-size Enable dynamic batch size for neuron compiled model. If this option is enabled, the input batch size can
be dynamic during the inference, but it comes with a potential tradeoff in terms of latency.
Input shapes:
--batch_size BATCH_SIZE
Batch size that the Neuron-cc compiler exported model will be able to take as input.
--sequence_length SEQUENCE_LENGTH
Sequence length that the Neuron-cc compiler exported model will be able to take as input.
--num_choices NUM_CHOICES
Only for the multiple-choice task. Num choices that the Neuron-cc compiler exported model will be able
to take as input.
In the last section, you can see some input shape options to pass for exporting a static Neuron model, meaning that the exact input shapes given during compilation must be used during inference. If you are going to use variable-size inputs, you can pad your inputs to the shape used for compilation as a workaround (a sketch follows below). If you want the batch size to be dynamic, you can pass --dynamic-batch-size to enable dynamic batching, which means that you will be able to use inputs with different batch sizes during inference, but it comes with a potential tradeoff in terms of latency.
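For instance, with the checkpoint exported in the example below (compiled with --batch_size 1 --sequence_length 16), every input can be padded and truncated to the static sequence length at tokenization time. A minimal sketch of this padding workaround:
>>> from transformers import AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased-distilled-squad")
>>> # Pad/truncate every input to the static sequence length used at compilation (16 here)
>>> inputs = tokenizer(
...     "Where do I live?",
...     "I live in Paris.",
...     padding="max_length",
...     truncation=True,
...     max_length=16,
...     return_tensors="pt",
... )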
Exporting a checkpoint can be done as follows:
optimum-cli export neuron --model distilbert-base-uncased-distilled-squad --batch_size 1 --sequence_length 16 distilbert_base_uncased_squad_neuron/
You should see the following logs, which validate the model on Neuron devices by comparing it with the PyTorch model on CPU:
Validating Neuron model...
-[✓] Neuron model output names match reference model (last_hidden_state)
- Validating Neuron Model output "last_hidden_state":
-[✓] (1, 16, 32) matches (1, 16, 32)
-[✓] all values close (atol: 0.0001)
The Neuronx export succeeded and the exported model was saved at: distilbert_base_uncased_squad_neuron/
This exports a Neuron-compiled TorchScript module of the checkpoint defined by the --model argument.
As you can see, the task was automatically detected. This was possible because the model was on the Hub. For local models, providing the --task argument is needed, or the export will default to the model architecture without any task-specific head:
optimum-cli export neuron --model local_path --task question-answering --batch_size 1 --sequence_length 16 --dynamic-batch-size distilbert_base_uncased_squad_neuron/
Note that providing the --task argument for a model on the Hub will disable the automatic task detection. The resulting model.neuron file can then be loaded and run on Neuron devices, as sketched below.
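For instance, the question-answering checkpoint exported above can be reloaded from its output directory with the matching class. A minimal sketch, assuming the tokenizer files were saved alongside the compiled model (otherwise, load the tokenizer from the original checkpoint):
>>> from transformers import AutoTokenizer
>>> from optimum.neuron import NeuronModelForQuestionAnswering

>>> # Directory produced by the `optimum-cli export neuron` command above
>>> model = NeuronModelForQuestionAnswering.from_pretrained("distilbert_base_uncased_squad_neuron/")
>>> tokenizer = AutoTokenizer.from_pretrained("distilbert_base_uncased_squad_neuron/")
>>> # Inputs padded to the static sequence length used at compilation (16)
>>> inputs = tokenizer("Where do I live?", "I live in Paris.", padding="max_length", truncation=True, max_length=16, return_tensors="pt")
>>> outputs = model(**inputs)
>>> answer_start = outputs.start_logits.argmax()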
NeuronModelForXXX APIs
You can also export your models to Neuron format with the optimum.neuron.NeuronModelForXXX model classes. Here is an example:
>>> from optimum.neuron import NeuronModelForSequenceClassification
>>> input_shapes = {"batch_size": 1, "sequence_length": 64} # mandatory shapes
>>> model = NeuronModelForSequenceClassification.from_pretrained(
... "distilbert-base-uncased-finetuned-sst-2-english", export=True, **input_shapes
... )
# Save the model
>>> model.save_pretrained("./distilbert-base-uncased-finetuned-sst-2-english_neuron/")
And the exported model can be used for inference directly with the NeuronModelForXXX class:
>>> from transformers import AutoTokenizer
>>> from optimum.neuron import NeuronModelForSequenceClassification
>>> tokenizer = AutoTokenizer.from_pretrained("./distilbert-base-uncased-finetuned-sst-2-english_neuron/")
>>> model = NeuronModelForSequenceClassification.from_pretrained("./distilbert-base-uncased-finetuned-sst-2-english_neuron/")
>>> inputs = tokenizer("Hamilton is considered to be the best musical of human history.", return_tensors="pt")
>>> logits = model(**inputs).logits
>>> print(model.config.id2label[logits.argmax().item()])
'POSITIVE'
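You can also run the exported model through a pipeline. A minimal sketch, assuming your optimum-neuron version ships the pipeline helper (treat the import path as an assumption):
>>> from optimum.neuron import pipeline

>>> # Load the exported directory with the task's pipeline (hypothetical helper usage)
>>> classifier = pipeline("text-classification", model="./distilbert-base-uncased-finetuned-sst-2-english_neuron/")
>>> classifier("Hamilton is considered to be the best musical of human history.")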
Specifying a --task should not be necessary in most cases when exporting from a model on the Hugging Face Hub.
However, in case you need to check which tasks the Neuron export supports for a given model architecture, we’ve got you covered. First, you can check the list of supported tasks here.
For each model architecture, you can find the list of supported tasks via the ~exporters.tasks.TasksManager. For example, for DistilBERT, the Neuron export supports the following tasks:
>>> from optimum.exporters.tasks import TasksManager
>>> from optimum.exporters.neuron.model_configs import * # Register neuron specific configs to the TasksManager
>>> distilbert_tasks = list(TasksManager.get_supported_tasks_for_model_type("distilbert", "neuron").keys())
>>> print(distilbert_tasks)
['feature-extraction', 'fill-mask', 'multiple-choice', 'question-answering', 'text-classification', 'token-classification']
You can then pass one of these tasks to the --task argument in the optimum-cli export neuron command, as mentioned above.
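If you want to reproduce the automatic task detection used by the CLI, the TasksManager can also infer the task directly from a checkpoint on the Hub. A minimal sketch (the method below is part of the optimum.exporters API; treat the exact name as an assumption if your version differs):
>>> from optimum.exporters.tasks import TasksManager

>>> # Infer the task that `optimum-cli export neuron` would auto-detect for this Hub model
>>> TasksManager.infer_task_from_model("distilbert-base-uncased-distilled-squad")
'question-answering'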