Inferentia Exporter

You can export a PyTorch model to a Neuron-compiled model with 🤗 Optimum to run inference on AWS Inferentia 1 and Inferentia 2. There is an export function for each generation of the Inferentia accelerator: `export_neuron` for INF1 and `export_neuronx` for INF2. In most cases you can simply call `export`, which selects the proper exporting function according to your environment. In addition, you can check that the exported model's outputs are valid via `validate_model_outputs`, which compares the compiled model running on Neuron devices against the PyTorch model running on CPU.
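The generation-aware dispatch described above can be sketched as follows. This is a hypothetical illustration of the selection logic, not the actual Optimum implementation; the helper names `is_neuron_available`, `is_neuronx_available`, and `select_exporter` are invented for this sketch. It relies only on the fact that INF1 instances ship the `torch_neuron` package while INF2 instances ship `torch_neuronx`.

```python
# Hypothetical sketch: how a generic `export` entry point could pick between
# the INF1 and INF2 exporters. The real optimum.exporters.neuron code differs;
# this only illustrates the environment-based selection described in the text.
import importlib.util


def is_neuron_available() -> bool:
    # INF1 instances ship the torch-neuron package.
    return importlib.util.find_spec("torch_neuron") is not None


def is_neuronx_available() -> bool:
    # INF2 (and Trainium) instances ship the torch-neuronx package.
    return importlib.util.find_spec("torch_neuronx") is not None


def select_exporter() -> str:
    """Return the name of the exporter matching the current environment."""
    if is_neuronx_available():
        return "export_neuronx"  # INF2 path
    if is_neuron_available():
        return "export_neuron"   # INF1 path
    raise RuntimeError(
        "No Neuron runtime found; run on an INF1 or INF2 instance."
    )
```

On an INF2 instance `select_exporter()` would return `"export_neuronx"`; on a machine with no Neuron runtime installed it raises instead of silently exporting an incompatible artifact.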