ExecuTorch export provides a flexible configuration mechanism through dynamic registration, enabling users to have complete control over the export process. The configuration system is divided into task configurations and recipe configurations, each addressing specific aspects of the export pipeline.
Task configurations determine how a Hugging Face model should be loaded and prepared for export, tailored to specific tasks.
For instance, when exporting a model for a text generation task, the provided configuration utilizes static caching and SDPA (Scaled Dot-Product Attention) for inference optimization.
By leveraging task configurations, users can ensure that their models are appropriately prepared for efficient execution on the ExecuTorch backend.
Dynamically discovers and imports all task modules within the optimum.exporters.executorch.tasks package.
Ensures that tasks under the ./tasks directory are dynamically loaded without requiring manual imports.
Notes:
New tasks must be added to the ./tasks directory to be discovered and used by main_export. Failure to do so will prevent dynamic discovery and registration.
Tasks must also use the @register_task decorator to be properly registered in the task_registry.
register_task( task_name ) → Callable
Decorator to register a task under a specific name.
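For illustration, registering a new task might look roughly like the following sketch. The decorator's import path, the task name my-text-classification, and the chosen auto class are assumptions made for this example, not part of the documented API:

```python
# Hypothetical example of registering a custom task. The decorator's import
# path and the task/model names below are assumptions for illustration only.
from transformers import AutoModelForSequenceClassification

from optimum.exporters.executorch import register_task  # assumed import path


@register_task("my-text-classification")  # key stored in task_registry
def load_text_classification_model(model_name_or_path: str, **kwargs):
    # Return the eager Hugging Face model that a recipe will later lower.
    return AutoModelForSequenceClassification.from_pretrained(model_name_or_path, **kwargs)
```

As the notes above state, such a module only takes effect if it lives under the ./tasks directory, where it is discovered and imported before main_export looks up the task name.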
( model_name_or_path: str, **kwargs ) → transformers.PreTrainedModel
Parameters
model_name_or_path (str) — The model ID on the Hugging Face Hub or a path to a local model folder, e.g. model_name_or_path="meta-llama/Llama-3.2-1B" or model_name_or_path="/path/to/model_folder".
Returns
transformers.PreTrainedModel
An instance of a model subclass (e.g., Llama, Gemma) with the configuration for exporting and lowering to ExecuTorch.
Loads a causal language model for text generation and registers it under the task ‘text-generation’ using Hugging Face’s AutoModelForCausalLM.
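To make the static-cache and SDPA setup mentioned earlier concrete, the body of such a loader could look roughly like this sketch (the function name is hypothetical, and the actual task implementation may configure additional options such as cache sizes):

```python
from transformers import AutoModelForCausalLM


def load_causal_lm_for_export(model_name_or_path: str, **kwargs):
    # Prefer PyTorch's scaled dot-product attention kernels for inference.
    model = AutoModelForCausalLM.from_pretrained(
        model_name_or_path,
        attn_implementation="sdpa",
        **kwargs,
    )
    # Ask generate() to use a static KV cache so the exported graph can rely
    # on fixed-size buffers instead of a dynamically growing cache.
    model.generation_config.cache_implementation = "static"
    return model
```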
Recipe configurations control the specifics of lowering an eager PyTorch module to the ExecuTorch backend, allowing users to customize low-level export details such as backend delegation.
Dynamically discovers and imports all recipe modules within the optimum.exporters.executorch.recipes package.
Ensures that recipes under the ./recipes directory are dynamically loaded without requiring manual imports.
Notes:
New recipes must be added to the ./recipes directory to be discovered and used by main_export. Failure to do so will prevent dynamic discovery and registration.
Recipes must also use the @register_recipe decorator to be properly registered in the recipe_registry.
register_recipe( recipe_name ) → Callable
Decorator to register, under a specific name, a recipe for exporting and lowering a model to ExecuTorch.
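Mirroring task registration, a custom recipe would wrap an export function with this decorator. The sketch below only illustrates the expected shape; the import path and recipe name are assumptions, and a possible lowering body is outlined after the XNNPACK recipe entry below:

```python
from optimum.exporters.executorch import register_recipe  # assumed import path


@register_recipe("my-backend")  # key stored in recipe_registry
def export_with_my_backend(model, task: str, **kwargs):
    # Lower `model` to an ExecuTorch program for the chosen backend and
    # return it; the XNNPACK sketch below shows one possible body.
    raise NotImplementedError
```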
( model: typing.Union[transformers.modeling_utils.PreTrainedModel, transformers.integrations.executorch.TorchExportableModuleWithStaticCache], task: str, **kwargs ) → ExecuTorchProgram
Parameters
model (PreTrainedModel or TorchExportableModuleWithStaticCache) — The eager model to export and lower.
task (str) — The task the model was loaded for, e.g. "text-generation".
Returns
ExecuTorchProgram
The exported and optimized program for ExecuTorch.
Export a PyTorch model to ExecuTorch with delegation to the XNNPACK backend.
This function also writes the metadata required by the ExecuTorch runtime into the model.
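Internally, a recipe along these lines typically captures the model with torch.export and delegates the resulting graph to XNNPACK. The following is a minimal sketch of that flow using public ExecuTorch APIs; example-input construction, dynamic shapes, and the runtime metadata mentioned above are omitted, so treat it as an outline rather than the actual implementation:

```python
import torch
from executorch.backends.xnnpack.partition.xnnpack_partitioner import XnnpackPartitioner
from executorch.exir import to_edge_transform_and_lower


def lower_to_xnnpack(model: torch.nn.Module, example_inputs: tuple):
    # 1. Capture the eager module as an ExportedProgram.
    exported_program = torch.export.export(model, example_inputs)

    # 2. Convert to the Edge dialect and delegate supported subgraphs to XNNPACK.
    edge_program = to_edge_transform_and_lower(
        exported_program,
        partitioner=[XnnpackPartitioner()],
    )

    # 3. Serialize to an ExecuTorch program; its .buffer can be written to a .pte file.
    return edge_program.to_executorch()
```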
The combination of task and recipe configurations ensures that users can customize both the high-level task setup and the low-level export details to suit their deployment requirements.
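Putting the two together, an end-to-end export might be driven through main_export roughly as follows; the parameter names used here (task, recipe, output_dir) are assumptions based on the descriptions above, so check the actual signature before relying on them:

```python
from optimum.exporters.executorch import main_export  # assumed import path

# Hypothetical end-to-end invocation: the task controls how the model is
# loaded, the recipe controls how it is lowered and delegated.
main_export(
    model_name_or_path="meta-llama/Llama-3.2-1B",
    task="text-generation",
    recipe="xnnpack",
    output_dir="llama_executorch",
)
```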